Quantification of Content Validity in Some Psychological Tests

Document Type: Original Article

Author

Assistant Professor of Educational Psychology, Faculty of Education, Qena, South Valley University

Abstract

This study aimed to use five statistical indices to quantify content validity (the content validity ratio, the Kappa coefficient (k), the average congruency percentage, the content validity index, and the rWG index) for three psychological instruments (a mental defeat scale, a boredom scale, and a verbalizer-visualizer cognitive style questionnaire). A panel of 10 content experts was asked to rate each instrument item in terms of relevance, clarity, simplicity, and ambiguity on a four-point ordinal scale.
 
The analysis revealed that:
At the item level, the rWG index produced the minimum value for the content validity criteria, followed by the content validity ratio, then the Kappa coefficient (k), and finally the content validity index, which produced the maximum value, for all the instruments used.
At the scale level, the rWG index produced the minimum value for the content validity criteria, followed by the content validity ratio, then the Kappa coefficient (k), then the content validity index, and finally the average congruency percentage, which produced the maximum value, for all the instruments used.
The content validity ratio, Kappa coefficient (k), average congruency percentage, content validity index, and rWG index were not mutually consistent for any instrument at either the item level or the scale level.
 
Generally, the quantitative methods used to confirm the content validity of the three new instruments increase the amount of information available for examining their psychometric properties and increase confidence in the content validity of the instruments.
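The five indices named in the abstract follow standard formulas from the content validity literature. A minimal sketch (not the authors' code) of how each could be computed for expert ratings on the four-point scale described above, where a rating of 3 or 4 counts as agreement that an item is relevant:

```python
# Hypothetical illustration of the five content-validity indices, assuming
# a panel of experts rates each item on a 4-point scale (3 or 4 = relevant).
from math import comb

def cvr(ratings):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    n = len(ratings)
    n_e = sum(1 for r in ratings if r >= 3)  # experts rating item relevant
    return (n_e - n / 2) / (n / 2)

def i_cvi(ratings):
    """Item-level content validity index: proportion rating 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def kappa_star(ratings):
    """Modified kappa: I-CVI corrected for chance agreement."""
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)
    p_c = comb(n, a) * 0.5 ** n  # probability of chance agreement
    return (i_cvi(ratings) - p_c) / (1 - p_c)

def acp(items):
    """Average congruency percentage: mean item agreement x 100."""
    return 100 * sum(i_cvi(r) for r in items) / len(items)

def r_wg(ratings, n_categories=4):
    """James et al. rWG: 1 - observed variance / uniform null variance."""
    n = len(ratings)
    mean = sum(ratings) / n
    s2 = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    sigma_eu2 = (n_categories ** 2 - 1) / 12  # variance of uniform null
    return 1 - s2 / sigma_eu2

# Example: 9 of 10 experts rate a hypothetical item as relevant.
ratings = [4, 4, 3, 4, 3, 4, 4, 3, 4, 2]
print(round(cvr(ratings), 2))    # 0.8
print(round(i_cvi(ratings), 2))  # 0.9
```

With 10 raters, an item-level CVR must reach the critical value for the panel size (per Ayre and Scally's recalculation of Lawshe's table) to be retained; the differing null models of the indices are one reason the study finds they rank items inconsistently.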
