Volume 17, Issue 55 (2024) | JMED 2024, 17(55): 108-119

Ethics code: IR.SBMU.RETECH.REC.1399.1222


Sajjadi S S, Shomoossi N, Shabani E, Khazaei Feizabad A, Karimkhanlooei G. Dimensionality, discrimination power and difficulty of English test items: the case of graduate exam for healthcare applicants. JMED 2024; 17(55): 108-119
URL: http://edujournal.zums.ac.ir/article-1-1869-en.html
Affiliation: Zahedan University of Medical Sciences, Zahedan, Iran
Abstract:
Background & Objective: National university entrance exams, administered nationwide by the Iranian Center for the Measurement of Medical Education, include English as a vital section. This study aimed to assess the dimensionality, discrimination power, and difficulty of the English test items in this graduate entrance exam.
Material & Methods: This quantitative study examined 160 English test items administered to 41,633 test-takers applying for graduate studies in Iranian universities of medical sciences in 2021, and reported the characteristics of test-takers over three successive years (2019, 2020, and 2021). NOHARM software (version 4.0) was used to analyze the data, examining the dimensionality of the tests under a two-parameter model.
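For context, a two-parameter (2PL) item response model expresses the probability that an examinee of ability θ answers item i correctly in terms of the item's discrimination (a_i) and difficulty (b_i). A standard logistic formulation is shown below as a general sketch; NOHARM itself fits a normal-ogive model, so this is not the software's exact parameterization:

$$ P_i(\theta) = \frac{1}{1 + \exp\left[-a_i(\theta - b_i)\right]} $$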
Results: Overall, female participants outnumbered male participants, with a similar pattern among admitted applicants (70% female vs. 30% male). A significant positive correlation was found between participants’ Grade Point Average and English test scores (p < 0.05). For 2021, the results of four administration sessions, each with high reliability (0.92, 0.88, 0.90, and 0.91), were analyzed separately. Two item parameters (i.e., difficulty and discrimination) fitted the model, while the guessing parameter did not. The English tests proved to be “difficult”, with either “high” or “very high” discrimination power; no “easy” or “very easy” items were found, and no items showed “no” or “very low” discrimination power.
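To illustrate how such verbal labels can be assigned, the sketch below (a hypothetical Python illustration, not the authors' NOHARM workflow) bins estimated 2PL item parameters into difficulty and discrimination bands. The discrimination cut-offs follow the conventions popularized by Baker (2001); the difficulty cut-offs are illustrative assumptions, since the abstract does not report the exact thresholds used.

# Minimal sketch: map hypothetical 2PL item parameter estimates to verbal bands.
# Cut-offs are illustrative (discrimination bands per Baker, 2001), not the study's own.

def label_difficulty(b):
    """Map a 2PL difficulty (b) estimate to a verbal band (illustrative cut-offs)."""
    if b < -2.0:
        return "very easy"
    elif b < -0.5:
        return "easy"
    elif b <= 0.5:
        return "moderate"
    elif b <= 2.0:
        return "difficult"
    else:
        return "very difficult"

def label_discrimination(a):
    """Map a 2PL discrimination (a) estimate to Baker-style verbal bands."""
    if a <= 0.0:
        return "none"
    elif a < 0.35:
        return "very low"
    elif a < 0.65:
        return "low"
    elif a < 1.35:
        return "moderate"
    elif a < 1.70:
        return "high"
    else:
        return "very high"

# Example with hypothetical (a, b) estimates for three items.
items = [(1.8, 1.2), (1.5, 0.9), (2.1, 1.6)]
for i, (a, b) in enumerate(items, start=1):
    print(f"Item {i}: difficulty = {label_difficulty(b)}, "
          f"discrimination = {label_discrimination(a)}")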
Conclusion: Overall, the tests functioned well; however, more research is required to rigorously evaluate the exams. Improvements concerning the social and long-term effects of these tests are suggested.
Article Type: Original Research | Subject: Education
Received: 2023/02/25 | Accepted: 2024/02/20 | Published: 2024/09/10



Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.