Volume 17, Issue 53 (2024) | JMED 2024, 17(53): 63-71

Ethics code: ODC-2021-14

Elsawaay S, Mhanni A, Qutieshat A. Examining dental students' performance in multiple-choice questions and pre-clinical practical exams in fixed prosthodontics: Gender differences, stress, and confidence. JMED 2024; 17(53): 63-71
URL: http://edujournal.zums.ac.ir/article-1-1901-en.html
Oman Dental College
Abstract
Background & Objective:
In dental education, understanding optimal assessment methods and factors such as stress and confidence is essential. This research assessed second-year dental students' performance in fixed prosthodontics using multiple-choice questions (MCQs) and pre-clinical practical exams, examining the effects of gender, stress, and confidence.
Materials & Methods: Using a quasi-experimental design, 495 students from a single faculty were assessed. Selected via convenience sampling, they completed MCQ and practical exams in fixed prosthodontics. An expert-reviewed questionnaire gauged their stress and confidence. Data were analyzed using descriptive statistics, t-tests, and Pearson's correlation. Additionally, a balanced subset of 176 students (88 males and 88 females) was chosen for gender-based analysis.
Results: Findings indicate statistical parity between MCQ and practical exam performances (p>0.05). Females slightly outperformed males in MCQs, while males scored slightly higher in practicals, without reaching statistical significance (p>0.05). Stress correlated with practical exam outcomes (r=0.34, p=0.001), and confidence with MCQ scores (r=0.41, p<0.0001).
Conclusion: The research underscores the near-equivalence of MCQs and practical exams for assessing students in fixed prosthodontics. Recognizing the roles of stress and confidence in assessment offers insights for balanced evaluations. Dental faculties should integrate these findings, and future work should focus on validating assessment tools to enrich learning.
 
Introduction
The quality of education in medical and dental fields plays a vital role in shaping the future of healthcare. In a rapidly evolving world, it is essential to continuously evaluate and improve the educational experiences of students in these fields. Among the key aspects of dental education are the assessment methods employed to measure students' progress and determine their readiness to enter the professional world (1, 2). A balanced approach that considers various aspects of learning and skill development is essential to ensure the best outcomes for students and ultimately, the patients they will serve (3, 4).
Assessing the quality of medical and dental education is of paramount importance for determining the effectiveness of educational programs. Evaluating clinical skill acquisition in the cognitive and psychomotor domains of dental students plays a crucial role in assessing teaching methodologies, lesson content, student motivation, and their ability to succeed while also offering valuable feedback on their performance (5, 6).
The evaluation process is an ongoing endeavor, taking into account student achievements, learning progress, and necessary modifications to achieve educational objectives (7). Various methods can be employed to assess the cognitive domain, including multiple-choice questions (MCQs), key feature questions, self- and cohort evaluations, and free response examinations such as long essays, short answers, and modified essays (8).
MCQs are frequently employed in evaluating undergraduate medical and dental students' knowledge. These assessments must be valid, reliable, and easily understood by students. Although well-designed MCQs excel at assessing knowledge and factual memory, they are not as effective in gauging students' problem-solving abilities (8, 9). Moreover, the development of high-quality MCQs presents a challenge and necessitates expertise (10).
Assessments used in qualification and in-training exams, such as paper and pencil tests, primarily evaluate cognitive abilities at lower taxonomic levels. This is due to the inherent complexities in administering exams that involve patients, whether simulated or real (11).
As a result, multiple-choice examinations may not offer a comprehensive measure of clinical competence (12, 13). However, clinical performance is underpinned by prior knowledge, and assessing students' understanding of the rationale behind different procedural approaches is crucial, particularly in large cohorts (14). MCQs provide an efficient way to evaluate this theoretical knowledge, which forms the basis for clinical competence. It is essential to recognize that knowledge is only one aspect of clinical competence, and other factors must be considered in the evaluation process (15).
The primary objective of training programs is to produce competent practitioners (6, 16). Thus, dental schools must ensure that their graduates are educated and evaluated according to the intended learning objectives outlined in the curriculum. Students should be provided with ample opportunities in each session to apply their knowledge in practice. Assessments are conducted post-training using a checklist developed with the guidance of teaching staff members. Although creating a checklist demands time and effort, it is crucial for valid and reliable evaluation of clinical performance (17). Additionally, there is evidence suggesting that using the average scores from two examiners for dental students can minimize errors and subjectivity in clinical exams (18, 19).
Understanding the role of stress and confidence in the assessment process is also essential, as these factors can significantly influence a student's performance (20). By examining the relationships between stress, confidence, and academic performance in different exam types, educators can develop strategies to help students better manage their stress and improve their confidence, ultimately enhancing the quality of dental education (21).
This study aimed to evaluate and compare the performance of second-year dental students in MCQs and pre-clinical practical examinations in fixed prosthodontics, analyze the correlation between the two assessment methods, examine gender differences in performance, and explore the impact of stress and confidence on exam outcomes. The null hypothesis proposes that there are no discernible differences in student performance between MCQs and practical exams and that any performance variations across genders are negligible. Furthermore, it postulates that stress and confidence levels do not have a measurable effect on students' performance in these exams. By examining these factors, the study hopes to provide insights into effective assessment methods for dental education and potential areas for improvement.

Materials & Methods
Design and setting(s)
This research used a quasi-experimental design and was conducted with the appropriate ethical approval (Ref: ODC-2021-14) in one dental faculty during the academic year September 2021 to September 2022.
Participants and sampling
A total of 495 second-year dental students, including 88 males and 407 females, were assessed in this study. The participants were selected through convenience sampling from one dental faculty. The study's inclusion criteria were second-year dental students who had completed the Fixed Prosthodontic course. To compare exam performance based on gender and minimize potential biases, the study aimed to achieve equal sample sizes for both male and female students. A random sample reduction technique, applied in previous studies across various domains, was employed (23-27). The number of female students was randomly reduced to equalize the number of students in each gender category, resulting in a final count of N=88 for each gender group. Written informed consent was obtained from all students prior to their participation in the study.
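For illustration only, this kind of random sample reduction can be sketched in a few lines. The study performed this step in SPSS; the dataframe, file name, and column names below are assumptions, not the authors' actual materials.

```python
import pandas as pd

# Hypothetical cohort table; file and column names are illustrative.
students = pd.read_csv("cohort.csv")  # columns: student_id, gender, mcq, practical

males = students[students["gender"] == "M"]    # n = 88 in this study
females = students[students["gender"] == "F"]  # n = 407 in this study

# Randomly reduce the larger group to the size of the smaller one (n = 88 each).
# random_state is fixed only so the illustration is reproducible.
females_reduced = females.sample(n=len(males), random_state=1)

balanced = pd.concat([males, females_reduced], ignore_index=True)  # n = 176
```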
Tools/Instruments
To assess students' knowledge, a set of 50 multiple-choice questions (MCQs) was administered. Correct responses were awarded one point and incorrect responses zero points, giving a scoring range of 0 to 50. For the practical exam, students were tasked with preparing artificial anterior central incisors to receive all-ceramic crowns within 45 minutes. The examiners evaluated students' performance using the glance and grade assessment method (22). Each student's prepared tooth was assessed by two examiners, and the average score was recorded.
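A trivial sketch of this marking scheme (MCQ total as the count of correct answers; practical mark as the mean of two examiners' glance-and-grade scores), with all function and variable names illustrative:

```python
def mcq_total(answers: list[bool]) -> int:
    """One point per correct response, zero otherwise (range 0-50 for 50 items)."""
    return sum(answers)

def practical_mark(examiner_a: float, examiner_b: float) -> float:
    """Mean of the two examiners' glance-and-grade scores (0-20 scale)."""
    return (examiner_a + examiner_b) / 2

assert mcq_total([True] * 38 + [False] * 12) == 38
assert practical_mark(13, 15) == 14.0
```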
Data collection methods
Students' scores on MCQs and preclinical practical exams were collected. The validity and reliability of the exams were established through expert reviews and pilot testing. For the randomly reduced sample, student confidence and stress levels while attempting both exams were determined using a 5-point Likert scale. Participants were asked to rate their confidence in their ability to pass the exams and their stress levels associated with the possibility of failing the exams.
Data analysis
The practical and MCQ exam scores, as well as the confidence and stress ratings for the randomly reduced sample, were analyzed using SPSS version 22. Descriptive statistics were used to summarize the data, while inferential statistics, namely paired t-tests and independent t-tests, were used to compare results among groups and determine whether significant differences existed. A p-value of <0.05 was considered statistically significant. Data from the reduced sample were then analyzed for potential differences in exam performance, confidence, and stress levels between male and female students, and Pearson's correlation coefficient was used to examine the relationships between stress, confidence, and exam performance.
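The analyses were run in SPSS 22. As a rough SciPy equivalent of the main tests, the following minimal sketch uses synthetic stand-in data matching the reported means and SDs; the arrays, gender split, and sample alignment are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real scores (0-50 scale) and Likert ratings (1-5).
mcq = rng.normal(29.00, 9.08, 492).clip(0, 50)
practical = rng.normal(28.97, 10.93, 492).clip(0, 50)
stress = rng.integers(1, 6, 176)       # 5-point Likert, reduced sample
confidence = rng.integers(1, 6, 176)

# Paired t-test: MCQ vs practical scores (both on the 0-50 scale) for the same students.
t_paired, p_paired = stats.ttest_rel(mcq, practical)

# Independent t-test: e.g., male vs female MCQ scores in the balanced subset
# (the slice boundaries here are purely illustrative).
t_ind, p_ind = stats.ttest_ind(mcq[:88], mcq[88:176])

# Pearson correlations: stress/confidence ratings against exam performance.
r_sp, p_sp = stats.pearsonr(stress, practical[:176])
r_cm, p_cm = stats.pearsonr(confidence, mcq[:176])
```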
Additionally, to compare the preclinical practical exam scores with the MCQ scores, the practical scores (maximum 20) were rescaled to a maximum of 50, matching the MCQ scale. This was achieved by multiplying each practical exam score by a conversion factor, calculated as the desired maximum score (50) divided by the original maximum score (20), i.e., 2.5.
The formula used for the conversion was:
converted score = original score × conversion factor = original score × 2.5
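As a worked example, a practical score of 14 out of 20 converts to 14 × 2.5 = 35 out of 50. A one-line sketch of this rescaling (function name illustrative):

```python
def convert_practical(score: float, original_max: float = 20, target_max: float = 50) -> float:
    """Rescale a practical exam score from the 0-20 range to the 0-50 MCQ range."""
    return score * (target_max / original_max)  # conversion factor = 50 / 20 = 2.5

assert convert_practical(14) == 35.0
assert convert_practical(20) == 50.0
```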
This comprehensive methodology allowed for a thorough analysis of student performance across both assessment methods, as well as an understanding of the influence of confidence and stress levels on exam outcomes within the randomly reduced sample. By evaluating these factors, dental faculties can make informed decisions about how best to assess their students and ensure the highest quality of dental education. The study procedures and stages are visually summarized in Figure 1.

Figure 1. Flowchart outlining the stages of the quasi-experimental study on the performance of dental students in multiple-choice and pre-clinical practical exams. The chart starts with the recruitment of participants and ends with the statistical analysis of the data
Results
Table 1 presents the modified Angoff strategy's guidelines for rating student achievement and establishing standards (28). Based on professional judgment in a formal setting, scores below 60% were classified as poor, between 60% and 79% as medium, and 80% or above as good. A majority of the 495 second-year dental students scored in the poor band on the MCQ exam (54.3%), and 43.9% did so on the practical exam; only a minority reached the good band on either exam.
Table 1. Standard setting of the MCQ and preclinical practical exams
Classification | Assessment mark | MCQ | Practical
Desirable | Good (80-100%) | 15.2% | 18.7%
Desirable | Medium (60-79%) | 30.5% | 37.4%
Undesirable | Poor (<60%) | 54.3% | 43.9%
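A minimal sketch of the band assignment implied by Table 1 (thresholds from the modified Angoff standard setting; the function name is illustrative):

```python
def performance_band(percentage: float) -> str:
    """Map a percentage score to the performance band used in Table 1."""
    if percentage >= 80:
        return "Good"    # desirable
    if percentage >= 60:
        return "Medium"  # desirable
    return "Poor"        # undesirable

assert performance_band(85) == "Good"
assert performance_band(65) == "Medium"
assert performance_band(59) == "Poor"
```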

The MCQ scores for 492 students followed a normal distribution with a mean of 29.00 and a standard deviation of 9.08. The distribution of scores indicates that the questions were varied and targeted different levels of student ability. Preclinical scores likewise followed a normal distribution with a mean of 28.97 and a standard deviation of 10.93 (Figure 2).
The histograms show that the two score distributions have closely similar means and standard deviations (28.97±10.93 for the pre-clinical practical exam and 29.00±9.08 for the MCQs). A paired t-test revealed no significant difference between the mean scores of the two exams (t=-0.069, df=491, p>0.05).

Figure 2. An overview of normal distribution for MCQ and preclinical practical scores

To compare the mean scores of the two exams by gender, the number of female students was reduced: 88 females were randomly selected from the total of 407 using SPSS, and their performance was compared with that of the 88 male students on the MCQ and pre-clinical practical exams. The means and standard deviations of the pre-clinical and MCQ scores for the subset were 27.61±9.13 and 27.96±10.83, respectively.
For the subset (88 females + 88 males = 176), a paired t-test was performed to verify that the subset accurately reflected the full sample (N=492). The results showed no statistically significant difference between the two exams for the subset (t=0.400, df=175, p>0.05).
The results showed that female students performed slightly better on MCQs than male students, whereas males performed slightly better on the preclinical exam than females. However, there was no statistically significant difference between the means of the preclinical practical scores and the means of the MCQs for either gender.
In summary, most second-year dental students scored in the poor band on the MCQ exam, and close to half did so on the preclinical practical exam. There was no significant difference between the two exams' mean scores, either for the entire cohort or for the gender-balanced subset, and no statistically significant difference between male and female students on either exam.
The associations between stress, confidence, and exam performance were assessed using Pearson's correlation coefficient (Table 2). No significant differences in pre-exam confidence or stress levels were observed between the MCQ and the pre-clinical practical exam.
Table 2. Correlations between stress levels, confidence levels, and performance in practical and MCQ exams
- | Stress levels | Confidence levels | Performance - Practical | Performance - MCQ
Stress levels | 1.00 | - | 0.34* (p=0.001) | -0.04 (p=0.80)
Confidence levels | - | 1.00 | 0.17 (p=0.08) | 0.41* (p<0.0001)
Performance - Practical | 0.34* (p=0.001) | 0.17 (p=0.08) | 1.00 | 0.15
Performance - MCQ | -0.04 (p=0.80) | 0.41* (p<0.0001) | 0.15 | 1.00
Values in the table are Pearson's correlation coefficients (r). Asterisks (*) indicate statistically significant correlations (p<0.05). Stress correlated positively with practical exam performance, and confidence correlated positively with MCQ performance.
A noteworthy positive correlation was identified between students' pre-exam stress levels and their performance in the pre-clinical practical exam (r=0.34, p=0.001). No such relationship was found between stress levels and performance in the MCQ exam (r=-0.04, p=0.80).
While confidence appeared to be related to performance in the pre-clinical practical exam, this correlation did not reach statistical significance (r=0.17, p=0.08). Performance on the pre-clinical practical exam showed no meaningful association with performance on the MCQ exam (r=0.15).
Furthermore, a strong positive correlation was observed between students' confidence in attempting the MCQ exam and their performance on the exam (r=0.41, p<0.0001). These findings suggest that confidence and stress levels may play a role in students' performance, particularly in pre-clinical practical exams. None of the variables showed significant correlations when the genders were compared.
Based on these results, the null hypothesis was partially rejected.

Discussion
The evaluation of student competency necessitates a variety of assessment methods, each with a unique focus, aligning with the broader assessment objectives (29). Our study employed both multiple-choice questions (MCQs) and practical exams, enabling a thorough appraisal of students' theoretical knowledge and practical skills, respectively. A singular assessment approach, often favored for convenience and time efficiency, may not adequately represent all educational objectives (30).
The use of MCQs, a prevalent tool for assessing professional doctoral students, especially in large groups (31), is efficient for assessing theoretical knowledge. However, it may fall short when assessing students' problem-solving abilities. The validity of MCQs is another area of concern, as they can potentially encourage surface learning, where students memorize facts rather than grasping underlying concepts (32-34). These concerns mirror our observations, further emphasizing the need for practical exams to evaluate students' applied knowledge and competencies effectively (35).
In our study, practical exams involved dental students preparing an artificial tooth, evaluated using the 'glance and grade' method. The average scores of two examiners were used to mitigate examiner variability, a known issue in practical assessments (36). Despite some debates surrounding its reliability, the 'glance and grade' method remains a popular evaluation strategy in dental education (37).
A salient finding of our study, in contrast to other research (38-40), was the absence of significant differences between MCQ and pre-clinical practical exam performance. Previous studies have highlighted a discordance between theoretical knowledge and practical skills, attributing this disconnect to a substantial time gap between lectures and practical sessions (40). These divergent findings underscore the need for further exploration and validation.
Our study also scrutinized gender differences in performance, a topic eliciting mixed findings in literature (41-43). While female students outperformed males in MCQs, the reverse held true for the preclinical practical exam. Nevertheless, these differences were not statistically significant, supporting the need for synchronizing theoretical learning with practical sessions to optimize student outcomes, irrespective of gender.
Another notable aspect of our study focused on the role of stress and confidence in exam performance. Our findings suggest that stress positively correlated with performance in the preclinical practical exam, with no such relationship for the MCQ exam. Conversely, higher confidence was linked to improved MCQ performance (44). Practical exams, necessitating the application of skills and knowledge, could be more vulnerable to stress, possibly affecting motor skills and decision-making abilities (45, 46). The time-bound nature of these exams may further exacerbate stress (47).
On the other hand, confidence levels seemed to influence performance in MCQ exams more, which mainly test theoretical knowledge (48). Hence, boosting self-confidence among students could enhance their academic performance, especially in knowledge-based assessments. Dental faculties need to account for these psychological factors when developing assessment strategies. Stress mitigation techniques and confidence-building measures through structured guidance and a supportive environment may boost performance in practical and MCQ exams, respectively (49).
The results partially support the null hypothesis, revealing no significant differences between MCQs and practical performances. The lack of statistically significant difference between male and female performance in both assessments further supports the fairness of these assessment methods. Nevertheless, the influences of stress and confidence levels on exam performance led us to partially reject the hypothesis. Recognizing these influences can enable dental faculties to better tailor their assessment strategies, thereby enhancing the quality of dental education.
In this study, several limitations need to be acknowledged. First, we did not consider the methods employed for delivering information and practice to students in lectures and laboratory sessions prior to the exams. Another limitation is the use of the glance and grade method for evaluating the prepared teeth. This assessment approach has been a subject of debate concerning its efficiency and reliability. Moreover, the preclinical sessions and lectures were often conducted by different instructors, which could have influenced students' learning experiences. Ideally, instructors should be able to adapt their teaching approach to accommodate the diverse preferences and learning styles of their students (50). However, there is no evidence that all qualified teachers can effectively modify their methods to ensure successful learning for all students. This limitation may have affected the generalizability of our findings and should be taken into account when interpreting the results.

Conclusion
This study has demonstrated that the performance of second-year dental students in MCQs and preclinical practical exams is not significantly different, and there is no statistically significant difference between male and female students' performances in both assessments. These findings suggest that the assessment methods are fair and unbiased. Moreover, the relationships between stress, confidence, and performance in both types of exams revealed that stress levels affected performance in preclinical practical exams, while confidence played a more significant role in the MCQ exam performance.
Dental faculties should take these findings into consideration when designing and implementing assessment methods to evaluate students' knowledge and clinical competence. By recognizing the impact of stress and confidence levels on exam performance, faculties can develop targeted support strategies to help students manage their stress and build confidence, ultimately leading to improved learning outcomes and better prepared dental professionals. Furthermore, this study highlights the importance of using a combination of assessment methods, such as MCQs and practical exams, to provide a comprehensive evaluation of students' knowledge and skills in the field of dentistry.

Ethical considerations
All procedures adhered to the ethical standards of research conduct, with full observance of the code of ethics.
Acknowledgments
We would like to express our gratitude to Ms. Jelena Novakovic, a psychologist, for her invaluable assistance in crafting and validating the questionnaires, as well as to Dr. Abdulghani Al Arabi, a clinical statistician, for his support in the statistical analysis of our study. We acknowledge their efforts and appreciate their expertise, which greatly contributed to the quality and reliability of the data collected and the interpretation of the results.
Disclosure
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.
Author contributions
All authors made significant contributions to the following aspects of the study: (1) idea development or design, data collection, data analysis, and interpretation, (2) manuscript preparation, critical review before submission, and editing of the revised version, and (3) final approval prior to submission to the journal.
Data availability statement
The data used in this study is available upon request from the corresponding author.




 
 
Article Type: Original Research | Subject: Medical Education
Received: 2023/04/02 | Accepted: 2023/08/17 | Published: 2024/04/16

References
1. Ehlinger C, Fernandez N, Strub M. Entrustable professional activities in dental education: a scoping review. British Dental Journal. 2023; 234(3): 171-6. [DOI]
2. Ma T, Ten Cate O. Entrustable professional activities: a model for job activity competency framework with microcredentials. The International Journal of Information and Learning Technology. 2023; 0108. [DOI]
3. Sanders AE, Lushington K. Effect of perceived stress on student performance in dental school. Journal of Dental Education. 2002; 66(1): 75-81. [DOI]
4. Kwon JH, Shuler CF, von Bergmann H. Professional identity formation: The key contributors and dental students’ concerns. Journal of Dental Education. 2022; 86(3): 288-97. [DOI]
5. Qutieshat A. Assessment of dental clinical simulation skills: Recommendations for implementation. Journal of Dental Research and Review. 2018; 5(4): 116. [DOI]
6. Kaggal Lakshmana Rao G, P Iskandar YH, Mokhtar N. Developing consensus in identifying challenges of undergraduate orthodontic education in Malaysian public universities using e‐Delphi. European Journal of Dental Education. 2020; 24(3): 590-600. [DOI]
7. Ryan MS, Holmboe ES, Chandra S. Competency-based medical education: considering its past, present, and a post–COVID-19 era. Academic Medicine. 2022; 97(3): S90. [DOI]
8. Bird JB, Olvet DM, Willey JM, Brenner J. Patients don’t come with multiple choice options: essay-based assessment in UME. Medical Education Online. 2019; 24(1): 1649959. [DOI]
9. Kowash M, Hussein I, Al Halabi M. Evaluating the quality of multiple choice question in paediatric dentistry postgraduate examinations. Sultan Qaboos University Medical Journal. 2019; 19(2): e135. [DOI]
10. Pugh D, De Champlain A, Gierl M, Lai H, Touchie C. Using cognitive models to develop quality multiple-choice questions. Medical Teacher. 2016; 38(8): 838-43. [DOI]
11. Joorabchi B. Objective structured clinical examination in a pediatric residency program. The American Journal of Diseases of Children. 1991; 145(7): 750-4. [DOI]
12. Mondal R, Sarkar S, Nandi M, Hazra A. Comparative analysis between objective structured clinical examination (OSCE) and conventional examination (CE) as a formative evaluation tool in pediatrics in semester examination for final MBBS students. Kathmandu University Medical Journal. 2012;10(1):53-6. [DOI]
13. Martin RD, Naziruddin Z. Systematic review of student anxiety and performance during objective structured clinical examinations. Currents in Pharmacy Teaching and Learning. 2020;12(12):1491-7. [DOI]
14. Tavakol M, Sandars J. Quantitative and qualitative methods in medical education research: AMEE Guide No 90: Part I. Medical Teacher. 2014; 36(9): 746-56. [DOI]
15. Sng TJH, Yong CW, Wong RCW. Cross sectional study on the competence and confidence of dental students and graduates in the management of medically compromised patients and acute medical emergencies. PLoS One. 2023; 18(2): e0281801. [DOI]
16. Qutieshat A. Active involvement in dental education and the way to meaningful knowledge: The viewpoint of a dental educator. Journal of Dental Research and Review. 2022; 9(2): 191. [DOI]
17. Stollar F, Cerutti B, Aujesky S, Nendaz M, Galetto-Lacour A. Evaluation of a best practice approach to assess undergraduate clinical skills in Paediatrics. BMC Medical Education. 2020; 20(1): 1-9. [DOI]
18. Cardoso J, Barbosa C, Fernandes S, Silva C, Pinho A. Reducing subjectivity in the evaluation of pre‐clinical dental preparations for fixed prosthodontics using the Kavo PrepAssistant®. European Journal of Dental Education. 2006; 10(3): 149-56. [DOI]
19. Park SE, Kim A, Kristiansen J, Karimbux NY. The influence of examiner type on dental students’ OSCE scores. Journal of Dental Education. 2015; 79(1): 89-94. [DOI]
20. Delany C, Miller K, El-Ansary D, Remedios L, Hosseini A, McLeod S. Replacing stressful challenges with positive coping strategies: a resilience program for clinical placement learning. Advances in Health Sciences Education. 2015; 20: 1303-24. [DOI]
21. Bedewy D, Gabriel A. Examining perceptions of academic stress and its sources among university students: The perception of academic stress scale. Health Psychology Open. 2015; 2(2): 2055102915596714. [DOI]
22. San Diego JP, Newton TJ, Sagoo AK, Aston T-A, Banerjee A, Quinn BF, et al. Learning clinical skills using haptic vs. phantom head dental chair simulators in removal of artificial caries: cluster-randomized trials with two cohorts’ cavity preparation. Dentistry Journal. 2022; 10(11): 198. [DOI]
23. Price EO. Effect of early outdoor experience on the activity of wild and semi‐domestic deermice. Developmental Psychobiology. 1969; 2(2): 60-7. [DOI]
24. Beagrie RA, Scialdone A, Schueler M, Kraemer DC, Chotalia M, Xie SQ, et al. Complex multi-enhancer contacts captured by genome architecture mapping. Nature. 2017; 543(7646): 519-24. [DOI]
25. Yang H, Wu X. Language learning motivation and its role in learner complaint production. Sustainability. 2022; 14(17): 10770. [DOI]
26. Burkhardt M, Foster J, Laws S, Baker L, Craft S, Gandy S, et al. Oestrogen replacement therapy may improve memory functioning in the absence of APOE ε4. Journal of Alzheimer’s Disease. 2004; 6(3): 221-8. [DOI]
27. Weksler S, Rozenstein O, Haish N, Moshelion M, Walach R, Ben-Dor E. A hyperspectral-physiological phenomics system: measuring diurnal transpiration rates and diurnal reflectance. Remote Sensing. 2020; 12(9): 1493. [DOI]
28. Kim J, Yang JS. How to improve reliability of cut‐off scores in dental competency exam: A comparison of rating methods in standard setting. European Journal of Dental Education. 2020; 24(4): 734-40. [DOI]
29. Elfaki OA, Alamri AA. Evaluation of student assessment practices in a medical college. International Journal of Research in Medical Sciences. 2017; 5(11): 5048-51. [DOI]
30. Svensäter G, Rohlin M. Assessment model blending formative and summative assessments using the SOLO taxonomy. European Journal of Dental Education. 2023; 27(1): 149-57. [DOI]
31. Haladyna TM, Downing SM, Rodriguez MC. A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education. 2002; 15(3): 309-33. [DOI]
32. Palmer EJ, Devitt PG. Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC Medical Education. 2007; 7(1): 1-7. [DOI]
33. Hift RJ. Should essays and other “open-ended”-type questions retain a place in written summative assessment in clinical medicine? BMC Medical Education. 2014; 14(1): 1-18. [DOI]
34. Epstein RM. Assessment in medical education. The New England Journal of Medicine. 2007; 356(4): 387-96. [DOI]
35. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher. 2011; 33(3): 206-14. [DOI]
36. Taylor CL, Grey N, Satterthwaite JD. Assessing the clinical skills of dental students: A review of the literature. Journal of Education and Learning. 2013; 2(1): 20-31. [DOI]
37. Manogue M, Kelly M, Bartakova Masaryk S, Brown G, Catalanotto F, Choo‐Soo T, et al. 2.1 Evolving methods of assessment. European Journal of Dental Education. 2002; 6: 53-66. [DOI]
38. Bazrafkan L, Shokrpour N, Torabi K. Comparison of the Assessment of Dental Students’ Laboratory Performance through MCQ and DOPS Methods. Journal of Medical Education. 2009; 13(1&2). [DOI]
39. Dennehy PC, Susarla SM, Karimbux NY. Relationship between dental students’ performance on standardized multiple‐choice examinations and OSCEs. Journal of Dental Education. 2008; 72(5): 585-92. [DOI]
40. Mhanni A, Elsawaay S. Comparison of Performance on MCQ and Preclinical Practical Assessment at the End of Two Different Fixed Prosthodontic Semesters. Khalij-Libya Journal of Dental and Medical Research. 2023: 15-22. [DOI]
41. Darmiani S, Ebrahimipour S. Comparison of Two Methods of Dental Students Assessment (MCQ and PMP) and their correlation with the total grade-point average. Journal of Dentomaxillofacial Radiology, Pathology and Surgery. 2022; 11(1): 14-8.
42. Dascalu CG, Enache AM, Mavru RB, Zegan G. Computer-based MCQ assessment for students in dental medicine–advantages and drawbacks. Procedia-Social and Behavioral Sciences. 2015; 187: 22-7. [DOI]
43. Pai AV. Factors influencing the occurrence and progress of sodium hypochlorite accident: A narrative and update review. Journal of Conservative Dentistry. 2023; 26(1): 3. [DOI]
44. Downing VR, Cooper KM, Cala JM, Gin LE, Brownell SE. Fear of negative evaluation and student anxiety in community college active-learning science courses. CBE—Life Sciences Education. 2020; 19(2): ar20. [DOI]
45. Sarid O, Anson O, Yaari A, Margalith M. Academic stress, immunological reaction, and academic performance among students of nursing and physiotherapy. Research in Nursing & Health. 2004; 27(5): 370-7. [DOI]
46. McConnell MM, Eva KW. The role of emotion in the learning and transfer of clinical skills and knowledge. Academic Medicine. 2012; 87(10): 1316-22. [DOI]
47. Zhang G, Fenderson BA, Schmidt RR, Veloski JJ. Equivalence of students' scores on timed and untimed anatomy practical examinations. Anatomical Sciences Education. 2013; 6(5): 281-5. [DOI]
48. Gottlieb M, Chan TM, Zaver F, Ellaway R. Confidence‐competence alignment and the role of self‐confidence in medical education: A conceptual review. Medical Education. 2022; 56(1): 37-47. [DOI]
49. De Vibe M, Solhaug I, Tyssen R, Friborg O, Rosenvinge JH, Sørlie T, et al. Mindfulness training for stress management: a randomised controlled study of medical and psychology students. BMC Medical Education. 2013; 13(1): 1-11. [DOI]
50. Qutieshat A, Aouididi R, Salem A, Kyranides MN, Arfaoui R, Atieh M, et al. Personality, learning styles and handedness: The use of the non‐dominant hand in pre‐clinical operative dentistry training. European Journal of Dental Education. 2021; 25(2): 397-404. [DOI]

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.