Uncertainty is a natural and inevitable part of the medical field (1). In the clinical setting, doctors frequently face problems that admit several possible interpretations (2). Recently, medical schools, educators, and students have increasingly recognized the need to acknowledge ambiguity and uncertainty within educational programs (2).
According to the UK's General Medical Council (GMC), medical graduates must possess the professional competency of accepting and managing uncertainty (3). In a similar vein, the Accreditation Council for Graduate Medical Education (ACGME) in the USA identifies the capacity to tolerate uncertainty as one of the most crucial skills for physician candidates (4).
As medical students progress in their training, they will encounter numerous clinical uncertainties in all aspects of medical practice (5). Therefore, assessments should authentically address uncertainty to help students prepare for real-world clinical challenges (6).
However, one question deserves attention: are the current assessment tools, particularly written assessment tools, adequate to support uncertainty tolerance among medical students?
In undergraduate medical education, schools still rely widely on single-best-answer multiple-choice questions (A-type MCQs) (7). Despite their popularity, this examination format may inadvertently suggest to students that there is always a single correct answer, which may not align with real clinical experience (7).
Sam et al. recently introduced Clinical Prioritization Questions (CPQs), an innovative assessment instrument (8). In CPQs, students must rank potential diagnoses from most likely to least likely (8). Their findings indicate that this question format supports the development of students' clinical reasoning skills and contributes significantly to cultivating their competence in managing clinical uncertainty (8).
One challenge educators face is the distinct marking system of CPQs: unlike A-type MCQs, learners can receive a range of marks. Faculty development on the nature and marking of CPQs can therefore play an important role in addressing this challenge. Students' lack of familiarity with CPQs is another challenge, so it is suggested that educational sessions on the CPQ format be held with students before CPQs are used.
Given the significance of managing uncertainty and ambiguity in students' future medical careers, it is imperative to incorporate specific strategies into the curriculum that equip them to deal with clinical uncertainty. CPQs are recommended as a valuable formative assessment tool because of their capability to meet this demand, and Iranian medical schools may find implementing CPQs more practical and useful than other clinical reasoning assessment tools. However, before any integration into the curriculum, pilot studies examining the utility of CPQs (e.g., validity, reliability, educational impact, acceptability, and feasibility) in Iranian settings are needed.
Article Type: Editorial | Subject: Medical Education | Received: 2024/07/23 | Accepted: 2024/08/10