The Science Behind Assessments: How (and Why) We Build Them into Our Modules
In education, assessments aren’t treated as just “tests”; they’re carefully engineered instruments grounded in measurement theory, psychology, and instructional design. In our veterinary modules, every assessment is designed to do more than check right versus wrong: it guides learning, provokes reflection, and aligns with real-world clinical thinking. The goal is to help you support students’ clinical readiness.
1. Foundations: Validity, Reliability, and Evidence-Centered Design
At the heart of good assessment are validity (does it measure what we intend?) and reliability (is it consistent and replicable?). In educational measurement, frameworks such as classical test theory and Item Response Theory (IRT) help us understand how individual items perform across a learner population.
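To make this concrete: in the widely used two-parameter logistic (2PL) IRT model, the probability that a learner with ability \(\theta\) answers item \(i\) correctly is

\[ P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}} \]

where \(b_i\) is the item’s difficulty and \(a_i\) its discrimination, i.e., how sharply the item separates stronger from weaker learners. (This is the standard 2PL formulation, shown for illustration; it is not a claim about our internal psychometric tooling.)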
Evidence-Centered Design (ECD) provides a meta-framework: we start by mapping the domain (skills, concepts, competencies), build tasks that elicit observable, measurable evidence, define how responses will be evaluated and scored, and then implement the assessments. This ensures our questions aren’t random trivia, but meaningful probes into how students think. A well-designed question elicits evidence of reasoning, not just a correct label.
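To make the ECD chain tangible, here is a minimal sketch in code; the names and fields are hypothetical illustrations, not our actual authoring schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Evidence-Centered Design layers.
# All names and fields are illustrative, not a production schema.

@dataclass
class Competency:            # domain model: what we want to measure
    name: str
    description: str

@dataclass
class EvidenceRule:          # evidence model: what counts as proof, and how it is scored
    competency: Competency
    observable: str          # the behavior we can actually see in a response
    scoring: str             # how that behavior earns credit

@dataclass
class Task:                  # task model: the situation that elicits the evidence
    prompt: str
    evidence: list[EvidenceRule] = field(default_factory=list)

# An item is written only after we know which evidence it must elicit.
dx = Competency("diagnostic reasoning",
                "prioritize differentials from signalment and history")
rule = EvidenceRule(dx,
                    observable="selects the most likely differential and justifies it",
                    scoring="credit requires both the choice and the rationale")
item = Task(prompt="A 7-year-old Labrador presents with acute lethargy and pale mucous membranes...",
            evidence=[rule])
```

The structure encodes the dependency direction that matters: tasks exist to elicit evidence, and evidence exists to measure a competency, never the other way around.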
2. Psychology & Learning: Formative vs. Summative, and Learner Reflection
Psychology and learning science remind us that assessments are most powerful when they’re integrated into instruction, not dangling at the end. Formative assessment (assessment for learning) provides feedback during the process, helping learners adjust mid-course. Summative assessment (assessment of learning) certifies mastery after instruction.
But beyond that, good assessment design must respect cognitive load, avoid trickiness, and scaffold learners’ metacognition. In other words, assessments should encourage learners to think about their own thinking: not just what they concluded, but the reasoning that led them there.
We aim for assessments that are less about right versus wrong and more about self-reflection. As Dr. Manley puts it:
“We design our assessments so learners pause and evaluate their own reasoning, rather than scrambling to pick a correct answer.”
This shifts the mindset: the goal is growth, not judgment.
3. Why We Select Specific Questions (and Reject Others)
When choosing or writing an item, we ask:
Does it map to a learning objective or competency?
What cognitive level is required?
Do the distractors (wrong answers) probe common misunderstandings? (See the sketch just after this list.)
How will the response be scored?
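On the distractor question in particular, one concrete check is to compare who picks each wrong answer. Below is a minimal sketch of classical distractor analysis with invented response data; it illustrates the idea, not our exact pipeline:

```python
from collections import Counter

# Invented responses to one multiple-choice item: (learner total score, option chosen).
responses = [
    (92, "A"), (88, "A"), (85, "C"), (80, "A"),
    (61, "B"), (58, "C"), (55, "C"), (50, "D"),
]
KEY = "A"  # keyed correct answer

# Difficulty index: proportion of learners answering correctly.
p_value = sum(1 for _, opt in responses if opt == KEY) / len(responses)

# Distractor selection rates, split by upper/lower half on total score.
ranked = sorted(responses, key=lambda r: r[0], reverse=True)
half = len(ranked) // 2
upper = Counter(opt for _, opt in ranked[:half])
lower = Counter(opt for _, opt in ranked[half:])

print(f"difficulty (p) = {p_value:.2f}")
for option in "ABCD":
    print(f"option {option}: upper={upper.get(option, 0)}, lower={lower.get(option, 0)}")
# A healthy distractor is chosen mostly by the lower-scoring group; one chosen
# mostly by high scorers is probably tricky or miskeyed and gets rewritten.
```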
We align with the AAVMC’s Competency-Based Veterinary Education (CBVE) framework and with the North American Veterinary Licensing Examination (NAVLE). By integrating both, we help ensure learners are supported not only in achieving core competencies, but also in preparing for the high-stakes exam that marks the transition to becoming a licensed DVM.
We design assessments throughout the modules to act as stepping stones, reinforcing the knowledge and reasoning skills that matter and supporting students at every step of their journey to becoming veterinarians.
4. Instructional Design: Integration & Feedback Loops
Our instructional design uses backward design: define module goals → design assessments that reflect those goals → build instruction to prepare learners for those assessments.
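One way to keep that chain honest is an explicit alignment map from goals to assessments to instruction, so every goal is evidenced and every item traces back to a goal. A toy sketch with hypothetical module content:

```python
# Hypothetical alignment map for backward design; all content is invented.
alignment = {
    "interpret a CBC in an anemic patient": {
        "assessments": ["case vignette Q3", "lab-interpretation checkpoint"],
        "instruction": ["hematology mini-lecture", "worked CBC examples"],
    },
    "prioritize differentials from signalment and history": {
        "assessments": ["branching case simulation"],
        "instruction": ["diagnostic-reasoning walkthrough"],
    },
}

# Any goal with no assessment (or no instruction) is a design gap.
for goal, links in alignment.items():
    assert links["assessments"], f"no assessment evidences: {goal}"
    assert links["instruction"], f"no instruction prepares for: {goal}"
```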
Feedback loops are essential. After each assessment, we examine item performance, gather learner reflections, and iterate. Learners also receive structured feedback explaining correct reasoning, not just the right answer.
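“Item performance” here means classical statistics such as difficulty and discrimination. As a minimal sketch with invented data, the point-biserial correlation between item correctness and total score flags items that fail to separate stronger from weaker learners:

```python
import statistics

# Invented data: 0/1 correctness on one item, paired with each learner's total score.
item_correct = [1, 1, 0, 1, 0, 0, 1, 0]
total_score  = [92, 88, 85, 80, 61, 58, 55, 50]

def point_biserial(binary, scores):
    """Pearson correlation between a 0/1 item score and the total score."""
    mean_b = statistics.mean(binary)
    mean_s = statistics.mean(scores)
    n = len(binary)
    cov = sum((b - mean_b) * (s - mean_s) for b, s in zip(binary, scores)) / (n - 1)
    return cov / (statistics.stdev(binary) * statistics.stdev(scores))

r_pb = point_biserial(item_correct, total_score)
print(f"discrimination (point-biserial) = {r_pb:.2f}")
# Low or negative discrimination means strong learners miss the item as often
# as weak ones; such items get reviewed, rewritten, or retired.
```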
This coherence helps learners see assessment as part of learning, not a separate hurdle.
5. Why This Matters in Veterinary Education
In veterinary medicine, diagnostic reasoning, safety, and clinical judgment are critical. Assessments must do more than test rote knowledge — they must simulate decision-making, reveal thinking, and build confidence.
By designing assessments that prioritize reflection, alignment, and measurement science, we help learners develop not just correct answers, but clinical judgment, self-awareness, and the ability to grow continuously.