Measuring What Matters: Creating Opportunities with CBVE
Competency-Based Veterinary Education (CBVE) offers a clear framework for preparing graduates who are not only knowledgeable but clinically ready. With defined competencies and Entrustable Professional Activities (EPAs), CBVE moves veterinary training toward outcomes that matter most in practice.
For faculty, the real opportunity lies not only in teaching toward those outcomes, but in finding consistent ways to measure them.
The Challenge of Measuring CBVE
While CBVE provides structure, assessment in day-to-day teaching can feel uneven. Different specialties may use different rubrics, expectations can vary across courses, and harder-to-quantify competencies (like communication or clinical reasoning) often lack shared benchmarks. The result is that even strong programs may struggle with consistency.
Rather than seeing this as a hurdle, many schools are beginning to treat it as an opportunity for standardization.
Opportunities for Standardization
Best practices in CBVE suggest that outcome measurement is most effective when it is:
Standardized – common rubrics and shared language across courses and specialties.
Frequent and formative – ongoing checkpoints that show growth, not just final performance.
Integrated – assessment tools that connect across systems, reducing redundancy.
Actionable – results that give faculty and students clear next steps.
When approached this way, measuring outcomes creates consistency across the curriculum and helps ensure students encounter the same expectations, no matter which specialty rotation or course they are in.
Moving Toward Methodology
What’s often missing is a methodology or toolset that makes it easier for faculty to align assessments with CBVE domains and EPAs, track progress, and share results program-wide. Developing these systems from scratch can be overwhelming—but when schools adopt a shared approach, the benefits multiply:
Faculty save time by working from common templates.
Students gain clarity on expectations and feedback.
Programs generate data that supports both curriculum improvement and accreditation reporting.
Here’s a structured approach that any program can use to bring more clarity and comparability to CBVE assessments.
1. Core Principles
Effective outcomes measurement starts with a few guiding principles that can shape every assessment effort:
Consistency – Shared rubrics and language should apply across specialties and courses.
Transparency – Students and faculty should both see how competencies map to expectations.
Actionability – Results should highlight what comes next: remediation, enrichment, or advancement.
Accreditation Alignment – Assessments should tie directly to CBVE domains and EPAs to simplify reporting.
These principles provide the backbone for any outcomes methodology.
2. Rubric Templates
Rubrics make outcomes measurable and comparable. Programs can build or adopt rubrics that:
Align with CBVE competencies like reasoning, communication, and professionalism.
Translate EPAs into observable behaviors, such as performing a physical exam or interpreting diagnostic images.
Use flexible scoring scales (e.g., novice → competent → entrustable) so faculty across specialties assess with a common standard, even in different contexts. A sketch of such a template follows this list.
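One way to keep such templates consistent is to treat each rubric as structured data rather than a standalone document. Here is a minimal sketch in Python, assuming a simple novice → competent → entrustable scale; the class names, sample EPA, and descriptor wording are illustrative assumptions, not official CBVE language.

```python
from dataclasses import dataclass

# Scale from the article: novice -> competent -> entrustable.
ENTRUSTMENT_LEVELS = ["novice", "competent", "entrustable"]

@dataclass
class RubricCriterion:
    competency: str    # CBVE competency, e.g. "clinical reasoning"
    epa: str           # linked EPA, phrased as an observable activity
    behavior: str      # the observable behavior faculty actually score
    descriptors: dict  # scale level -> anchor text describing performance

@dataclass
class Rubric:
    title: str
    criteria: list

# A shared "diagnostic reasoning" criterion, reusable across specialties.
reasoning = RubricCriterion(
    competency="clinical reasoning",
    epa="develop a diagnostic plan",
    behavior="prioritizes differentials from history and exam findings",
    descriptors={
        "novice": "lists differentials without prioritizing them",
        "competent": "ranks differentials with partial justification",
        "entrustable": "ranks and justifies differentials independently",
    },
)

shared_rubric = Rubric(title="Diagnostic Reasoning", criteria=[reasoning])
```

Because the descriptors live in data rather than in each course's documents, the same criterion can be dropped into radiology, neurology, or cardiology rubrics without rewording.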
3. Formative + Summative Assessments
Capturing outcomes requires a balance of frequent feedback and milestone evaluations:
Formative tools—short quizzes, reflection prompts, case discussions—help track progress in real time.
Summative tools—structured evaluations, OSCEs, capstone cases—anchor progression and readiness.
Interleaved case sets that mix systems or modalities better mirror real-world reasoning and test transfer across domains; one way to build such a set is sketched below.
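To make the interleaving idea concrete, here is a small sketch that orders a tagged case bank round-robin across systems, so consecutive cases come from different domains. The case IDs and system tags are hypothetical.

```python
from collections import defaultdict
from itertools import zip_longest

# Hypothetical formative case bank; each case is tagged with the system it exercises.
case_bank = [
    {"id": "C01", "system": "cardiology"},
    {"id": "C02", "system": "cardiology"},
    {"id": "N01", "system": "neurology"},
    {"id": "N02", "system": "neurology"},
    {"id": "R01", "system": "radiology"},
    {"id": "R02", "system": "radiology"},
]

def interleave_by_system(cases):
    """Order cases round-robin across systems so consecutive cases differ."""
    by_system = defaultdict(list)
    for case in cases:
        by_system[case["system"]].append(case)
    # Take one case from each system per round until all lists are exhausted.
    rounds = zip_longest(*by_system.values())
    return [case for rnd in rounds for case in rnd if case is not None]

print([c["id"] for c in interleave_by_system(case_bank)])
# -> ['C01', 'N01', 'R01', 'C02', 'N02', 'R02']
```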
4. Tracking and Dashboards
Collecting assessments isn't enough; the results need to be organized into usable insights:
Faculty View: See how students are performing within a single course.
Program View: Aggregate results across courses to highlight trends, strengths, and common gaps.
Student View: Provide learners with progress dashboards that clarify growth and next steps, encouraging self-directed improvement. All three views can come from the same underlying records, as sketched below.
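A few lines of pandas can show how all three views might fall out of one shared record format. The schema here (one row per scored rubric criterion, with a numeric entrustment level) is an assumption for illustration, not a standard.

```python
import pandas as pd

# One row per scored rubric criterion; "level" is a numeric entrustment score
# (1 = novice, 2 = competent, 3 = entrustable). The schema is illustrative.
records = pd.DataFrame([
    {"student": "S1", "course": "radiology", "competency": "reasoning",     "level": 2},
    {"student": "S1", "course": "neurology", "competency": "reasoning",     "level": 3},
    {"student": "S2", "course": "radiology", "competency": "communication", "level": 1},
    {"student": "S2", "course": "neurology", "competency": "reasoning",     "level": 2},
])

# Faculty view: how students are performing within a single course.
faculty_view = records[records["course"] == "radiology"].groupby("competency")["level"].mean()

# Program view: aggregate across courses to highlight trends and gaps.
program_view = records.groupby(["course", "competency"])["level"].agg(["mean", "count"])

# Student view: each learner's best demonstrated level per competency.
student_view = records.groupby(["student", "competency"])["level"].max()
```

The design choice that matters is the shared record format: once every course writes rows with the same columns, each view is just a different aggregation.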
5. Implementation Tools
Consistency improves when programs create shared infrastructure. Consider building:
An assessment library with rubrics, quizzes, and case scenarios tagged to CBVE domains and EPAs (one tagging approach is sketched after this list).
Faculty workshops to practice applying rubrics and interpreting results the same way.
Reporting templates that make it easier to demonstrate CBVE alignment during accreditation reviews.
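As a sketch of how the library might hang together, the snippet below tags each item with CBVE domains and EPAs and filters on those tags. The item IDs, domain labels, and EPA codes are placeholders, not an official taxonomy.

```python
# Hypothetical assessment library: each item carries CBVE domain and EPA tags.
library = [
    {"id": "QUIZ-14", "type": "quiz", "domains": {"clinical reasoning"},
     "epas": {"EPA-2"}},
    {"id": "CASE-07", "type": "case", "domains": {"communication"},
     "epas": {"EPA-5"}},
    {"id": "OSCE-03", "type": "osce",
     "domains": {"clinical reasoning", "communication"},
     "epas": {"EPA-2", "EPA-5"}},
]

def find_items(items, domain=None, epa=None):
    """Return library items matching a CBVE domain and/or EPA tag."""
    return [
        item for item in items
        if (domain is None or domain in item["domains"])
        and (epa is None or epa in item["epas"])
    ]

print([i["id"] for i in find_items(library, domain="clinical reasoning")])
# -> ['QUIZ-14', 'OSCE-03']
```

The same tags that drive search here can drive the accreditation reports: every item already knows which domain and EPA it evidences.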
6. Putting It Into Practice
Imagine a shared rubric for “diagnostic reasoning” being used in radiology, neurology, and cardiology. Students encounter the same expectations in each course. Faculty can then compare outcomes across specialties, and program leaders can use aggregated results to identify trends—such as whether students consistently struggle with reasoning under time pressure.
Those results then flow into dashboards that support student feedback, curriculum adjustments, and accreditation reporting.
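As a toy version of that scenario, the sketch below compares shared-rubric scores across the three courses and checks one cross-course pattern: whether timed cases score lower. The data and the timed flag are hypothetical.

```python
import pandas as pd

# Hypothetical "diagnostic reasoning" scores from three specialty courses,
# with a flag marking whether the case ran under time pressure.
scores = pd.DataFrame([
    {"course": "radiology",  "timed": True,  "level": 1},
    {"course": "radiology",  "timed": False, "level": 3},
    {"course": "neurology",  "timed": True,  "level": 2},
    {"course": "neurology",  "timed": False, "level": 3},
    {"course": "cardiology", "timed": True,  "level": 1},
    {"course": "cardiology", "timed": False, "level": 2},
])

# Compare specialties on the shared rubric...
print(scores.groupby("course")["level"].mean())

# ...then check the program-wide pattern: do timed cases score lower?
print(scores.groupby("timed")["level"].mean())
```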
7. The Value of a Shared Methodology
Approaching CBVE measurement this way benefits everyone:
Faculty: Less guesswork, more consistency in evaluation.
Students: Clearer expectations and progress tracking.
Programs: Stronger evidence of CBVE alignment, supporting both learning improvement and accreditation.
Looking Ahead
This kind of outcomes framework can be built internally by any veterinary college. What matters most is intentional design—rubrics that align with CBVE, assessments that balance formative and summative checkpoints, and data systems that make results actionable.
For schools seeking a faster path, V.E.T.S. offers templates, dashboards, and faculty support to help put these ideas into practice.
Let’s reimagine how outcomes are measured—together.