Evidence & Assessment
Why reflection needs context - and what happens when it doesn't have it
Asking students to reflect is easy. Capturing evidence of thinking as it actually develops is harder - and more valuable.
Most reflection activities in higher education are disconnected from the moment of learning. Students are asked to look back at an experience that has already receded - often days after the content was encountered, sometimes weeks. The reflective prompt arrives after the lecture, after the clinical simulation, after the case study discussion. What students produce in response may be thoughtful and genuine, but it is not anchored to the specific thinking that occurred when engagement was actually happening. That retrospective distance is the core problem.
Contrast this with what happens when evidence collection is embedded directly inside the Canvas or Brightspace page where the content lives. A student reading a case study, watching a clinical simulation recording, or working through a conceptual module encounters the response prompt at the moment of engagement. What they capture is not a reconstruction. It is the thinking that is actually happening - specific to that content, at that point in the course, before other influences have reshaped it. That difference matters not just pedagogically, but evidentially. The quality of what is captured depends entirely on when it is captured.
The versioning dimension
Embedded evidence collection makes possible a kind of evidence that retrospective reflection cannot produce: a visible development arc. Every time a student returns to a piece of embedded evidence and revises it, a new timestamped version is created. The difference between what a student wrote in Week 2 and what they write in Week 9, in response to the same prompt, is the learning made visible. A single retrospective reflection submitted at the deadline, however well designed, shows only one point in time. The progressive trail - captured across weeks, anchored to specific content - shows a journey.
The scale dimension
At 500 students, retrospective reflection becomes unmanageable. Discussion boards fill with identical responses. Submission-based approaches give no insight until marking begins. Educators have no signal about where understanding is developing or where it is stalling until the deadline arrives - at which point there is little that can be done. Embedded evidence collection, by contrast, gives educators real-time visibility into which students are engaging and what their thinking looks like across the weeks that matter. That visibility allows intervention before the assessment event, not after it.
The integrity dimension
There is also an assessment integrity dimension worth addressing honestly. Embedded, timestamped, contextual evidence is structurally different from a disconnected piece of reflective writing - not because it detects anything, but because it documents authentic engagement by design. A progressive record across multiple weeks of a specific course is extremely difficult to fabricate. A single retrospective reflection submitted at the deadline is not. The difference is not in the detection mechanism. It is in the architecture of evidence collection.
Making learning visible is not only a philosophical position. It is a structural choice about where evidence is collected and when. Reflection is genuinely valuable - the research on this is clear. But the value of reflection depends on its proximity to the learning it is meant to document. Move the moment of capture closer to the moment of engagement, and what you collect changes in kind, not just in quality.
Sean Duffy · Co-founder & CEO
See what becomes visible.
See how embedded evidence collection works inside Canvas and Brightspace. Thirty minutes with Sean, tailored to your courses.
