Marking Is Pattern Recognition
Summary
- Fast Feedback originally required students to type answers, which worked well in Computing but limited use without devices.
- We now support two “no computer needed” routes: printable MCQ answer sheets and handwritten answer uploads.
- Handwritten uploads use OCR + a securely hosted GPT clean-up step, then flow into the same marking pipeline as typed answers.
- Teacher verification stays central to maintain accuracy, fairness, and trust—especially with messy handwriting.
- Uploads are secured through restricted formats, server-side processing, and strict access controls.
When I first launched Fast Feedback, the only way to get written answers and multiple choice quizzes marked was for students to type their answers in digitally. This worked particularly well for Computing teachers, but it had obvious limitations for teachers wanting to use the platform without reliable access to computers.
We’ve now got two pipelines for students to access quizzes without needing a computer. One is for multiple choice quizzes: a teacher downloads printable answer sheets, which students complete and then upload back into the platform. The other is for written answers: a photo of the handwritten response is uploaded and fed into our marking pipeline. That handwritten route is what I’m going to talk about in this post.
The handwritten answer upload pipeline is powered by OCR (Optical Character Recognition). It has extended our reach significantly and helped Fast Feedback become an even more scalable product. As soon as an answer is uploaded, it enters our normal marking pipeline, which grades it as correct, partially correct or incorrect, alongside WWW/EBIs (What Went Well / Even Better If), reteach reading and targeted gap tasks.
When a teacher uploads handwritten work, our OCR model first converts the image into text. That text is then refined using a GPT model that we securely host. Before any marking takes place, the teacher reviews what the system has read. If a pupil’s handwriting is difficult to decipher, edits can be made. This keeps teachers firmly in control and ensures there’s always human oversight where it matters.
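The intake flow above can be sketched in a few lines. This is an illustrative reduction with hypothetical names (`gpt_clean_up`, `intake`, `teacher_review` are stand-ins, not Fast Feedback’s real API), and the clean-up stub only normalises whitespace where the real hosted GPT step repairs OCR artefacts in context:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transcription:
    raw_ocr: str             # text exactly as the OCR model read it
    cleaned: str             # after the GPT clean-up pass
    confirmed: bool = False  # set once a teacher has verified it

def gpt_clean_up(raw: str) -> str:
    # Stand-in for the securely hosted GPT step: here we only
    # normalise whitespace; the real pass fixes misread words
    # using the surrounding context.
    return " ".join(raw.split())

def intake(raw_ocr_text: str) -> Transcription:
    # OCR output flows straight into the clean-up step.
    return Transcription(raw_ocr_text, gpt_clean_up(raw_ocr_text))

def teacher_review(t: Transcription, edited: Optional[str] = None) -> Transcription:
    # The teacher sees the cleaned text and may correct it before
    # any marking happens; nothing proceeds unconfirmed.
    final = edited if edited is not None else t.cleaned
    return Transcription(t.raw_ocr, final, confirmed=True)
```

The important design choice is that `confirmed` acts as a gate: only a teacher-reviewed transcription is released into marking, which is what keeps human oversight in the loop.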
Once confirmed, the response enters the exact same marking pipeline as a typed answer. It is compared against a model response, analysed against success criteria and processed through the same feedback engine and analytics system.
I’ve always thought that marking extended responses is a lot like pattern matching. A teacher is constantly comparing each student answer to a model in their head — checking for the presence of key ideas, spotting omissions, and identifying misconceptions. Doing that manually for a class set takes a huge amount of time. With Fast Feedback, that comparison layer can be done rapidly, which dramatically reduces workload while keeping the teacher’s judgement at the centre of the process.
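That comparison layer can be sketched as a simple criteria check. This is a deliberately crude sketch, assuming each success criterion maps to a list of acceptable phrasings; the real feedback engine compares against a full model response rather than keyword lists:

```python
def mark(answer: str, criteria: dict) -> tuple:
    """Grade an answer against success criteria (illustrative only).

    criteria maps each key idea to phrasings that count as evidence
    for it. Returns (grade, missed_ideas) so the missed ideas can
    feed into EBIs and targeted gap tasks.
    """
    text = answer.lower()
    hit = [idea for idea, phrases in criteria.items()
           if any(p in text for p in phrases)]
    missed = [idea for idea in criteria if idea not in hit]
    if not hit:
        grade = "incorrect"
    elif missed:
        grade = "partially correct"
    else:
        grade = "correct"
    return grade, missed

criteria = {
    "evaporation": ["evaporat"],
    "condensation": ["condens"],
}
mark("Water evaporates from the sea.", criteria)
# -> ("partially correct", ["condensation"])
```

Even this toy version shows the shape of the task: presence of key ideas drives the grade, and omissions drive the improvement feedback.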
There are trade-offs. OCR is highly accurate with clear handwriting but can introduce minor inconsistencies where writing is less legible. That is precisely why teacher verification sits at the centre of the workflow. In practice, small transcription variations rarely alter the meaning of an answer, but review ensures fairness and confidence.
Security also demanded careful design. File uploads introduce potential risks, so submissions are restricted in format, processed securely server-side and tied only to authorised classes. Handwritten responses do not bypass the main system; they move through the same protected processing pipeline as typed answers. There are no parallel routes and no unsecured shortcuts.
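As a sketch of the kind of server-side gate involved (the allowed formats, size limit, and class check below are assumptions standing in for the real rules, not our exact configuration):

```python
# Illustrative server-side validation for handwritten uploads.
# The accepted types and 10 MB cap are assumed values.
ALLOWED_TYPES = {"image/jpeg", "image/png"}
MAX_BYTES = 10 * 1024 * 1024

def validate_upload(content_type: str, size: int,
                    authorised_class_ids: set, target_class_id: str):
    # Reject anything outside the restricted format list.
    if content_type not in ALLOWED_TYPES:
        return False, "unsupported file format"
    # Reject oversized files before any processing happens.
    if size > MAX_BYTES:
        return False, "file too large"
    # Every upload must be tied to a class the user is authorised for.
    if target_class_id not in authorised_class_ids:
        return False, "not authorised for this class"
    return True, "accepted"
```

The point of checks like these is that they run before the image ever reaches OCR, so handwritten responses enter the same protected pipeline as typed answers rather than a side door.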
If AI marking only functions in ideal digital settings, it will only reduce workload in a narrow range of lessons. By enabling handwritten work to flow into a structured marking system, we made it possible for schools — regardless of device access or infrastructure — to benefit from consistent, scalable feedback.