Authors - Maykin Warasart, Veerasith Wongkarn, Phonesavanh Nammakone, Duangtavanh Thatsaphone

Abstract - Manual correction of written examination scripts remains the default practice in many institutions, but it is slow, tiring for evaluators, and not always consistent, especially when large numbers of papers must be graded in a short time. In this work we examine how recent advances in optical character recognition (OCR), machine learning (ML), and natural language processing (NLP) can be combined to support automatic evaluation of both objective and descriptive answers. We study a two-stage system: first, a handwriting recognizer based on convolutional and recurrent neural networks (CRNN) reads handwritten responses from scanned answer sheets; second, the recognized text is scored using semantic and syntactic similarity measures driven by transformer-based language models. By training the recognizer on a mixture of public handwriting corpora and locally collected scripts, and by combining keyword features with sentence-level embeddings, the system closely approximates faculty grading patterns. We also account for the conditions under which real examinations are administered, including variation in handwriting styles, background noise in scans, the arrangement of answers on the page, and subject-specific terminology, and we address each of these factors explicitly in our approach. The system is not intended to replace teachers; rather, it aims to ease their grading workload while offering fairness and consistency across student results.
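
To make the second (scoring) stage concrete, the sketch below shows one plausible way to blend sentence-level embedding similarity with keyword coverage, as the abstract describes. It assumes the sentence-transformers library; the model name, the keyword-matching scheme, and the 0.7/0.3 weight blend are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the answer-scoring stage: semantic similarity from
# transformer sentence embeddings, blended with rubric keyword coverage.
# Model choice and weights are assumptions for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def keyword_overlap(answer: str, keywords: set[str]) -> float:
    """Fraction of rubric keywords that appear in the recognized answer."""
    tokens = set(answer.lower().split())
    return len(tokens & keywords) / len(keywords) if keywords else 0.0

def semantic_similarity(answer: str, reference: str) -> float:
    """Cosine similarity between sentence-level embeddings."""
    a, r = model.encode([answer, reference])
    return float(np.dot(a, r) / (np.linalg.norm(a) * np.linalg.norm(r)))

def score(answer: str, reference: str, keywords: set[str],
          w_sem: float = 0.7, w_kw: float = 0.3) -> float:
    """Blend embedding similarity with keyword coverage (weights assumed)."""
    return (w_sem * semantic_similarity(answer, reference)
            + w_kw * keyword_overlap(answer, keywords))

# Example usage on a hypothetical short-answer question:
print(score(
    answer="Photosynthesis converts light energy into chemical energy",
    reference="Plants use photosynthesis to turn light into chemical energy",
    keywords={"photosynthesis", "light", "energy"},
))
```

In practice such a blended score would be calibrated against faculty-assigned marks; the fixed weights here simply stand in for whatever combination the trained system learns.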