Authors - Pierre Buys, Tevin Moodley

Abstract - This paper presents a real-time chessboard state detection system that leverages computer vision and deep learning to maintain a digital representation of a physical chess game automatically. Traditional digitization systems require either manual input or specialized equipment; the proposed system addresses this problem by capturing a chess game in real time with a smartphone camera. Detected piece positions are mapped to standard board coordinates and translated into Forsyth-Edwards Notation (FEN), enabling seamless integration with existing chess engines for analysis and move suggestions. The system first localizes the chessboard using Canny edge detection and a Hough transform. Multi-class piece detection is then addressed by developing a two-stage R-CNN model alongside a single-stage YOLO model, allowing a comparative evaluation of their respective methodologies and performance. The system achieves a localization precision of 98.77% per board coordinate, while the two-stage R-CNN and single-stage YOLO models achieve piece detection accuracies of 83.62% and 99.47%, respectively.
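The abstract's final pipeline stage, mapping detected piece positions on board coordinates to FEN, can be sketched as follows. This is a minimal illustration of the standard FEN piece-placement encoding, not the authors' implementation; the `detections_to_fen` helper and its square-to-piece dictionary input are assumptions made for the example.

```python
def detections_to_fen(detections):
    """Convert a {square: piece} mapping (e.g. {"e1": "K"}) into the
    piece-placement field of a FEN string.  Per the FEN standard,
    upper-case letters are white pieces, lower-case are black, and
    runs of empty squares are written as digit counts.

    NOTE: hypothetical helper for illustration; the paper's system
    would feed this from its object-detection output.
    """
    ranks = []
    for rank in range(8, 0, -1):          # FEN lists rank 8 first
        row, empties = "", 0
        for file in "abcdefgh":
            piece = detections.get(f"{file}{rank}")
            if piece is None:
                empties += 1              # accumulate empty squares
            else:
                if empties:
                    row += str(empties)   # flush the empty-square run
                    empties = 0
                row += piece
        if empties:
            row += str(empties)
        ranks.append(row)
    return "/".join(ranks)                # ranks separated by '/'

# Example: white king on e1, black king on e8
print(detections_to_fen({"e1": "K", "e8": "k"}))
# → 4k3/8/8/8/8/8/8/4K3
```

A full FEN record also carries side-to-move, castling rights, en-passant square, and move counters; only the placement field is derivable from a single board snapshot, which is why the sketch stops there.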