Authors - Cheng Cheng, Chuanchen BI

Abstract - In recent years, there has been a surge in AI-generated images, which poses a major challenge in distinguishing fabricated images from real ones. This distinction is valuable for detecting misinformation and preserving digital trust. Some deep learning models, particularly large Convolutional Neural Networks (CNNs), have demonstrated high accuracy on benchmark datasets like CIFAKE, but their computational requirements often include specialised hardware such as powerful Graphics Processing Units (GPUs), which ultimately limits practical deployment. This paper explores an alternative approach that focuses on efficiency and interpretability. The CIFAKE dataset is used, but a significantly lighter CNN architecture, ResNet18, is deployed, which does not require high-end local GPU hardware. Furthermore, the paper applies Gradient-weighted Class Activation Mapping (Grad-CAM) not just for visualization, but also to validate that the model learns meaningful visual features relevant to the classification task. This work highlights a practical, interpretable method for detecting AI-generated images.