Friday April 10, 2026 3:00pm - 5:00pm GMT+07

Authors - Cheng Cheng, Chuanchen BI
Abstract - In recent years, AI-generated images have proliferated, making it increasingly difficult to distinguish fabricated images from real ones. This distinction is valuable for detecting misinformation and preserving digital trust. Deep learning models, particularly large Convolutional Neural Networks (CNNs), have demonstrated high accuracy on benchmark datasets such as CIFAKE, but their computational requirements often demand specialised hardware such as powerful Graphics Processing Units (GPUs), which limits practical deployment. This paper explores an alternative approach that prioritises efficiency and interpretability. The CIFAKE dataset is used, but a significantly lighter CNN architecture, ResNet18, is deployed that does not require high-end local GPU hardware. Furthermore, Gradient-weighted Class Activation Mapping (Grad-CAM) is applied not only for visualization but also to validate that the model learns meaningful visual features relevant to the classification task. This work highlights a practical method for detecting and interpreting AI-generated images.
Paper Presenter

Cheng Cheng

Thailand

Virtual Room E Bangkok, Thailand

