Friday April 10, 2026 9:30am - 11:30am GMT+07

Authors - V. Abarna, R. Shyamala
Abstract - The rapid advancement of artificial intelligence has significantly enhanced deepfake generation techniques, posing serious challenges to digital media authenticity, cybersecurity, and misinformation control. Conventional detection approaches often rely on single-modality analysis, limiting their effectiveness against sophisticated synthetic media. This paper proposes a multimodal deepfake detection framework that integrates visual, audio, textual, and behavioral biometric information using a hybrid deep learning architecture combined with a variational quantum learning approach. Deep neural models are employed for feature extraction across modalities, including convolutional networks for visual artifacts, transformer-based models for speech and text analysis, and biometric behavioral assessment such as eye movement, lip synchronization, and motion consistency. A hierarchical fusion mechanism aggregates modality-specific representations, while a variational quantum classifier enhances classification robustness through hybrid quantum–classical learning. An explainability module provides insight into modality contributions and prediction confidence, supported by a web-based dashboard for real-time interaction. The proposed framework aims to improve detection reliability, interpretability, and practical deployment in applications such as digital forensics, social media verification, and cybersecurity. This work presents a conceptual architecture and implementation roadmap to support future research in multimodal deepfake detection.
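The abstract describes two core mechanisms: a hierarchical fusion step that aggregates modality-specific representations, and a variational quantum classifier for the final decision. A minimal, purely illustrative sketch of both ideas is given below. All names, dimensions, and the gate scores are hypothetical, and the "quantum" layer is a classical NumPy simulation of a single-qubit variational circuit, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax for the modality attention weights.
    e = np.exp(z - z.max())
    return e / e.sum()

def hierarchical_fusion(embeddings, gate_scores):
    """Attention-weighted sum of per-modality embeddings (all the same dim).

    gate_scores would come from a learned gating network in a real system;
    here they are fixed, illustrative values.
    """
    weights = softmax(gate_scores)    # one scalar weight per modality
    stacked = np.stack(embeddings)    # shape: (n_modalities, dim)
    return weights @ stacked          # fused vector, shape: (dim,)

def ry(theta):
    """Single-qubit RY rotation matrix (classical simulation)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def variational_classifier(fused, thetas):
    """Toy one-qubit 'variational circuit': angle-encode each fused feature,
    interleave trainable RY rotations, and read out <Z> in [-1, 1]."""
    state = np.array([1.0, 0.0])      # start in |0>
    for x, t in zip(fused, thetas):
        state = ry(t) @ ry(x) @ state # data encoding + trainable rotation
    pauli_z = np.array([[1, 0], [0, -1]])
    return float(state @ pauli_z @ state)  # expectation value of Z

# Hypothetical 4-dimensional embeddings for the four modalities.
rng = np.random.default_rng(0)
visual, audio, text, behavior = (rng.normal(size=4) for _ in range(4))
fused = hierarchical_fusion([visual, audio, text, behavior],
                            gate_scores=np.array([0.5, 0.2, 0.2, 0.1]))
score = variational_classifier(fused, thetas=rng.normal(size=4))
print(round(score, 3))  # sign of <Z> could be thresholded as real vs. fake
```

In the paper's full architecture these embeddings would be produced by the CNN, transformer, and behavioral models, and the circuit would run on a quantum simulator or device with trainable parameters; the sketch only shows the data flow of fusion followed by a variational readout.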
Paper Presenter
V. Abarna
Virtual Room D, Bangkok, Thailand
