Friday April 10, 2026 9:30am - 11:30am GMT+07

Authors - Upendra Pratap Singh, Akshay Anand
Abstract - The rapid proliferation of Internet of Things (IoT) systems has led to the widespread adoption of artificial intelligence for autonomous sensing, prediction, and decision-making across critical application domains. While these AI-driven IoT systems achieve high operational efficiency, their increasing reliance on complex and opaque models raises serious concerns regarding transparency, trust, accountability, and regulatory compliance. These concerns are particularly acute in distributed IoT environments, where decisions are made across heterogeneous devices under resource constraints. Existing explainable artificial intelligence (XAI) approaches largely focus on centralized or standalone machine learning models and fail to address the unique challenges of IoT systems, including deployment heterogeneity, dynamic data distributions, privacy requirements, and real-time decision-making. As a result, explanations are often disconnected from system behavior, lack consistency across layers, and provide limited support for trust assessment and human oversight. This paper presents a comprehensive survey of explainable AI techniques for trustworthy IoT systems and introduces a deployment-aware reference architecture that integrates explainability, trust evaluation, privacy preservation, and human-in-the-loop feedback across edge, fog, and cloud intelligence layers. The architecture emphasizes localized explanation generation, context-aware refinement, explanation validation, and multi-metric trust assessment, enabling explanations to evolve alongside system behavior. By explicitly coupling explanation quality with trust monitoring and adaptive feedback, the proposed framework bridges the gap between predictive performance and operational trustworthiness in distributed IoT environments.
The survey highlights key research trends, identifies critical gaps in current methodologies, and outlines future directions for scalable, reliable, and human-centered explainable IoT systems. By positioning explainability as a core system property rather than a post-hoc add-on, this work provides a foundation for designing AI-enabled IoT systems that are transparent, accountable, and trustworthy by design.
Virtual Room C Bangkok, Thailand