Authors - Jayanthi J, P. Uma Maheswari, S. Uma Maheswari, Arun Kumar, Karishma V R

Abstract - The rapid migration of artificial intelligence from cloud platforms to edge-based Internet of Things (IoT) environments has intensified the demand for transparent and trustworthy decision-making under severe resource constraints. While edge intelligence enables low-latency and privacy-preserving analytics, the opacity of deployed models limits user trust, accountability, and regulatory acceptance. Existing explainability techniques largely assume cloud-level resources, making them unsuitable for real-time, energy-limited edge deployments. To close this gap, this work develops a resource-aware, adaptive interpretable-intelligence framework designed specifically for constrained IoT systems. The proposed framework integrates interpretability directly into the decision-making process, generating faithful, lightweight explanations alongside predictions while dynamically adjusting to operational context and runtime constraints. Hierarchical explanation control further balances local responsiveness with system-level insight aggregation and secure governance. By treating explainability as a first-class system capability, the framework aligns transparency with efficiency and scalability. The study shows that adaptive, deployment-aware explainability can substantially improve the operational reliability and trustworthiness of edge intelligence. These insights establish a foundation for building accountable and interpretable AI systems in real-world IoT environments.