Authors - Fernando Latorre, Ivan Becerro, Nuria Sala

Abstract - The rapid expansion of interconnected networks, cloud infrastructures, and IoT environments has significantly increased the complexity of modern cyber threats, necessitating intelligent and adaptive Intrusion Detection Systems (IDS). While machine learning and deep learning techniques have improved detection accuracy, their black-box nature limits transparency, interpretability, and analyst trust in high-stakes cybersecurity environments. This lack of explainability hinders forensic validation, regulatory compliance, and resilience against adversarial manipulation. To address these challenges, this paper presents a comprehensive survey of Explainable Artificial Intelligence (XAI) techniques applied to IDS and proposes a reference hybrid architecture that integrates deep packet inspection, dual-model detection, multi-level explanation mechanisms, adversarial robustness monitoring, and governance-aware logging. The architecture combines high-performance deep learning models with interpretable components and an explanation fusion engine to balance detection accuracy with transparency. Furthermore, security implications such as explanation leakage and adversarial manipulation are analyzed. The study highlights evaluation metrics, open challenges, and future research directions toward trustworthy and transparent cybersecurity systems. The findings emphasize that secure explainability is essential for next-generation IDS deployment in distributed and resource-constrained environments.