Authors: Konstantina Rigou, George Dimitrakopoulos

Abstract: The rapid adoption of Artificial Intelligence (AI) in high-impact domains (healthcare, finance, justice) creates an urgent need for systems that are legally compliant, explainable, ethical, and transparent. Decision Support Systems (DSS) aim to assist managerial and professional decision-making, yet few works translate legal and ethical principles into concrete technical design constraints for explainable AI (XAI). This paper proposes a Legal Explainability Framework (LEF) that maps legal obligations (General Data Protection Regulation, European Union Artificial Intelligence Act) and ethical principles to measurable XAI requirements and implementation steps, and demonstrates the approach with a prototype using an open legal dataset derived from judgments of the European Court of Human Rights (ECtHR). The results show that legally compliant XAI is not merely a normative aspiration, but a technically feasible and practically implementable design paradigm.