Authors: Kingston Pal Thamburaj, Ramesh Mercedes Premalatha, Mukhlis Abu Bakar

Abstract: Large language models are increasingly used to generate educational explanations, but hallucinations, uneven language quality, and untraceable confidence can introduce misconceptions. These risks are amplified in bilingual classrooms, where meaning must remain aligned across languages and low-resource language support is limited. This paper introduces a trust-aware multi-agent validation architecture for bilingual Tamil-Malay AI-generated educational content. The architecture decomposes validation into specialized agents that verify factual claims via evidence-grounded retrieval, assess linguistic well-formedness and terminological consistency, estimate pedagogical suitability for a target grade level, detect hallucination and bias risk, and measure cross-lingual semantic consistency to identify drift between Tamil and Malay explanations. Agent outputs are combined through a transparent aggregation mechanism to produce an overall bilingual trust score and an interpretable validation report with actionable revision cues. A benchmark construction protocol and evaluation methodology are presented to quantify claim-level correctness, cross-lingual agreement, and trust-score calibration against expert annotations. The proposed approach supports human-AI collaborative content authoring and intelligent tutoring workflows, improving the reliability and inclusiveness of bilingual education systems in Southeast Asian contexts.
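The transparent aggregation step described in the abstract can be sketched as follows. This is an illustrative assumption only: the agent names, the weights, and the weighted-mean rule are hypothetical choices for exposition, since the abstract does not specify the actual aggregation mechanism or report format.

```python
# Hypothetical sketch of transparent trust-score aggregation across
# validation agents. Agent names, weights, and the weighted-mean rule
# are illustrative assumptions, not the paper's specified method.
from dataclasses import dataclass


@dataclass
class AgentScore:
    name: str       # which validation agent produced this score
    score: float    # confidence in [0, 1]
    rationale: str  # human-readable justification for the report


def aggregate_trust(scores, weights):
    """Combine agent scores into one trust score plus an interpretable report.

    The report lists each agent's score, weight, and rationale, ordered
    from weakest to strongest, so the lowest-scoring dimensions surface
    first as revision cues.
    """
    total_w = sum(weights[s.name] for s in scores)
    trust = sum(weights[s.name] * s.score for s in scores) / total_w
    report = [
        {"agent": s.name, "score": s.score,
         "weight": weights[s.name], "note": s.rationale}
        for s in sorted(scores, key=lambda s: s.score)
    ]
    return trust, report


# Example agent outputs for one bilingual explanation (made-up values).
scores = [
    AgentScore("factual", 0.90, "claims grounded in retrieved evidence"),
    AgentScore("linguistic", 0.85, "terminology consistent in Tamil and Malay"),
    AgentScore("pedagogical", 0.80, "matches target grade level"),
    AgentScore("hallucination", 0.95, "no unsupported statements detected"),
    AgentScore("cross_lingual", 0.70, "minor semantic drift between versions"),
]
weights = {"factual": 0.30, "linguistic": 0.15, "pedagogical": 0.15,
           "hallucination": 0.20, "cross_lingual": 0.20}

trust, report = aggregate_trust(scores, weights)
```

Because the weights are explicit and each report entry carries its rationale, a teacher reviewing the output can see which dimension (here, cross-lingual consistency) drags the trust score down and revise accordingly.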