Authors - Felix Kabwe, Jackson Phiri

Abstract - The growth of Open Educational Resources (OER) has created a paradox of abundance, causing "academic infoxication," in which students struggle to find content aligned with their competency levels. Traditional recommender systems often fail to interpret pedagogical context effectively. This paper presents the implementation and empirical validation of OPMAS, a multi-agent architecture orchestrated with LangGraph that uses Large Language Models (LLMs) to automate the curation and adaptation of educational resources. Unlike linear chatbots, OPMAS employs a state graph of specialized agents (Router, Query, Search, Adaptation) to map user queries to European competency frameworks such as DigComp. The system, built on Gemini 2.5 Flash with a hybrid retrieval strategy, was validated through a Minimum Viable Product (MVP). Results show a functional success rate of 95% in complex reasoning flows and a semantic precision of 0.77. Although the deep reasoning process introduces an average latency of 96 seconds, the system successfully prioritizes pedagogical relevance and content adaptation over immediate retrieval, demonstrating the technical viability of agentic architectures for personalized education.
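
The agent pipeline described in the abstract (Router, Query, Search, Adaptation) can be illustrated with a minimal plain-Python sketch. This is not the OPMAS implementation and does not use the LangGraph library; the node names mirror the paper's agents, but all state fields, logic, and the linear traversal are illustrative placeholders.

```python
from typing import Callable, TypedDict

class AgentState(TypedDict, total=False):
    query: str        # raw user request
    competency: str   # mapped competency level (illustrative placeholder)
    resources: list   # candidate OER hits (illustrative placeholder)
    answer: str       # adapted final response

# Stand-in agent nodes. Names follow the paper's Router/Query/Search/Adaptation
# agents; the bodies are placeholders, not the OPMAS logic.
def router_agent(state: AgentState) -> AgentState:
    # Assign a competency label the downstream agents can adapt against.
    state["competency"] = "DigComp-style level (illustrative)"
    return state

def query_agent(state: AgentState) -> AgentState:
    # Normalize the user query before retrieval.
    state["query"] = state["query"].strip().lower()
    return state

def search_agent(state: AgentState) -> AgentState:
    # A real system would run hybrid retrieval here; this returns a stub hit.
    state["resources"] = [f"OER hit for: {state['query']}"]
    return state

def adaptation_agent(state: AgentState) -> AgentState:
    # Adapt retrieved content to the mapped competency level.
    state["answer"] = (
        f"Adapted {len(state['resources'])} resource(s) "
        f"to {state['competency']}"
    )
    return state

# A simple linear traversal; LangGraph would instead compile these nodes
# and their (possibly conditional) edges into an executable state graph.
PIPELINE: list[Callable[[AgentState], AgentState]] = [
    router_agent, query_agent, search_agent, adaptation_agent,
]

def run(query: str) -> AgentState:
    state: AgentState = {"query": query}
    for node in PIPELINE:
        state = node(state)
    return state

if __name__ == "__main__":
    print(run("Find beginner resources on data literacy")["answer"])
```

The sketch shows why a shared, typed state object matters in such architectures: each agent reads only the fields it needs and writes its contribution, so agents stay independently testable while the orchestrator controls the flow.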