Thursday April 9, 2026 12:15pm - 2:15pm GMT+07

Authors - Nevil Dhinoja, Shubh Patel, Binal Kaka
Abstract - Model-agnostic meta-learning (MAML) suffers from gradient conflicts, computational complexity, and optimization instability. This work proposes a structured design framework that integrates three complementary mechanisms: task-aware gradient modulation, meta-level regularization, and adaptive optimization management, with the goal of improving the stability and robustness of MAML-based optimization. While empirical evaluation is left to future research, the framework provides a solid foundation for the systematic development of more reliable and scalable meta-learning systems.
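To make the abstract's terminology concrete, the sketch below shows a minimal first-order MAML loop on synthetic linear-regression tasks, with a simple L2 penalty on the meta-parameters standing in for "meta-level regularization". All function names, the choice of regularizer, and the hyperparameters are illustrative assumptions; the paper's actual formulation (and its task-aware gradient modulation and adaptive optimization components) is not specified in this abstract.

```python
import numpy as np

def task_loss(w, X, y):
    """Mean squared error of a linear model y ~ X @ w."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def task_grad(w, X, y):
    """Gradient of the MSE above with respect to w."""
    r = X @ w - y
    return X.T @ r / len(y)

def maml_step(meta_w, tasks, inner_lr=0.1, outer_lr=0.05, reg=1e-3):
    """One first-order MAML outer update with an L2 meta-regularizer.

    Each task is a (X_support, y_support, X_query, y_query) tuple.
    The regularizer `reg * meta_w` is a hypothetical stand-in for the
    paper's meta-level regularization mechanism.
    """
    outer_grad = np.zeros_like(meta_w)
    for X_s, y_s, X_q, y_q in tasks:
        # Inner loop: one gradient step on the task's support set.
        adapted = meta_w - inner_lr * task_grad(meta_w, X_s, y_s)
        # First-order approximation: query-set gradient at adapted params.
        outer_grad += task_grad(adapted, X_q, y_q)
    outer_grad /= len(tasks)
    # Meta-level regularization: penalize large meta-parameters.
    outer_grad += reg * meta_w
    return meta_w - outer_lr * outer_grad
```

In this toy setting, repeated calls to `maml_step` drive the meta-parameters toward an initialization from which one inner gradient step fits each task well; the regularization term damps the meta-update, which is one simple way such a term can stabilize the outer loop.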
Virtual Room A, Bangkok, Thailand

