Friday April 10, 2026 12:15pm - 2:15pm GMT+07

Authors - Ei Sandar Myint, Khin Mar Soe
Abstract - Hallucination occurs when large language models (LLMs) produce information that is incorrect or unsupported by facts, posing a significant challenge to the safe and reliable use of these models. This review summarizes recent research on hallucination detection and prevention and identifies important directions for future work. It emphasizes the need for fine-grained detection methods that can pinpoint exactly where errors occur, as well as techniques for handling hallucinations in long and complex responses. Analysis of model internal states is highlighted as a key approach to understanding the causes of hallucinations. Emerging challenges in multi-modal models that process both text and images are discussed, along with the growing focus on preventing hallucinations rather than only detecting them after generation. Additionally, the importance of addressing hallucination in multilingual and low-resource language settings is underscored. This review aims to support the development of more trustworthy and inclusive language technologies.
Paper Presenter
Virtual Room D Bangkok, Thailand

