Authors - Subhrajyoti Sunani, Prasant Kumar Sahu, Debalina Ghosh

Abstract - Topic detection is an essential task in Natural Language Processing (NLP) that enables the automatic classification of text into predefined categories. However, research on the Myanmar language remains limited due to the lack of annotated corpora and its linguistic challenges. In this study, word-level segmentation is employed to capture semantically meaningful units for topic detection, such as အနုပညာ (art), ဥပဒေ (law), အားကစား (sports), and နည်းပညာ (technology). The study trains and evaluates the system on a dataset of news articles categorized into 12 predefined topics: agriculture, art, crime, disaster, economy, education, foreign affairs, health, politics, religion, sports, and technology. A variety of models were examined, covering traditional machine-learning baselines, a deep learning sequence model, and transformer-based architectures. Logistic Regression and Naïve Bayes achieved accuracies of 0.73 and 0.63, respectively, with Logistic Regression proving the stronger linear baseline. The LSTM model, which captures sequential dependencies, improves performance further with an accuracy of 0.85. Transformer-based approaches deliver the best results: DistilBERT achieves 0.87 accuracy, while word-level mBERT reaches a peak accuracy of 0.95, demonstrating the effectiveness of word-level approaches for Myanmar topic detection. Overall, the findings show that while traditional models offer useful baselines, deep learning and especially transformer-based architectures provide substantial gains in accuracy and reliability for Myanmar topic detection. This research highlights the effectiveness of modern transformer-based methods for low-resource language applications and sets a benchmark for future work in Myanmar NLP.
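To make the Naïve Bayes baseline concrete, the sketch below implements a minimal multinomial Naïve Bayes topic classifier over word-segmented tokens. This is not the paper's implementation; the toy English tokens and the two-topic training set are purely illustrative stand-ins for the segmented Myanmar words, and in practice a Myanmar word segmenter would produce the token lists.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model.

    docs: list of (tokens, label) pairs, where tokens is a list of
    word-segmented units (e.g. the output of a Myanmar word segmenter).
    """
    label_counts = Counter()                 # document count per topic
    word_counts = defaultdict(Counter)       # per-topic word frequencies
    vocab = set()
    for tokens, label in docs:
        label_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return label_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Return the most probable topic for a word-segmented document."""
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokens:
            score += math.log((word_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical toy data: English stand-ins for segmented Myanmar words.
docs = [
    (["goal", "match", "team"], "sports"),
    (["law", "court", "judge"], "law"),
    (["match", "score"], "sports"),
]
model = train_nb(docs)
print(predict_nb(model, ["match", "team"]))   # classified as "sports"
```

The stronger baselines in the study (Logistic Regression over the same word-level features) and the transformer models (DistilBERT, mBERT) replace this hand-rolled scoring with learned weights and contextual embeddings, respectively, but operate on the same word-segmented input.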