Authors - Ying Tang, Chuanchen BI

Abstract - This article presents a comprehensive analysis of methods and recent research in sentiment analysis of Uzbek-language social media posts. A balanced corpus of 100,000 posts from Telegram, Instagram, Twitter, and Facebook was constructed, with positive, neutral, and negative classes equally represented. The data underwent thorough preprocessing: cleaning, normalization, tokenization, stop-word removal, stemming, and lemmatization. The evaluated models include Naive Bayes, Support Vector Machines (SVM), Conditional Random Fields (CRF), Long Short-Term Memory networks (LSTM), and transformer-based architectures such as BERT and RoBERTa; their accuracy, F1-score, and runtime performance were compared. Experimental results show that transformer-based models achieved the highest accuracy (~92%), followed by LSTM (~90%) and SVM (~88%), while Naive Bayes, though simple, served as the baseline (~78% accuracy). The literature review surveys prior work on Uzbek sentiment analysis, emphasizing the importance of corpus creation and of accounting for language-specific features. Overall, transformer models provide the highest accuracy, whereas classical methods remain competitive in low-resource settings. The article concludes with a discussion of promising research directions and practical applications of Uzbek-language sentiment analysis.
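The preprocessing steps named above (cleaning, normalization, tokenization, stop-word removal) can be sketched as a minimal pipeline. This is an illustrative sketch only, not the authors' implementation: the stop-word entries are assumed sample Uzbek function words, and a real system would add an Uzbek-specific stemmer or lemmatizer as a final stage.

```python
import re

# Assumed sample Uzbek stop words for illustration; the paper's actual
# stop-word list is not given in the abstract.
UZBEK_STOP_WORDS = {"va", "bilan", "uchun", "ham"}

def preprocess(post: str) -> list[str]:
    """Clean, normalize, tokenize, and filter one social media post."""
    # Cleaning: strip URLs, @mentions, and #hashtags common in social posts
    text = re.sub(r"https?://\S+|[@#]\w+", " ", post)
    # Normalization: lowercase the remaining text
    text = text.lower()
    # Tokenization: word characters, keeping the apostrophe used in
    # Uzbek Latin orthography (e.g. "zo'r")
    tokens = re.findall(r"[\w']+", text)
    # Stop-word removal; stemming/lemmatization would follow here
    return [t for t in tokens if t not in UZBEK_STOP_WORDS]

print(preprocess("Bu film juda yaxshi va zo'r! https://t.me/x @kanal"))
```

Running the example drops the URL, the mention, and the stop word "va", leaving the content-bearing tokens for downstream feature extraction.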