Authors - Dao Khanh Duy, Nguyen Hoang Hieu, Karn Nasritha, Khanista Namee

Abstract - This research examines the effectiveness of four state-of-the-art transformer-based models (LaBSE, mBERT, XLM-RoBERTa, and mT5) for cross-lingual sentiment analysis of railway passenger feedback. We focus on transferring knowledge from high-resource languages (English, French, Vietnamese, and Korean) to Thai, a low-resource language in this domain. To address data imbalance and scarcity, the study investigates transfer learning strategies ranging from zero-shot to "ultra-shot" (using only 60 labeled samples) and high-shot paradigms. Experimental results demonstrate that while generative models such as mT5 perform well in zero-shot settings, the LaBSE model achieves a superior accuracy of 94.65% under high-shot fine-tuning. Notably, our proposed ultra-shot strategy enables LaBSE to reach 90.42% accuracy with minimal data, effectively bridging the performance gap without extensive annotation. These findings suggest a strategic approach for AI systems in railway operations: rather than investing in large-scale datasets or computationally heavy models, operators can implement the ultra-shot strategy by fine-tuning robust sentence-embedding models such as LaBSE with a small set of gold-standard data to achieve optimal performance and cost-efficiency.
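As a rough illustration of the ultra-shot idea described above, the sketch below trains a lightweight classifier head on sentence embeddings from only 60 labeled samples. This is not the paper's implementation: the `encode` function here is a random-vector placeholder standing in for a frozen multilingual encoder (a real run would use, e.g., `SentenceTransformer("sentence-transformers/LaBSE").encode(...)`), and the sentences, labels, and dimensions are all hypothetical.

```python
# Hedged sketch of an "ultra-shot" setup: a frozen sentence encoder
# plus a small supervised head trained on 60 gold-standard samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(sentences, dim=768, seed=0):
    """Placeholder for a frozen multilingual sentence encoder such as
    LaBSE; here it returns deterministic random vectors of the same
    shape (n_sentences, dim) purely so the sketch is runnable."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(len(sentences), dim))

# 60 labeled feedback samples (hypothetical; 0 = negative, 1 = positive).
sentences = [f"passenger feedback sample {i}" for i in range(60)]
labels = np.array([i % 2 for i in range(60)])

# Embed the small labeled set with the frozen encoder, then fit only
# the classification head -- the encoder weights are never updated.
X = encode(sentences)
head = LogisticRegression(max_iter=1000).fit(X, labels)

# Predict on unseen text encoded the same way.
preds = head.predict(encode(["new Thai passenger comment"], seed=1))
print(preds.shape)  # one label per input sentence
```

In practice the same pattern applies with real LaBSE embeddings: because only the small head is trained, 60 well-curated samples can be enough to adapt the model to the target domain and language.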