Saturday April 11, 2026 12:15pm - 2:15pm GMT+07

Authors - Sabo Hermawan, Ryna Parlyna, Surya Anugrah, Inkreswari Retno Hardini, Bayu Suhendry, Ria Rahma Nida, Eka Dewi Utari, Nur Lisa Rahmaningtyas, Cornellius Seno Adriano, Alifah Nur Rahmawati
Abstract - This research investigates the performance of transformer-based models (BERT, ALBERT, and RoBERTa) fine-tuned for sentiment classification on the Women’s Clothing E-Commerce Reviews dataset. The task was carried out under both 3-class and 5-class sentiment classification schemes, with each model trained under identical conditions and evaluated comprehensively. In the 3-class task, RoBERTa achieved an F1-score of 91.7% and an AUC of 0.967, surpassing previous best-reported results; BERT was also competitive, with an F1-score of 90.2% and an AUC of 0.951. These results establish the superior generalisation ability and discriminative power of transformer models, particularly RoBERTa, in classifying sentiment from unstructured review text. ALBERT, while computationally efficient, showed reduced accuracy and AUC, indicating that its extensive parameter sharing can hinder fine-grained sentiment resolution. The models exhibited broadly consistent behaviour in the 5-class setting, with RoBERTa maintaining its lead; a modest decline in F1 and AUC reflects the greater difficulty introduced by finer class granularity. This research validates transformer architectures in a commercial Natural Language Processing scenario, demonstrating the superiority of transformer-based models over traditional baselines in both accuracy and robustness.
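
For readers who want to try a comparable setup, the following is a minimal sketch of fine-tuning RoBERTa for 3-class sentiment classification with the Hugging Face Transformers library. The checkpoint, label mapping, hyperparameters, and example reviews are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal fine-tuning sketch: RoBERTa for 3-class sentiment classification.
# The "roberta-base" checkpoint, the label mapping, and the learning rate
# are assumptions for illustration, not the paper's actual settings.
import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3  # assumed mapping: negative / neutral / positive
)

# Toy review texts with assumed labels: 0 = negative, 1 = neutral, 2 = positive
texts = [
    "Absolutely love this dress, fits perfectly!",
    "The fabric was okay but the sizing ran small.",
    "Terrible quality, returned it immediately.",
]
labels = torch.tensor([2, 1, 0])

batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")

# One illustrative training step; a real run would loop over the dataset
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # computes cross-entropy over 3 classes
outputs.loss.backward()
optimizer.step()

# Inference: argmax over the class logits gives the predicted sentiment
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds)
```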
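The reported metrics (F1-score and AUC) can be computed as sketched below with scikit-learn. The weighted-F1 averaging and one-vs-rest multi-class AUC are assumptions, since the abstract does not state which variants the authors used; the labels and probabilities here are made-up placeholders.

```python
# Sketch of computing multi-class F1 and AUC with scikit-learn.
# Averaging strategy ("weighted" F1, one-vs-rest AUC) is an assumption.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

y_true = np.array([2, 1, 0, 2, 0])   # placeholder gold sentiment labels
probs = np.array([                   # placeholder model class probabilities
    [0.05, 0.10, 0.85],
    [0.20, 0.60, 0.20],
    [0.70, 0.20, 0.10],
    [0.10, 0.30, 0.60],
    [0.55, 0.35, 0.10],
])
y_pred = probs.argmax(axis=1)        # hard predictions from the probabilities

f1 = f1_score(y_true, y_pred, average="weighted")
# Multi-class AUC computed one-vs-rest over the predicted probabilities
auc = roc_auc_score(y_true, probs, multi_class="ovr")
print(f"F1: {f1:.3f}  AUC: {auc:.3f}")
```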
Paper Presenter

Sabo Hermawan

Indonesia

Virtual Room A, Bangkok, Thailand
