Friday April 10, 2026 12:15pm - 2:15pm GMT+07

Authors - Hileni Ihambo, Fungai Bhunu Shava, Gabriel Tuhafeni Nhinda
Abstract - Fine-tuning large language models remains costly, and Parameter-Efficient Fine-Tuning (PEFT) techniques have emerged to make the process feasible on limited hardware. Among them, IA3 stands out for its extreme simplicity: it tunes only element-wise scaling vectors. This design, however, restricts the model to re-weighting features it already knows; it cannot form new ones. In this paper, we present SAMA (Spectral-Aware Minimal Adaptation), an extension of IA3 that introduces a single rank-1 update whose direction is derived from the pre-trained weights through Singular Value Decomposition. Each adapted layer gains only 4d extra parameters (3,072 for d=768), roughly one quarter of what LoRA requires at rank 8. We benchmark SAMA against five baselines (LoRA, DoRA, AdaLoRA, QLoRA, and IA3) across BERT, GPT-2, and Flan-T5 on twelve diverse NLP tasks under a low-resource constraint of 1,000 training samples per task. On the decoder-only GPT-2, SAMA lowers perplexity by 26–34% relative to IA3 on both WikiText-2 and Penn Treebank. On BERT's RTE task, SAMA reaches 53.7% accuracy, surpassing IA3 (47.2%) and LoRA (52.6%) despite using 63% fewer trainable parameters than LoRA. We investigate this architecture dependence in detail and distil practical guidelines to help practitioners choose the right PEFT method for their setting.
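
The abstract leaves SAMA's exact parameterisation open, so the sketch below is one plausible PyTorch reading rather than the authors' implementation: a pair of IA3-style element-wise scaling vectors plus a trainable rank-1 vector pair initialised from the top singular vectors of the frozen weight, which matches the stated budget of 4d trainable parameters per layer (3,072 at d=768) up to one scalar gate. All names here (SAMALinear, scale_in, alpha, and so on) are hypothetical.

    import torch
    import torch.nn as nn

    class SAMALinear(nn.Module):
        """Hypothetical SAMA-style layer, inferred from the abstract:
        a frozen linear map, IA3-style input/output scaling, and a single
        rank-1 update whose direction starts at the top SVD pair."""

        def __init__(self, weight: torch.Tensor, bias=None):
            super().__init__()
            d_out, d_in = weight.shape
            self.register_buffer("weight", weight)  # pre-trained weight, frozen
            self.register_buffer("bias", bias)      # frozen bias (may be None)
            # IA3-style element-wise scaling: d_in + d_out trainable values.
            self.scale_in = nn.Parameter(torch.ones(d_in))
            self.scale_out = nn.Parameter(torch.ones(d_out))
            # Rank-1 direction initialised from the SVD of the frozen weight,
            # then trained: another d_out + d_in values (4d total for square W).
            U, _, Vh = torch.linalg.svd(weight, full_matrices=False)
            self.u = nn.Parameter(U[:, 0].clone())
            self.v = nn.Parameter(Vh[0, :].clone())
            # Zero-initialised gate so the adapted layer matches the
            # pre-trained model at step 0 (one scalar beyond the 4d count).
            self.alpha = nn.Parameter(torch.zeros(()))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = x * self.scale_in
            out = x @ self.weight.T
            # Rank-1 correction: alpha * u (v . x), applied per token.
            out = out + self.alpha * (x @ self.v).unsqueeze(-1) * self.u
            if self.bias is not None:
                out = out + self.bias
            return out * self.scale_out

Wrapping each attention and feed-forward projection of a frozen backbone in such a layer, and training only scale_in, scale_out, u, v, and alpha, would reproduce the parameter budget the abstract describes while keeping IA3's re-weighting behaviour intact.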
Paper Presenter
Virtual Room G, Bangkok, Thailand
