Authors - Mazdak Zamani, Mohammad Naderi Dehkordi, Riham Hilal, Azizah Abdul Manaf, Achyut Shankar, Touraj Khodadadi

Abstract - The rapid growth of edge computing, including Internet-of-Things (IoT) nodes, wearable devices, and embedded cyber-physical systems, has increased the need to deploy machine-learning (ML) models that operate reliably under severe resource constraints. Although traditional deep-learning models achieve high predictive accuracy, they usually require substantial computational resources, memory, and power, which makes them infeasible in these settings. This paper provides a thorough analysis of the accuracy-efficiency trade-offs of lightweight ML models adapted to resource-constrained devices. We compare classical and modern lightweight classification methods: linear models, tree-based learners, and shallow and compressed neural networks, across performance metrics covering accuracy, inference latency, memory footprint, and energy usage. Experimental results on commonly used benchmark datasets show that lightweight models can achieve competitive accuracy at significantly reduced computational overhead. The results also provide practical recommendations for selecting and designing ML models for edge intelligence, real-time decision-making, and low-power AI applications.