Authors - Felix Kabwe, Jackson Phiri Abstract - The growth of Open Educational Resources (OER) has created a paradox of abundance, causing “academic infoxication” where students struggle to find content aligned with their competency levels. Traditional recommender systems often fail to interpret pedagogical context effectively. This paper presents the implementation and empirical validation of OPMAS, a multi-agent architecture orchestrated with LangGraph that utilizes Large Language Models (LLMs) to automate the curation and adaptation of educational resources. Unlike linear chatbots, OPMAS employs a state-graph of specialized agents (Router, Query, Search, Adaptation) to map user queries to European competency frameworks like DigComp. The system, built using Gemini 2.5 Flash and a hybrid retrieval strategy, was validated through a Minimum Viable Product (MVP). Results demonstrate a functional success rate of 95% in complex reasoning flows and a semantic precision of 0.77. Although the deep reasoning process introduces an average latency of 96 seconds, the system successfully prioritizes pedagogical relevance and content adaptation over immediate retrieval, proving the technical viability of agentic architectures for personalized education.
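The state-graph orchestration this abstract describes (specialized Router, Query, Search, and Adaptation agents with conditional routing, rather than a linear chatbot pipeline) can be sketched in plain Python. The agent names follow the abstract, but the routing rules, state fields, and placeholder outputs below are illustrative assumptions, not OPMAS's actual LangGraph implementation.

```python
# Minimal sketch of a state-graph of agents with conditional edges,
# loosely mirroring the Router -> Query -> Search -> Adaptation flow.
# All routing heuristics and state fields are illustrative assumptions.

from typing import Callable, Dict

State = Dict[str, object]

def router_agent(state: State) -> State:
    # Decide which specialist handles the request (toy heuristic).
    state["route"] = "query" if state.get("needs_reformulation") else "search"
    return state

def query_agent(state: State) -> State:
    # Reformulate the query against a competency framework (placeholder).
    state["query"] = f"DigComp-aligned: {state['user_input']}"
    state["route"] = "search"
    return state

def search_agent(state: State) -> State:
    # Hybrid retrieval step (placeholder result).
    state["results"] = [f"OER matching '{state.get('query', state['user_input'])}'"]
    state["route"] = "adapt"
    return state

def adaptation_agent(state: State) -> State:
    # Adapt retrieved content to the learner's level (placeholder).
    state["output"] = f"adapted({state['results'][0]})"
    state["route"] = "end"
    return state

AGENTS: Dict[str, Callable[[State], State]] = {
    "router": router_agent, "query": query_agent,
    "search": search_agent, "adapt": adaptation_agent,
}

def run_graph(state: State) -> State:
    node = "router"
    while node != "end":
        state = AGENTS[node](state)
        node = state["route"]  # conditional edge chosen by the agent itself
    return state

print(run_graph({"user_input": "spreadsheets basics",
                 "needs_reformulation": True})["output"])
```

The point of the graph structure is that each agent can hand control to any other agent depending on the evolving state, which is what distinguishes this style from a fixed linear chain.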
Authors - Minal Deshmukh, Aakash Dabhade, Daksh Jethwa, Siddhi Jadhav, Ketki Khirsagar Abstract - In this paper, we outline the design and implementation of a novel electronic voting kiosk, dubbed BlockVote, which counters identity-related fraud and data tampering through biometric and blockchain-based approaches. The proposed system is a standalone embedded system running on an ESP32-S3 SoC-based microcontroller. It includes a touchscreen display for user input and an optical fingerprint sensor for identity verification. The collected biometric data and the voter's selection are combined and cryptographically signed to create a secure transaction, which is sent through a Node.js gateway to a secure Ethereum-based blockchain network. Coupling physical verification technologies with blockchain technology makes the proposed voting system more secure than traditional e-voting machines or e-voting websites. BlockVote is a hybrid security system in which hardware-based verification techniques are combined with blockchain-based data management in a power-saving, compact format. The prototype has demonstrated its functional viability, modular construction, and reliability in an embedded-systems setting. The experimental results demonstrate the system's high precision, low latency, and robustness against illegitimate use. The suggested framework demonstrates the practical feasibility of blockchain and biometric technology for building trustworthy electronic voting systems that can be used in both urban and rural areas.
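The "combine biometrics and vote selection into a signed transaction" step described in this abstract can be sketched as follows. HMAC-SHA256 stands in for the kiosk's actual on-device signature scheme (likely ECDSA, as used on Ethereum), and the field names, key handling, and payload layout are illustrative assumptions, not BlockVote's real wire format.

```python
# Sketch: hash the biometric, sign the vote payload, relay to a gateway.
# HMAC-SHA256 is a stand-in for the real signature scheme; all field names
# and the demo key are illustrative assumptions.

import hashlib, hmac, json, time

DEVICE_KEY = b"kiosk-demo-key"  # placeholder for a securely provisioned key

def make_vote_tx(fingerprint_template: bytes, candidate_id: str) -> dict:
    # Never put raw biometrics on-chain: store only a one-way digest.
    voter_digest = hashlib.sha256(fingerprint_template).hexdigest()
    payload = {"voter": voter_digest, "candidate": candidate_id,
               "ts": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return payload  # a gateway would forward this to the blockchain network

def verify_vote_tx(tx: dict) -> bool:
    body = json.dumps({k: v for k, v in tx.items() if k != "sig"},
                      sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tx["sig"])

tx = make_vote_tx(b"\x01\x02\x03", "candidate-7")
print(verify_vote_tx(tx))  # True for an untampered transaction
```

Hashing the fingerprint template before it leaves the device is the key design choice here: the chain records a commitment to the voter's identity, not the biometric itself.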
Authors - S.D.P. Abeysekara, J.A.D.N. Jayakody, K.A. Dilini T. Kulawansa Abstract - Breast cancer is the second most prevalent cancer globally and a leading cause of death among women. According to the World Health Organization, over 2.3 million new cases are diagnosed annually, emphasizing the need for early and accurate detection. In this work, a Wavelet-Driven Intelligent Model for Multi-Class Breast Cancer Diagnosis is proposed. Three-level wavelet decomposition is applied to the BreakHis data to extract wavelet-based features. These features were fed to artificial neural network classifiers, namely the Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) network, and to the machine learning classifier Random Forest (RF). Multi-class classification of breast tumours (binary, benign sub-types, 4 malignant sub-types) was performed. The experimental results show that RF outperformed RBF and MLP, achieving high accuracies of 94% for benign versus malignant, 97% for benign sub-type, and 92% for malignant sub-type classification. Deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are more effective when trained on large-scale datasets, but for small datasets and limited-resource environments, the proposed framework ensures an efficient and consistent diagnostic approach. In future, a prototype breast cancer alert system can be developed using a Raspberry Pi for real-time applications.
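The wavelet-feature idea in this abstract can be illustrated with a toy three-level 1-D Haar decomposition whose sub-band energies serve as classifier features. The paper applies 2-D wavelet decomposition to BreakHis histopathology images; this 1-D version is a simplified sketch of the same principle, not the authors' pipeline.

```python
# Toy three-level Haar decomposition; sub-band energies become features.
# The paper works on 2-D images; this 1-D sketch shows the principle only.

def haar_step(signal):
    # One Haar level: pairwise averages (approximation) and differences (detail).
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(sum(d * d for d in detail))  # detail-band energy
    feats.append(sum(a * a for a in approx))      # final approximation energy
    return feats

# Length must be divisible by 2**levels for this simple version.
print(wavelet_energy_features([1, 3, 2, 6, 5, 5, 8, 0]))
```

The resulting short, fixed-length feature vector is what makes shallow classifiers such as RF, MLP, and RBF viable on small datasets.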
Authors - Md. Shahidul Islam, Atiqur Rahman, Md. Murad Hossain Abstract - This study examines the influence of both demographic and natural factors on climate change risk perception in New Zealand. Using data from a nationally representative survey, the analysis applies exploratory factor analysis to construct a composite measure of risk perception, followed by correlation and regression modeling to evaluate the relative contribution of environmental exposure and human characteristics. The findings indicate that while natural factors such as temperature anomalies and extreme weather exposure significantly shape perceived risk, demographic variables including prior disaster experience, trust in scientific institutions, and media exposure exert a stronger overall influence. These results underscore the importance of incorporating social and behavioral dimensions into climate risk assessments and policy development to enhance public engagement and adaptive capacity.
Authors - Shreyas M S, Kumar P K, Venkateswara Rao Kolli Abstract - Newborns use crying as their main form of communication, and it reflects a great variety of physiological and emotional conditions. Despite the high potential of automated infant cry analysis for early diagnosis and caregiver support, real-life adoption remains low because of environmental noise, class imbalance, low interpretability, and high computational cost. This paper presents an efficient, interpretable, real-time infant cry classification system using a two-step hierarchical methodology. The first stage distinguishes cry from non-cry sounds to reduce false alarms caused by background noise. The second stage categorizes detected cries into a particular intent. An adaptive feature fusion strategy based on reinforcement learning assigns dynamic weights to cepstral, prosodic, and qualitative acoustic features, and SHAP-based explainability offers explicit feature interpretations. Data augmentation, SMOTE-Audio, and model pruning address the issues of class imbalance, noise robustness, and deployment constraints. Experimental evidence shows that the proposed approach outperforms single-feature baselines, remains stable in noisy environments, and attains significant parameter reduction without significant loss in performance, making real-time operation on resource-constrained devices possible. The system is evaluated on a publicly available infant cry dataset containing 889 audio samples of cry and non-cry signals across five categories of cry intent, recorded in realistic conditions.
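The two-stage hierarchy this abstract describes can be sketched as a simple gated pipeline. Both "models" below are trivial threshold rules on invented feature names; the paper's actual stages use trained classifiers over fused cepstral, prosodic, and qualitative features.

```python
# Sketch of the hierarchical decision: stage 1 gates out non-cry audio,
# stage 2 labels cry intent. All thresholds and feature names are
# illustrative assumptions, not the paper's trained models.

def stage1_is_cry(features: dict) -> bool:
    # Toy gate: treat high-energy, high-pitch segments as cries.
    return features["energy"] > 0.5 and features["pitch_hz"] > 300

def stage2_intent(features: dict) -> str:
    # Toy intent rule standing in for the trained intent classifier.
    return "hunger" if features["duration_s"] > 1.0 else "discomfort"

def classify(features: dict) -> str:
    # Only segments that pass stage 1 reach the intent stage; this gating
    # is what keeps background noise from triggering false intent labels.
    if not stage1_is_cry(features):
        return "non-cry"
    return stage2_intent(features)

print(classify({"energy": 0.8, "pitch_hz": 420, "duration_s": 1.6}))
print(classify({"energy": 0.2, "pitch_hz": 100, "duration_s": 3.0}))
```

The design benefit of the hierarchy is that the cheap first stage runs continuously, while the heavier intent model only runs on the small fraction of audio that is actually a cry.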
Authors - Md. Shahidul Islam, Md. Raihan Habib Abstract - Detecting structural breaks and anticipating volatility regimes in foreign exchange markets remain challenging due to the non-stationary and nonlinear nature of exchange rate dynamics. This study proposes a non-parametric framework for identifying structural breaks in the NZD/USD exchange rate by integrating sliding-window volatility estimation, concentration-bound-based change point detection, and wavelet-based time-frequency analysis. Volatility is first quantified using a moving-window approach and compared against a Hoeffding bound to detect extraordinary events. The resulting change points are used to segment the exchange rate series into statistically reliable sequences, which are subsequently analyzed using wavelet scalograms. Empirical results reveal a consistent three-regime structure in the wavelet domain, comprising post-event reaction, stable market behavior, and pre-event escalation phases. Non-parametric statistical tests confirm significant differences in volatility distributions across these regimes, with the pre-event regime exhibiting markedly higher variability and acting as a precursor to structural breaks. The findings demonstrate that wavelet coefficients contain informative signatures of impending market instability. Overall, the proposed framework provides an interpretable and robust approach for analyzing regime-dependent volatility dynamics and offers valuable insights for early warning and risk management in currency markets.
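The sliding-window volatility plus Hoeffding-bound step this abstract describes can be sketched as follows. The window size, confidence level delta, the way the bound is applied to volatilities, and the synthetic return series are all illustrative choices, not the paper's settings.

```python
# Sketch: rolling volatility compared against a Hoeffding concentration
# bound to flag extraordinary windows. Parameters and the synthetic data
# are illustrative assumptions.

import math

def rolling_vol(returns, window):
    # Sample standard deviation over each sliding window.
    vols = []
    for i in range(len(returns) - window + 1):
        w = returns[i:i + window]
        m = sum(w) / window
        vols.append(math.sqrt(sum((x - m) ** 2 for x in w) / (window - 1)))
    return vols

def hoeffding_eps(n, value_range, delta=0.05):
    # With probability >= 1 - delta, the mean of n values bounded within
    # value_range deviates from its expectation by less than eps.
    return value_range * math.sqrt(math.log(2 / delta) / (2 * n))

def detect_breaks(returns, window=20, delta=0.05):
    vols = rolling_vol(returns, window)
    rng = max(vols) - min(vols)
    baseline = sum(vols) / len(vols)
    eps = hoeffding_eps(window, rng, delta)
    # Flag end-indices of windows whose volatility deviates extraordinarily.
    return [i + window - 1 for i, v in enumerate(vols) if abs(v - baseline) > eps]

# Calm series with a burst of large moves in the middle.
series = [0.001] * 60 + [0.03, -0.04, 0.05, -0.03, 0.04] + [0.001] * 60
print(detect_breaks(series))
```

The flagged indices would then serve as the segment boundaries fed into the wavelet scalogram stage.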
Authors - Diego Perez-Lopez, Rodolfo Bojorque, Jorge Duenas-Lerin, Raul Lara-Cabrera Abstract - Accurate early detection of liver cancer remains a significant clinical challenge, primarily due to scarce annotated imaging data, inconsistencies in radiological interpretation, and the inherent opacity of deep learning models. To address these limitations, this study proposes a clinically informed, explainable deep learning framework designed specifically for low-annotation settings. The framework combines transfer learning with advanced visualization techniques, enabling both high diagnostic accuracy and medically meaningful outputs that integrate seamlessly into clinical workflows. Three pre-trained CNN architectures — ResNet-50, DenseNet-121, and EfficientNet-B4 — were adapted to liver cancer imaging through domain-specific fine-tuning. Model generalizability was reinforced by combining geometric data transformations with StyleGAN2-derived synthetic lesion generation. Model transparency was facilitated through Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP), while clinical trustworthiness was evaluated via predictive uncertainty quantification, subgroup bias analysis, and resistance to adversarial perturbations. The proposed framework was evaluated on the LiTS and TCGA-LIHC datasets, demonstrating a 15–20% improvement in accuracy over baseline models that consisted of standard convolutional neural networks trained from scratch without transfer learning or data augmentation. EfficientNet-B4 achieved 94.2% accuracy, 0.96 specificity, and an AUC-ROC of 0.978. Grad-CAM accurately highlighted tumor regions in 89.4% of cases, and Bayesian dropout identified 7.3% of predictions as uncertain. These findings demonstrate the framework’s potential for clinical deployment by balancing performance, transparency, and reliability.
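The Grad-CAM technique named in this abstract reduces to simple arithmetic once the feature maps and their gradients are in hand: each channel's weight is its spatially averaged gradient, and the class activation map is the ReLU of the weighted sum of feature maps. The shapes and values below are toy examples, not outputs of the paper's networks.

```python
# Numeric sketch of Grad-CAM: weights = global-average-pooled gradients,
# cam = ReLU(sum_k w_k * A_k). Toy 2-channel, 2x2 example.

def grad_cam(feature_maps, gradients):
    # feature_maps, gradients: [channels][height][width] nested lists.
    k = len(feature_maps)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Global-average-pool each gradient map to get one weight per channel.
    weights = [sum(sum(row) for row in gradients[c]) / (h * w) for c in range(k)]
    # Weighted sum of activation maps, then ReLU.
    return [[max(0.0, sum(weights[c] * feature_maps[c][i][j] for c in range(k)))
             for j in range(w)] for i in range(h)]

maps = [[[1.0, 0.0], [0.0, 2.0]],       # channel 0 activations
        [[0.0, 3.0], [1.0, 0.0]]]       # channel 1 activations
grads = [[[0.4, 0.4], [0.4, 0.4]],      # channel 0 gradients (avg 0.4)
         [[-0.2, -0.2], [-0.2, -0.2]]]  # channel 1 gradients (avg -0.2)
print(grad_cam(maps, grads))
```

In the paper's setting the resulting map is upsampled and overlaid on the liver scan, which is how the 89.4% tumor-localization agreement was assessed.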
Authors - Jutika Borah, Debarun Chakraborty, Bhabesh Deka, Rosy Sarmah, Siddeswara Bargur Linganna, Diptadhi Mukherjee, Ram Bilas Pachori, Mohit Khamele Abstract - Electroencephalogram (EEG) signal modeling for downstream tasks, such as classifying neurological states and identifying biomarkers, is essential for designing effective brain-computer interfaces. Conventional methods often treat EEG channels independently, overlooking inter-channel dependencies, while existing graph-based approaches address this limitation either through fixed electrode geometry or entirely data-driven connectivity. In this paper, we propose a graph representation framework that combines coherence-based spectral connectivity with domain-informed priors, such as anatomical structure and regional proximity, based on graph signal processing (GSP). The resulting representation embeds multichannel EEG signals as attributed graphs through graph convolutional networks (GCNN) to learn discriminative embeddings. Experimental results demonstrate that the hybrid framework enhances classification performance, with the proposed GCNN-deep model achieving the highest area under the receiver operating characteristic curve (AUC) across all datasets and reaching 93% on Dataset 1. These EEG datasets correspond to three independent populations and include recordings from both healthy individuals and patients with neurological disorders such as major depressive disorder (MDD) and epilepsy.
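The hybrid connectivity idea this abstract describes (data-driven spectral coherence blended with domain-informed priors) can be sketched as a weighted combination of two adjacency matrices followed by thresholding. The blend weight, threshold, and toy matrices below are illustrative assumptions, not the paper's construction.

```python
# Sketch: blend a coherence matrix with an anatomical-prior adjacency,
# then threshold to obtain graph edges. Alpha and threshold are
# illustrative choices.

def hybrid_adjacency(coherence, prior, alpha=0.6, threshold=0.5):
    n = len(coherence)
    blended = [[alpha * coherence[i][j] + (1 - alpha) * prior[i][j]
                for j in range(n)] for i in range(n)]
    # Keep an edge only where the blended weight clears the threshold;
    # zero the diagonal (no self-loops).
    return [[blended[i][j] if i != j and blended[i][j] >= threshold else 0.0
             for j in range(n)] for i in range(n)]

# Toy 3-channel example: coherence estimated from data, prior from anatomy.
coh = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
prior = [[1.0, 1.0, 0.0],
         [1.0, 1.0, 1.0],
         [0.0, 1.0, 1.0]]
print(hybrid_adjacency(coh, prior))
```

Note how the prior rescues the weakly coherent but anatomically adjacent pair (channels 1 and 2), which is the intended effect of mixing the two sources of connectivity before the GCNN consumes the graph.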
Authors - Samiksha Chougule, Kirti Satpute, Krishnraj Patil, Om Kumbhardare, Sumedha Patil Abstract - Rural communities face significant challenges in accessing essential healthcare services due to language barriers, limited health literacy, and insufficient medical support. Difficulties in understanding medical information, communicating symptoms, and interpreting diagnostic reports further restrict effective healthcare delivery. Moreover, unreliable internet connectivity limits the reach of conventional digital health platforms. This paper presents a Multilingual AI Health Assistant designed to operate on low-cost edge devices, enabling offline functionality to ensure continuous access and data privacy in low-connectivity areas. The proposed system integrates AI, ML, NLP, OCR, and speech recognition to allow users to interact in their native languages through text or voice. It analyzes user-reported symptoms to predict probable health conditions, translates complex medical reports and prescriptions into simplified, localized explanations, and provides recommendations for nearby healthcare facilities. Unlike internet-dependent telemedicine systems, this edge-based solution processes data directly on the device, safeguarding sensitive health information while maintaining reliability. By bridging linguistic and literacy gaps, the proposed assistant empowers rural populations with accessible and actionable healthcare insights, ultimately improving health outcomes in underserved regions.
Authors - Noor, Soumya Mukherjee, Shivraj Singh Yadav Abstract - The proliferation of deepfakes and generative AI tools has made it difficult to trust digital images. Images can easily be altered, and ownership needs to be established without revealing private information. Current systems have many limitations: they either rely on easy-to-change metadata or on cryptographic methods that are computationally costly, such as zk-SNARKs. To overcome these limitations, an authentication and verification model named ZKP-Guard is presented, based on a Dual-Lock architecture framework. The system verifies that an image is genuine using ECDSA signatures, and proves ownership through a custom Schnorr-based zero-knowledge proof protocol. The framework was tested on a dataset with a significant number of images and produced the desired results.
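The Schnorr-style proof of ownership named in this abstract can be sketched end to end: the prover demonstrates knowledge of a secret key x with public key y = g^x mod p without revealing x. The tiny group below is insecure by design, the Fiat-Shamir challenge is simplified, and this is an illustration of the protocol family only, not ZKP-Guard's actual code.

```python
# Sketch of a non-interactive Schnorr proof of knowledge (Fiat-Shamir).
# The demo group sizes are deliberately tiny and insecure; a real system
# uses cryptographically sized groups.

import hashlib, secrets

# p prime, q | p - 1, g of order q: here g = 2**((p-1)//q) mod p = 354.
p, q, g = 2267, 103, 354

def keygen():
    x = secrets.randbelow(q - 1) + 1        # secret "ownership" key
    return x, pow(g, x, p)                  # (private, public)

def prove(x):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                        # commitment
    c = int(hashlib.sha256(str(t).encode()).hexdigest(), 16) % q  # challenge
    s = (r + c * x) % q                     # response
    return t, s

def verify(y, t, s):
    c = int(hashlib.sha256(str(t).encode()).hexdigest(), 16) % q
    # Accept iff g^s == t * y^c (mod p), i.e. g^(r + cx) == g^r * g^(cx).
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
t, s = prove(x)
print(verify(y, t, s))  # True: ownership proven without revealing x
```

The verifier learns only that the prover knows x, never x itself, which is what lets image ownership be asserted without exposing private key material.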