Professor, School of Computing and Electrical Engineering and Chairperson of the Centre for Human Computer Interaction, Indian Institute of Technology (IIT), India
Authors - Hector Rafael Morano Okuno Abstract - Mechatronics is an interdisciplinary field that draws on mechanics, electronics, and computer science. In recent years, the term biomechatronics has been used with increasing frequency; it is also a multidisciplinary field, one that involves the biological sciences and, therefore, bioinformatics. With the development of AI, bioinformatics provides data to biomechatronic systems, enabling applications ranging from agriculture to medicine. This article explores how biomechatronics and CFD simulations can help monitor a person's health status. The objectives of this research were: 1) to determine whether blood flow velocity profiles can be obtained using biomarkers such as hemoglobin, fibrinogen, and low-density lipoprotein (LDL), among others, together with CFD simulations; and 2) to investigate whether the information from CFD simulations can be used to feed a biomechatronic system that monitors a person's health condition. Among the results, it was found that models are needed to relate the main biomarkers in order to determine a person's state of health, together with suitable sensors to measure each variable according to the orientation of the intended application, for example, physical training or nutrition monitoring.
Authors - Kunam Subba Reddy, Mangavarapu Jahnavi, Kotte Hima Teja, Shaik Kathamma Basheerun, Nama Adarsh Abstract - Proper estimation of battery state of charge (SOC), state of health (SOH), and state of power (SOP) is vital to the safe and efficient operation of photovoltaic (PV)-battery energy storage systems, particularly under highly dynamic profiles in which both charging and discharging take place. In this paper, a clear comparative analysis of classical, model-based, and machine-learning-based estimation techniques is performed on a common, highly challenging 24-h PV + load current profile that simulates realistic residential microgrid operation. The test profile involves strong bidirectional current swings, partial charging, long constant-power discharge, and temperature variation. The benchmarked estimators include open-circuit-voltage (OCV)-based SOC estimation, the linear Kalman filter (KF), the extended Kalman filter (EKF), the unscented Kalman filter (UKF), basic machine learning (ridge regression), and the support vector machine (SVM). Each method is compared against a high-fidelity electro-thermal battery model with capacity degradation and resistance growth due to aging and temperature. The accuracy of SOC tracking, SOH estimation error, and SOP tracking capability are discussed. It is found that the OCV-based method fails under dynamic loading, yielding a nearly constant SOC estimate and a highly conservative SOP. KF and EKF track SOC much better but deviate more at high SOC and near bottom-of-discharge. The ridge-regression and SVM estimators show high SOC accuracy across the entire profile, and UKF offers the best overall trade-off between SOC accuracy, SOH tracking, and SOP estimation robustness when resistance varies with temperature and age. The paper discusses the relevance of considering SOC, SOH, and SOP jointly and shows that advanced filters and ML models can significantly improve the performance of PV-battery applications operating under demanding conditions.
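To make the model-based branch of this comparison concrete, the following is a minimal Python sketch of a scalar EKF SOC update driven by coulomb counting and a linearized OCV measurement; the cell capacity, resistance, toy OCV curve, and noise covariances are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal scalar EKF for SOC estimation (illustrative parameters, not the paper's).
# State: SOC in [0, 1]. Process model: coulomb counting. Measurement: terminal
# voltage approximated as OCV(SOC) - I*R0, with a linearized OCV curve.

Q_CAP = 2.5 * 3600     # cell capacity [As] (assumed 2.5 Ah)
R0 = 0.015             # ohmic resistance [ohm] (assumed)
DT = 1.0               # sample time [s]

def ocv(soc):
    # Toy OCV curve; a real implementation would use a lookup table fit to cell data.
    return 3.0 + 1.2 * soc

def docv_dsoc(soc):
    return 1.2  # derivative of the toy OCV curve

def ekf_step(soc, P, current, v_meas, q=1e-7, r=1e-3):
    # Predict: coulomb counting (discharge current positive).
    soc_pred = soc - current * DT / Q_CAP
    P_pred = P + q
    # Update: linearize the voltage measurement around the predicted SOC.
    H = docv_dsoc(soc_pred)
    v_pred = ocv(soc_pred) - current * R0
    K = P_pred * H / (H * P_pred * H + r)     # Kalman gain
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1 - K * H) * P_pred
    return np.clip(soc_new, 0.0, 1.0), P_new
```

An EKF for a full equivalent-circuit model would carry additional RC-branch states, but the predict-linearize-update structure is the same.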
Authors - Asmit Patil, Smita Shedbale, Sneha Kumbhar, Ashwini Athawale, Smita Arude, Rohan S. Sapkal, Priya Sharma, Dhanaraj Jadhav Abstract - The rapid growth of Internet of Things (IoT) deployments in 5G Enhanced Machine-Type Communication (eMTC) networks has significantly increased the network attack surface. A major challenge for Network Anomaly Detection Systems (NADS) in this environment is severe class imbalance, where dominant benign traffic obscures rare but high-impact attacks, leading to poor minority-class detection. This paper presents Conf-Gate XGBoost-RF, a two-stage hybrid anomaly detection architecture designed to address this limitation without compromising real-time performance. The framework employs a high-speed XGBoost classifier for initial screening and a confidence-gated mechanism that selectively routes low-confidence predictions to a specialist Random Forest trained on synthetically balanced data. Evaluation on the large-scale CICIoT2023 dataset shows that the proposed model achieves 99.32% accuracy and a Macro F1-score of 0.80, substantially outperforming single-stage baselines. Notably, recall for critical low-volume attacks, such as Command Injection, improves by over 34%. With an average inference latency of 0.87 ms, the proposed approach remains compatible with the stringent low-latency requirements of 5G eMTC control signaling, demonstrating a practical balance between computational efficiency and rare-attack sensitivity.
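The confidence-gating idea described above can be sketched in a few lines; the 0.90 threshold, hyperparameters, and function names here are illustrative assumptions, not the paper's tuned configuration, and integer class labels 0..K-1 are assumed.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier

GATE_THRESHOLD = 0.90  # assumed gate; the paper's tuned value may differ

def fit_stages(X_train, y_train, X_balanced, y_balanced):
    # Stage 1: fast screening model trained on the raw (imbalanced) traffic.
    stage1 = XGBClassifier(n_estimators=200, max_depth=6).fit(X_train, y_train)
    # Stage 2: specialist trained on synthetically balanced data (e.g., SMOTE output).
    stage2 = RandomForestClassifier(n_estimators=300).fit(X_balanced, y_balanced)
    return stage1, stage2

def predict_gated(stage1, stage2, X):
    proba = stage1.predict_proba(X)
    conf = proba.max(axis=1)                  # stage-1 confidence per sample
    preds = proba.argmax(axis=1)
    low_conf = conf < GATE_THRESHOLD          # route uncertain flows to the specialist
    if low_conf.any():
        preds[low_conf] = stage2.predict(X[low_conf])
    return preds
```

The gate keeps the expensive specialist off the hot path for the high-confidence majority of flows, which is what preserves the sub-millisecond average latency.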
Authors - Nguyen Thanh Minh Tam, Mai Nhu Yen, Nguyen Quang Huy, Nguyen Thi Nhung, Nguyen Thi Huyen Chau, Nguyen Hoang Phuong, Dong Van He, Bui Xuan Cuong, Vladik Kreinovich Abstract - The rapid emergence of smart applications in smart environments, industrial automation, and cyber-physical systems has exposed the intrinsic limitations of conventional information and communication technology (ICT) design. Existing ICT systems are rigid, centrally controlled, and dependent on fixed operational logic, which restricts their adaptability to changing environments and complex system dynamics. Intelligent systems urgently need architectural paradigms that offer self-learning, decentralized intelligence, and autonomous decision-making at scale. This paper proposes a next-generation Edge-Cloud-AI integrated ICT architecture designed to support self-learning and autonomous intelligent systems. The proposed architecture offers a layered intelligence design that combines edge-level real-time learning, cloud-level global optimization, and an autonomy orchestration layer that balances adaptive behavior across distributed elements. By embedding continuous learning feedback and autonomous controls directly within the ICT infrastructure, the architecture enables systems to create operational policies autonomously while remaining scalable and reliable. The significant contributions of this work are the definition of a unified Edge-Cloud intelligence system, the incorporation of self-learning mechanisms across structural layers, and an autonomy-based orchestration model applicable to diverse fields of intelligent systems. The proposed architecture is platform-agnostic and can be applied in a wide range of future intelligent applications requiring resilience, versatility, and autonomy.
Authors - Nethika Alagarathnam, Dhanushka Jayasinghe, WU Wickramaarachchi Abstract - Social media platforms, especially Twitter, have become prominent sources for public health monitoring, as individuals often share personal experiences related to symptoms, diagnoses, and health concerns. However, detecting personal health mentions (PHMs) in such noisy, short-text environments remains challenging. This study evaluates and compares three neural architectures: a Long Short-Term Memory (LSTM) network with word embeddings, a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model, and a compact TinyBERT model distilled from BERT. Using a labeled corpus of health-related tweets, all models were trained under identical preprocessing, optimization, and evaluation conditions, with accuracy, precision, recall, and F1-score assessed on a test set. The results reveal clear performance differences across the three architectural paradigms. The LSTM baseline fit the training set well but overfitted significantly and failed to generalize to unseen data. In contrast, the transformer models BERT and TinyBERT delivered balanced performance, reflecting their ability to capture contextual semantics in noisy tweets. BERT achieved the highest overall performance, while TinyBERT provided a competitive alternative for deployment in constrained environments. These findings highlight the effectiveness of transformer architectures for personal health mention detection and offer practical insights for building efficient and accurate public health monitoring systems using social media data.
Authors - Michael David, Chekwas Ifeanyi Chikezie, Abraham Usman Usman, Sulieman Zubair, Henry Ohiani Ohize, Joseph Ojeniyi Abstract - With the rapid growth of digital communication and multimedia applications, secure transmission and storage of digital images have become critical challenges. Conventional text-based encryption algorithms are often inadequate for image data due to its high redundancy, strong pixel correlation, and large data volume. These characteristics necessitate specialized encryption mechanisms that can provide strong security while maintaining computational efficiency. This paper proposes a robust image encryption framework designed to ensure confidentiality, resistance to cryptographic attacks, and suitability for real-time applications. The proposed approach integrates advanced permutation–diffusion operations with chaos-based key generation to effectively disrupt the statistical characteristics inherent in digital images. Chaotic maps with high sensitivity to initial conditions are employed to generate dynamic encryption keys, enhancing key space complexity and resistance against brute-force and differential attacks. Pixel-level scrambling is combined with nonlinear diffusion operations to eliminate spatial correlations and achieve uniform ciphertext distribution. The encryption process is further optimized to support grayscale and color images while preserving computational feasibility. Extensive experimental evaluation is conducted using standard benchmark images to assess security and performance. Statistical analyses, including histogram uniformity, correlation coefficients, information entropy, NPCR, and UACI metrics, demonstrate strong resistance against statistical and differential attacks. Key sensitivity and key space analysis confirm robustness against brute-force attempts. Performance results indicate that the proposed scheme achieves a favorable balance between security strength and execution efficiency, making it suitable for real-time image protection. The experimental findings validate that the proposed image encryption framework provides enhanced security, scalability, and practicality, offering an effective solution for secure image communication in modern digital environments.
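As a generic illustration of the permutation-diffusion pattern described above (not the paper's exact scheme), the following sketch scrambles pixels with a logistic-map-ordered permutation and then chains a nonlinear diffusion pass; the map parameter, key handling, and the omission of decryption are deliberate simplifications.

```python
import numpy as np

def logistic_stream(x0, n, mu=3.99):
    # Logistic map x <- mu*x*(1-x): highly sensitive to the initial condition x0,
    # which serves as the secret key in this toy example.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.123456):
    flat = img.flatten().astype(np.uint8)
    n = flat.size
    stream = logistic_stream(key, n)
    perm = np.argsort(stream)                 # chaotic permutation (pixel scrambling)
    scrambled = flat[perm]
    ks = (stream * 255).astype(np.uint8)      # chaotic byte key stream
    cipher = np.empty_like(scrambled)
    prev = np.uint8(0)
    for i in range(n):                        # nonlinear diffusion: chain pixels so a
        cipher[i] = np.uint8((int(scrambled[i]) + int(prev)) % 256) ^ ks[i]
        prev = cipher[i]                      # single plaintext bit change propagates
    return cipher.reshape(img.shape), perm
```

The chaining step is what drives NPCR/UACI scores toward the ideal values, since any one-pixel change in the plaintext alters every subsequent ciphertext byte.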
Authors - Vinod B. Maniyat, Arun Kumar B. R, Shreyas A Abstract - Stealthy rogue components pose as legitimate nodes and progressively deteriorate services, take over flows, or taint network topology, posing serious scalability and security threats to modern SDN networks. High false positive rates, poor interpretability, and static threat assumptions make it difficult for current rule-based and signature-driven detection systems to detect such contextual threats. Based on the Dynamic Containment Score (DCS), a mathematically modelled, context-sensitive metric that measures each network node's disruptive potential, this work offers a comprehensive behavioural defence paradigm. The framework integrates graph-theoretic features, protocol-specific entropy, and temporal volatility to compute real-time DCS rankings, refined through SHAP-based explainability and confidence-bounded feature attribution for adaptive detection under concept drift. A multi-strategy containment engine, including deception-based mitigation, redirects attackers toward synthetic vulnerabilities. Validation on hybrid real-world and adversarial traffic demonstrates superior early detection, explainable risk attribution, and efficient mitigation with minimal service disruption.
Authors - Saurabh Nimje, Anup Bhitre, Sudhir Agarmore, Utkarsha Pacharaney Abstract - Insider threats pose a serious danger to business security: because of the trust and access rights granted to insiders, their malicious or careless actions are difficult to detect with available security programs. This paper examines how Natural Language Processing (NLP) can be used to detect insider threats proactively by analyzing employee communications such as emails, chat messages, and internal reports. Using the CERT Insider Threat Dataset and simulated logs, a multi-level system was created that includes text preprocessing, sentiment- and semantics-based feature extraction, and classification with machine learning models (Random Forest, SVM, and LSTM). Among them, the LSTM model performed best (92.6% accuracy and strong overall performance), owing to its ability to capture contextual and sequential patterns in communication. The most notable behavioral indicators were negative sentiment, passive-aggressive language, and changes in communication frequency, all of which showed a strong relationship with insider threat. SHAP (Shapley Additive Explanations) was also used in this research to provide insight into model decisions. The results prove the viability of NLP-based solutions as scalable, context-sensitive, and explainable systems for detecting insider threats, extending organizations' ability to perceive behavioral anomalies and minimize risk.
Authors - Maria Jihan Sangil, John Raymond Barajas, Ramnick Lim Abstract - Government identity documents have become the groundwork for citizen verification, financial inclusion, and public service. However, unauthorized disclosure, fraudulent access, and frequent misuse of individuals' personal information expose gaps in routine verification and enable wrongful disbursement of welfare benefits, underscoring the immediate need for a more secure, privacy-preserving approach. Existing infrastructure such as DigiLocker takes care of secure transmission, but privacy exposure remains a compromised factor. The proposed model introduces TrustChain, a blockchain-based framework for decentralized identity verification and secure access. The design shifts the focus from submission of identity information to authentication by means of Decentralized Identifiers (DIDs), ensuring that personal information remains hidden from the document requestor. By integrating self-sovereign identity principles, distributed storage, and cryptographic operations, the model enables users to authenticate without revealing personal parameters, minimizing the risk of identity compromise. Pilot findings indicate that identity exposure is mitigated by such a representation and that the model offers scalability for integration with existing infrastructures such as Aadhaar-connected services and DigiLocker. The result is a secure identity space in which individuals retain ownership of personal information while organizations and governments achieve acceptable validation.
Authors - K. Subba Reddy, Pasulammagari Jahnavi, Devasam Hema Keerthana, Katreddy Jaswanth Reddy, Dudhekula Abdul Gaffar Abstract - The investigation of past events is the foundation of current law enforcement, which uses surveillance videos extensively to identify suspects and the motives of criminals. However, manual monitoring of video data is slow, tiresome, and error-prone due to the enormously large volumes involved. To cope with this, a transition toward automated technologies driven by AI and deep learning is needed. These systems are able to systematically detect faces, masks, gaits, and abnormal behavior in adverse environments. This survey reviews methods for video-based face recognition, gait analysis, and anomaly detection. In addition, it presents a standardized AI-based surveillance framework for lawful multimodal, multitask biometric and behavioral identification. The proposed system focuses on radically reducing false alarms and offering adequate, prompt intelligence to law enforcement agencies to speed up investigations.
Authors - Keerin Nopanitaya, Luo Xiaoyu, Zhu Chunping, Pratya Nuankaew Abstract - This study examines Generative AI use and ethical guidelines in graduate education at Payap University, Thailand. As large language models increasingly support learning, research, and academic writing, they boost efficiency but raise concerns about accuracy, transparency, and integrity. Using mixed methods, the study gathered questionnaire data and conducted interviews and focus groups with master’s and doctoral students. Results show broad AI use for literature reviews, writing, idea generation, and research, with more advanced use expected to grow. While students report moderate to high skills, many lack strong critical evaluation of AI outputs and practical understanding of ethics. Consistent with international research, key risks stem from limited AI literacy, unclear disclosure, and lack of oversight rather than the technology itself. The study recommends developing an AI literacy framework, clear disclosure standards, and process evaluation for ethical, responsible AI integration while protecting academic quality and integrity.
Authors - Rahul Basu Abstract - Spinodal decomposition in binary alloys produces complex, interconnected microstructures with fractal-like characteristics during early and intermediate stages of phase separation. This paper presents a computational framework for simulating three-dimensional (3D) spinodal decomposition using the Cahn–Hilliard phase-field model, with emphasis on fractal dimension analysis of the evolving microstructures. The model incorporates CALPHAD-consistent free-energy descriptions (via common tangent interpolation for miscibility gaps) for benchmark alloys such as Cu–Ni and Fe–Cr. Simulations in 3D reveal interconnected networks with fractal dimensions typically in the range 2.4–2.8 during coarsening (deviation <5\% RMSE from Fe–Cr APT data), consistent with experimental observations. Fractal analysis via box-counting ($\log(1/r)=0$–$1.2$) and correlation functions ($r=5$–$20$ dx) quantifies morphological complexity, providing insights into scaling behavior and self-similarity. The approach leverages efficient FFT-based solvers for large-scale 3D computations (up to 256$^3$), aligning with useful descriptors for data-driven materials design, microstructure prediction, and alloy performance optimization. Results highlight the transition from early-stage fractal-like patterns to late-stage Ostwald ripening (with LS recovery on larger grids), offering quantitative metrics for alloy engineering.
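A box-counting estimate of the kind used above can be sketched as follows; the grid size, threshold, and box sizes are illustrative, and a random field stands in for an actual Cahn–Hilliard state.

```python
import numpy as np

def box_count_dimension(field, threshold=0.5, sizes=(2, 4, 8, 16, 32)):
    # Binarize the 3D phase field: voxels on one side of the interface.
    binary = field > threshold
    counts = []
    for s in sizes:
        n = field.shape[0] // s
        occupied = 0
        # Count boxes of edge length s that straddle the interface
        # (contain voxels of both phases).
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    box = binary[i*s:(i+1)*s, j*s:(j+1)*s, k*s:(k+1)*s]
                    if box.any() and not box.all():
                        occupied += 1
        counts.append(occupied)
    # Slope of log N(r) vs log(1/r) is the box-counting dimension.
    logs = np.log(1.0 / np.array(sizes))
    slope, _ = np.polyfit(logs, np.log(counts), 1)
    return slope

# Example on a 64^3 stand-in field (a real run would use the simulated state).
rng = np.random.default_rng(0)
print(box_count_dimension(rng.random((64, 64, 64))))
```

For an interconnected spinodal network this slope would land in the 2.4–2.8 range reported above; vectorized reductions replace the triple loop on production grids such as 256^3.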
Authors - S SRINIVASA REDDY, N SARASWATHI, K CHARITHA, L GOPAL KRISHNA Abstract - Accurate identification of paddy crop growth stages plays a crucial role in effective agricultural planning, crop management, and yield estimation. Paddy cultivation is highly sensitive to environmental conditions, disease progression, and growth variability, making continuous and automated monitoring essential. This paper presents an AI-driven framework for automated paddy growth stage identification and yield readiness estimation using deep convolutional neural networks. The proposed system employs the EfficientNetV2-S architecture trained on heterogeneous paddy plant image datasets collected from multiple public sources. To address inconsistencies in labeling across datasets, a semantic stage-mapping mechanism is introduced to map dataset-specific visual classes into standardized paddy growth stages. Furthermore, a confidence-weighted yield readiness index is formulated to provide an interpretable estimate of crop maturity and harvest readiness based on predicted growth stages. The trained model is deployed using a Flask-based web application that supports real-time inference, result visualization, and storage of historical predictions. Experimental results demonstrate stable convergence, high classification accuracy, and reliable generalization across different growth stages. The proposed framework effectively bridges visual growth stage classification and yield estimation, offering a practical and scalable solution for precision agriculture and decision support systems.
Authors - Shilpa H. Gujar, Abhijeet B. Auti, Nisha A. Auti Abstract - The acceptability of small wind turbines in regions with low wind velocities, in both rural and urban sectors, can be increased by placing them inside diffusers. The development of various diffusers is currently a major research area. Curved flanged diffusers can deliver better performance when a cylindrical throat section is added between the converging and diverging sections. This paper presents a systematic study of short curved flanged diffusers with converging-diverging sections and an extended uniform throat between them. Twenty-five diffuser models are studied with Computational Fluid Dynamics in ANSYS Fluent. These models are finalized using design of experiments for six variables at five levels. The throat diameter is fixed for all diffuser models. The investigation considers the radial average velocity and the percentage velocity variation along the radial planes. The global velocities are observed to be 1.18 to 1.47 times the radial average velocities. The diffuser dimensions are optimized to maximize radial average velocity and to minimize the velocity variation along the radial planes. The diffuser with optimized dimensions is manufactured and tested experimentally in a wind tunnel. Good agreement is seen between the predicted and experimental results. The optimized diffuser can produce more than twice the power of the turbine without a diffuser.
Authors - Prathilothamai M., R. Rinitha, Priyanshu Raj, Jishnu Hari, Lucky Goyal Abstract - The rapid growth of industrialization and urbanization has intensified the release of emerging air and water pollutants, posing significant threats to environmental sustainability and public health. This paper presents an Internet of Things (IoT)-driven monitoring and forecasting framework designed for the early detection of emerging contaminants in air and water systems. The proposed system integrates distributed sensor nodes for real-time measurement of key environmental parameters, including particulate matter, volatile organic compounds (VOCs), heavy metals, pH, turbidity, and dissolved oxygen. Data collected from heterogeneous IoT sensors are transmitted through secure communication protocols to a cloud-based analytics platform. Advanced data processing and machine learning models are employed to identify pollution patterns, predict contamination trends, and generate early warning alerts. The framework emphasizes scalability, low power consumption, and cost-effectiveness to support deployment in urban, industrial, and remote environments. Experimental evaluation demonstrates improved detection accuracy and forecasting reliability compared to conventional monitoring approaches. The proposed solution enables proactive environmental management, supports regulatory compliance, and contributes to sustainable development by facilitating timely intervention and mitigation strategies for emerging air and water pollutants.
Authors - Prerna Agarwal, Bharat Gupta, Pranav Shrivastava, Saquib Hussain, Kareena Tuli, Amaanur Rahman, Aishwarya Keshri Abstract - We propose a classification method for Ise-katagami stencil images based on SIFT keypoints and an optimal matching framework. Ise-katagami are traditional Japanese stencil papers originally developed for kimono dyeing, many of which have been preserved over long periods yet lack annotation. Because of copyright-related limitations, methods based on conventional deep learning or transfer learning, which typically depend on large labeled datasets, cannot be readily applied. To address this challenge, the proposed method formulates the classification task as an optimal matching problem over sets of SIFT keypoints, allowing robust comparison of local image structures without relying on pixel-level features. The method requires only a small number of copyright-free training images to extract representative features for each class, thereby eliminating the need for large-scale training data and enabling fast classification. According to the experimental evaluation, our method computes a suitable decision threshold within seconds, whereas the PCA-based method demands more than 3,000 seconds for optimization, despite both achieving almost perfect classification accuracy.
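The core idea of casting classification as an optimal matching over SIFT keypoint sets might look like the following sketch, pairing OpenCV descriptors with the Hungarian algorithm; the cost function and rejection threshold are our assumptions, since the paper's exact formulation is not reproduced here.

```python
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

sift = cv2.SIFT_create()

def descriptors(gray_img):
    # Extract SIFT descriptors (128-D local structure features) from a grayscale image.
    _, desc = sift.detectAndCompute(gray_img, None)
    return desc

def matching_cost(desc_a, desc_b):
    # Pairwise Euclidean distances between the two descriptor sets.
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one keypoint assignment
    return cost[rows, cols].mean()             # lower mean cost = more similar stencils

def classify(query_gray, class_templates, threshold):
    # class_templates: {label: descriptor array from a few copyright-free exemplars}
    q = descriptors(query_gray)
    costs = {label: matching_cost(q, d) for label, d in class_templates.items()}
    label, best = min(costs.items(), key=lambda kv: kv[1])
    return label if best < threshold else None  # reject when no class matches well
```

Because only a handful of exemplar descriptor sets are stored per class, the decision threshold can be swept over a small validation set in seconds, consistent with the timing comparison above.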
Authors - V. Mohanraj, J. Senthilkumar, Y. Suresh, K. Selvaraj, B. Valaramathi, S. Sivanantham, B I Hemantt Kumar, Ishwarya P Abstract - The increasing scale and complexity of global migration flows have created significant challenges for traditional migration management systems, particularly in terms of efficiency, data processing, and timely decision-making. Recent advances in Artificial Intelligence (AI) offer new opportunities to enhance migration governance through intelligent data analysis, automation, and smart communication systems. This paper examines the role of AI in modern migration management, with a focus on border control, visa and asylum processing, migration flow forecasting, and migrant integration services. The study employs a structured qualitative and comparative analytical approach, synthesizing recent academic literature, international policy documents, and applied digital migration systems. AI applications are analyzed within a smart governance framework, emphasizing their contribution to communication efficiency, risk assessment, and decision-support processes. The findings indicate that AI-based biometric identification, machine learning–driven risk assessment, and predictive analytics significantly improve the accuracy and speed of migration-related procedures. Natural language processing tools further enhance communication between authorities and migrants by facilitating multilingual information access and service delivery. However, the analysis also reveals critical challenges, including algorithmic bias, data privacy risks, limited transparency, and the need for human oversight in high-stakes migration decisions. The paper concludes that AI can serve as a key enabler of smart migration governance when implemented as a decision-support tool within ethical, transparent, and human-centered regulatory frameworks. The study provides practical insights for policymakers and system designers seeking to integrate AI into smart communication and digital governance architectures for sustainable migration management.
Authors - Ischyros Gangbo, Ghislain Vlavonou, Pelagie Houngue, Joel T. Hounsou, Fulvio Frati Abstract - One of the major phenomena of recent decades is the massive proliferation of data, directly linked to the adoption and expansion of new technologies and the increasing automation of processes, affecting numerous fields such as the economy, education, and cybersecurity. This exponential increase in almost every area is accompanied by an intensification of threats. It is within this context that new approaches are being defined, as traditional security mechanisms are showing their limitations. To counter attacks, several tools, including intrusion prevention and detection systems (IDS), have been designed. IDS are devices intended to monitor an information system in order to react effectively in the event of an attack. To this end, IDS use mechanisms that allow them to listen to the system covertly in order to detect abnormal or suspicious activities and enable effective preventative action against the risks of intrusion. The objective of this article is to compare the performance of XGBoost, CNN, and CNN-LSTM models for multiclass classification against a hybrid model. The dataset was first transformed into a sequential format. CNN, CNN-LSTM, and XGBoost models were independently implemented as standalone classifiers to perform intrusion detection. Furthermore, a hybrid CNN-LSTM-XGBoost model was designed, in which deep spatiotemporal features learned by the CNN-LSTM network were used as input to an XGBoost classifier for final decision-making. Comparative experimental results show that the XGBoost and hybrid models achieve effective detection performance, with the hybrid architecture excelling especially in detecting complex and minority attack categories.
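The hybrid design described above, deep features feeding a boosted-tree classifier, can be sketched roughly as follows; the layer sizes are assumptions, and in practice the encoder would first be trained end-to-end with a classification head before its embeddings are handed to XGBoost.

```python
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class CNNLSTMEncoder(nn.Module):
    # CNN extracts local patterns across flow features; LSTM models their
    # temporal evolution over the sequence.
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)

    def forward(self, x):                              # x: (batch, seq_len, n_features)
        z = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 32, seq_len)
        out, _ = self.lstm(z.transpose(1, 2))          # (batch, seq_len, hidden)
        return out[:, -1, :]                           # last hidden state = embedding

def hybrid_fit(encoder, X_seq, y):
    # Freeze the (pre-trained) encoder and train XGBoost on its embeddings,
    # which handles minority classes better than a softmax head alone.
    encoder.eval()
    with torch.no_grad():
        emb = encoder(torch.as_tensor(X_seq, dtype=torch.float32)).numpy()
    return XGBClassifier(n_estimators=300).fit(emb, y)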
Authors - Yavor Dankov, Boyan Bontchev, Valentina Terzieva, Elena Paunova-Hubenova, Aleksandar Dimov Abstract - The growing demand for lightweight, high-performance, and sustainable machine structures has accelerated the adoption of intelligent digital design methodologies in modern manufacturing. Conventional CAD-based design approaches rely heavily on manual iterations, limiting efficient exploration of complex design spaces and multi-objective trade-offs. This paper presents a hybrid AI-assisted generative design and topology optimization framework for intelligent lightweight optimization of machine structural components, with application to column-type machine structures and complex non-prismatic industrial brackets. The proposed framework integrates parametric CAD modeling, finite-element-based structural analysis, CAD-embedded generative design, and an AI-inspired algorithmic decision layer for automated evaluation and ranking of design alternatives. Key performance indicators (including mass, stiffness, stress, deflection, fatigue index, and additive-manufacturing constraints) are digitally processed and combined into a composite performance score to support objective design selection. In the first case study, a rectangular machine column is evaluated across multiple volume-fraction configurations, achieving approximately 20% mass reduction while retaining 96% structural stiffness with minimal increases in stress and deflection. The second case study applies generative design to a complex industrial support bracket under multiple load cases, generating twelve feasible solutions that are algorithmically ranked based on performance and manufacturability. The results confirm that AI-assisted evaluation enables efficient design space exploration and supports intelligent, sustainability-driven engineering decisions for advanced digital manufacturing systems.
Authors - Damla Karagozlu, Kian Jazayeri, Ahmet Adalier Abstract - The security of resource-constrained Internet of Things (IoT) devices increasingly relies on Zero-Trust Architecture (ZTA) models, as continuous authentication and behavior-based trust provide new ways to mitigate increasingly sophisticated threats. The proposed framework helps strengthen secure and reliable digital infrastructure for emerging smart technologies and connected environments. The study develops a lightweight ZTA security framework specifically for resource-constrained IoT that combines Elliptic Curve Cryptography (ECC)-based authentication, real-time determination of trust scores, and machine learning to detect behavior-based attack patterns from a real attack dataset. In addition, the real-time analysis of device trust scores makes it possible to understand which devices are performing in accordance with established expectations and which are displaying behavior consistent with an attack, and when those thresholds are reached. Combining lightweight ECC authentication with trust-driven behavioral anomaly detection enforces Zero-Trust while minimizing adverse effects on computational performance in IoT environments. The approach therefore provides a practical and scalable foundation for Zero-Trust security in future IoT deployments where devices have limited hardware resources.
Authors - Steven Saltos-Minaya, Tatiana Zambrano-Solorzano Abstract - The high volume of digital communication has heightened the risk of fake government announcements reaching institutions, causing misinformation and interference in their operations. To overcome this issue, this paper proposes a blockchain-based verification framework that guarantees the authenticity, integrity, and reliability of digital notices issued by the government. The system stores cryptographic hashes of official documents in a Hyperledger blockchain, producing an immutable, unalterable audit trail. The full notice files are securely distributed on the InterPlanetary File System (IPFS), a decentralized network that provides scalable, permanent, censorship-resistant storage. Smart contracts running on the Hyperledger platform automatically enforce access control, perform authorization checks on authorized government publishers, and provide robust cryptographic assurance of authenticity and non-repudiation. Schools and institutions can verify notices in real time through an intuitive React-based frontend, with a Node.js/Express backend handling the application logic and communicating with the blockchain layer. Additional features such as publisher reputation tracking, version management, and instant notifications further enhance trust and transparency. The proposed solution provides a secure, scalable, and highly transparent communication channel between government and educational organizations with minimal system complexity and no machine-learning components.
Authors - Jeba Priya J, N. Priya Abstract - Mental health challenges among young adults require innovative psychoeducational interventions. This study presents the development and preliminary evaluation of Dear Alfred, a serious virtual reality (VR) game designed to enhance emotional self-regulation and intergenerational empathy. Grounded in the Process Model of Emotion Regulation, the game immerses players in a narrative-driven experience addressing elderly isolation. The development followed an iterative methodology, resulting in a playable vertical slice tested on Meta Quest 2 and 3 platforms. This work contributes to the field by proposing a scalable, multidimensional approach at the intersection of psychology, technology, and education, highlighting the specific need for hardware-specific optimization in digital mental health solutions.
Authors - Ashwini V. Zadgaonkar, Sonali Potdar, Archana Bopche, Pranali Pawar, Rupali Vairagade, Yogita Hande Abstract - Time series prediction plays a critical role in monitoring and control of electrical power systems, particularly for detecting frequency fluctuations caused by imbalances between generation and demand. This study proposes an early warning framework for frequency fluctuation events using a hybrid k-Nearest Neighbour (KNN) and Dynamic Time Warping (DTW) approach combined with a global confidence-interval-based decision mechanism. Electricity frequency data collected from the New Zealand power grid over a six-month period were segmented into training, validation, and testing sequences. Alignment distances between historical and incoming sequences were used to identify precursor patterns indicative of impending frequency disturbances. Experimental results show that the proposed method achieves high warning accuracy with a very low false negative rate, outperforming baseline models such as ARIMA and LSTM. The findings demonstrate that KNN–DTW provides an effective and practical solution for early warning of frequency fluctuations, supporting improved operational reliability in modern power systems.
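A minimal version of the KNN-DTW early-warning logic might look like this; the window handling, choice of k, and the confidence-bound rule are illustrative assumptions rather than the paper's exact decision mechanism.

```python
import numpy as np

def dtw(a, b):
    # Classic O(n*m) dynamic-programming DTW alignment distance.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def warn(incoming, precursor_library, normal_distances, k=5, z=2.0):
    # Mean DTW distance from the incoming frequency window to its k nearest
    # labeled precursor patterns.
    dists = sorted(dtw(incoming, p) for p in precursor_library)[:k]
    score = np.mean(dists)
    # Warn when the score falls below a global confidence bound estimated from
    # the distances of normal (disturbance-free) windows to the same library.
    bound = np.mean(normal_distances) - z * np.std(normal_distances)
    return score < bound
```

A global bound estimated offline keeps the online cost to the k nearest DTW evaluations, which is what makes the scheme practical for continuous grid monitoring.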
Authors - Arianna Cobb, Vishnu Kumar Abstract - Teen suicide remains a significant public health concern in the United States, with substantial geographic variation across counties. Understanding how socio-environmental and healthcare access factors relate to suicide risk can help identify communities that may benefit from targeted interventions. This study aims to support this effort by analyzing county-level teen suicide patterns using K-means clustering, an unsupervised machine learning technique. A dataset of 248 U.S. counties with reported teen suicide data was constructed using five-year aggregated suicide crude rates (2019-2023) alongside multiple socio-environmental and healthcare indicators, including hospitalization rates, mental health provider availability, primary care provider rates, social association rates, uninsured population percentages, poverty levels, food insecurity, and rural population share. K-means clustering was then applied to identify county-level risk profiles. The results reveal two distinct county groups: one characterized by lower suicide rates, greater healthcare provider availability, stronger social associations, and lower socioeconomic disadvantage; and another characterized by higher suicide rates, reduced healthcare access, higher poverty and food insecurity, and greater rural residency. These findings highlight meaningful county-level disparities and demonstrate the utility of machine learning approaches to identify regional risk profiles associated with teen suicide. The results may help inform public health strategies and policy efforts aimed at prioritizing resources and expanding mental health services in high-risk communities.
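The clustering step itself is standard and could be reproduced along these lines; the column names are hypothetical stand-ins for the indicators listed in the abstract, and k=2 mirrors the two groups reported.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical column names for the county indicators described above.
FEATURES = ["suicide_rate", "hospitalization_rate", "mh_provider_rate",
            "primary_care_rate", "social_association_rate", "uninsured_pct",
            "poverty_pct", "food_insecurity_pct", "rural_pct"]

def cluster_counties(df: pd.DataFrame, k: int = 2) -> pd.DataFrame:
    # Standardize so that no single indicator dominates the distance metric.
    X = StandardScaler().fit_transform(df[FEATURES])
    out = df.copy()
    out["cluster"] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return out

# Profiling each group, e.g. clustered.groupby("cluster")[FEATURES].mean(),
# yields the lower-risk vs. higher-risk county portraits described above.
```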
Authors - Adhi Sree Praveen Pai, Alaganandha Pradeep, Jeremy Simon Moncey, Josin Kurian Athikalam, Lakshmi K.S. Abstract - This research investigates the digital footprint of mental health information as it circulates on YouTube. Using a qualitative content analysis approach, the study examines 100 selected videos in conjunction with social media analytics to identify recurring patterns in the dissemination of mental health discourse. The findings reveal a mix of misleading or incomplete claims, educational resources, personal narratives, and recovery-oriented content, illustrating how mental health discussions shape and amplify user perspectives at both broad (macro) and specific (micro) levels within the evolving field of e-health. To interpret these dynamics, the analysis applies Gibson's theory of transactional affordances, which illuminates key themes of risk, relevance, lived experience, credibility, and social support. By situating these themes within the broader context of video-sharing platforms, the study underscores the importance of YouTube as a platform for mental health communication and highlights its role in broader public conversations about health in the digital age. Future research should investigate mental health discourse from other social media users.
Authors - Rahul Singh, Sachin B. Jadhav Abstract - Cloud cover generally limits the applicability of optical remote sensing images for tasks such as agriculture monitoring and disaster relief. Cloud removal is an inherently difficult problem because of the lack of spatial structures and spectral information. To effectively remove cloud contamination from SAR and optical images, we propose a speckle-aware global cross-attention network. The proposed SAR-optical cloud removal network architecture consists of a dual encoder with a global cross-attention mechanism that allows for effective cross-modal interactions. Additionally, a refining module and symmetric decoders improve the accuracy of the reconstructed image. Furthermore, we propose a speckle-aware gating mechanism to perform speckle filter adaptation. The experimental results affirm that our proposed network outperformed the baseline, increasing the Peak Signal-to-Noise Ratio (PSNR) by +0.86 dB and the Structural Similarity Index Measure (SSIM) by +0.142, and reducing the spectral distortion of the image. Additionally, we observed decreases in the Root Mean Square Error (RMSE) and Spectral Angle Mapper (SAM) values. This indicates that selective SAR-optical fusion with an adaptive noise-aware gating mechanism improves the reconstruction accuracy of cloud-free optical remote sensing images.
Authors - Nyuti Bhesania, Khushi Solanki, Bimal Patel, Purvi Prajapati, Priyanka Patel Abstract - The rapid advancement of information and communication technology (ICT) has accelerated the digital transformation of public sector governance, including tax administration. This study examines the impact of Indonesia's Core Tax Administration System (Coretax) on micro, small, and medium enterprise (MSME) tax compliance within an ICT–behavioral framework. Using survey data from 300 MSME taxpayers and Structural Equation Modeling–Partial Least Squares (SEM-PLS), the study analyzes the direct and indirect effects of Coretax utilization on tax compliance through administrative efficiency and trust in the tax authority. The results indicate that Coretax utilization has a positive and significant effect on administrative efficiency, trust in the tax authority, and MSME tax compliance. Administrative efficiency and trust also significantly influence compliance, confirming their mediating roles. These findings demonstrate that digital tax administration functions not only as a technological reform but also as an institutional and behavioral mechanism that reduces compliance burdens and strengthens voluntary compliance. From a sustainable development perspective, improved MSME tax compliance supports Sustainable Development Goal (SDG) 8 by enhancing domestic revenue mobilization for inclusive economic growth, while the integrative and trust-building role of Coretax reflects SDG 17 through strengthened partnerships among government, technology providers, and taxpayers. This study contributes empirical evidence on digital tax systems in developing economies.
Authors - Ajidhashini Thulasidass, M. Suresh Abstract - Modern railway systems increasingly rely on digital technologies such as Communication-Based Train Control (CBTC), the European Train Control System (ETCS), and Supervisory Control and Data Acquisition (SCADA) systems, raising significant cybersecurity challenges. A 220% increase in attacks has been observed over five years, ranging from opportunistic ransomware to sophisticated targeted threats. This paper provides an overview of railway cybersecurity, surveying ICT architectures, cyber threat models, and AI-based defense approaches. Distributed Denial of Service (DDoS) tactics were employed in 75% of cases, while ransomware affected 54% of OT environments. We describe a comparative taxonomy of Artificial Intelligence and Machine Learning approaches, including methods based on supervised learning, unsupervised learning, and advanced deep learning practices, with detection accuracy as high as 97.46%. However, several challenges remain: few available public datasets, lack of validation in real-world scenarios, demands for explainability of AI systems, and concerns about adversarial robustness. We discuss eight potential research gaps and future directions, focusing on federated learning, digital twin development, multimodal AI fusion, and safety-security co-engineering frameworks.
Authors - Vishnu Kumar, Natalia Miranda Abstract - Food insecurity remains a pressing public health and equity challenge in urban U.S. communities, with the Supplemental Nutrition Assistance Program (SNAP) serving as the primary federal mechanism for alleviating household food hardship. Despite its importance, SNAP participation varies substantially across neighborhoods, reflecting underlying socioeconomic disparities. This study leverages neighborhood-level data from Baltimore City to identify the key socioeconomic drivers of SNAP participation using explainable machine learning (ML) techniques. Three supervised ML models (Decision Tree, Random Forest, and XGBoost) were developed and evaluated using standard regression metrics. The Random Forest model demonstrated the strongest predictive performance. Model interpretability was enhanced through Shapley Additive Explanations (SHAP), which quantified the contribution of each feature to predicted SNAP participation. Results indicate that lower income, shorter life expectancy, higher Temporary Assistance for Needy Families (TANF) participation, higher proportions of female-headed households, and lower educational attainment are associated with increased SNAP reliance. These findings highlight the complex interplay between economic deprivation, social vulnerability, and neighborhood-level assistance utilization, offering actionable insights for policymakers and public health practitioners. By combining predictive accuracy with interpretability, explainable ML provides a robust framework for informing evidence-based interventions aimed at reducing food insecurity and promoting equity in urban communities.
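The interpretability workflow described above pairs a tree ensemble with SHAP attributions, roughly as in this sketch; the hyperparameters, split ratio, and regression setup are illustrative assumptions.

```python
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def explain_snap(X, y, feature_names):
    # Fit the best-performing model from the comparison (Random Forest).
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    # TreeExplainer gives fast, exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_te)   # per-sample feature contributions
    # The summary plot ranks features (income, life expectancy, TANF, etc.)
    # by their mean absolute contribution to predicted SNAP participation.
    shap.summary_plot(shap_values, X_te, feature_names=feature_names)
    return model, shap_values
```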
Authors - Hector Rafael Morano Okuno Abstract - This work proposes an intelligent system for automatic food-image-based recognition and calorie estimation to meet the emerging demand for accurate dietary monitoring and personalized nutrition recommendations. Conventional food-logging methods are cumbersome, prone to errors, and mostly fail to capture portion sizes, motivating an end-to-end computer vision and depth-based approach. The proposed system utilizes a custom-curated Indian food image dataset of eighty classes, collected, labeled, and preprocessed to be robust to variations in lighting, background, and other conditions. A deep learning model was then trained to detect and classify food with high precision; the overall classification accuracy achieved by the proposed system is ninety-seven percent. Depth estimation over the detected food regions provides an approximation of volume and weight, leading to more accurate calorie calculations. Nutritional analysis is integrated into the system by relating the type and estimated weight of food to standard nutritional information, yielding detailed insights into calories, proteins, fats, carbohydrates, fiber, and micronutrient content. Evaluation results reveal strong detection, minimal estimation error, and efficient real-time processing, clearly demonstrating the system's applicability. In summary, this paper introduces an approach that combines image-based recognition, depth-based portion estimation, and nutrition logic into a robust solution for dietary assessment.
Authors - D. Swetha, Senthilkumar Selvaraj, K. M. Madhan Prasanth, D. Nihal Abstract - The rapid expansion of digital commerce platforms has significantly transformed online transactional systems; however, conventional centralized architectures continue to face critical challenges related to security, transparency, data integrity, and trust management. Traditional e-commerce systems rely heavily on centralized databases, making them vulnerable to data tampering, unauthorized access, fraudulent transactions, and single points of failure. To address these limitations, this paper proposes a secure, scalable, and modular web-based e-commerce system that is architecturally designed for integration with blockchain technology and smart contracts. The proposed system is implemented using widely adopted web technologies, with a responsive frontend and a robust backend to support essential functionalities such as user authentication, product catalog management, shopping cart operations, order processing, inventory management, and administrative control. The architecture emphasizes separation of concerns, enabling flexibility, maintainability, and future extensibility. A key contribution of this work lies in the incorporation of a blockchain-ready framework that enables immutable transaction recording and enhanced traceability across the entire transaction lifecycle. Smart contracts automate transaction validation and order execution. The system also introduces an AI-based anomaly detection mechanism using a Deep Q-Network to detect fraudulent behavior. Experimental validation demonstrates reliable performance and scalability.
Authors - Linda Sara Mathew, Anna Irene Ditto, Anna Keerthana V, Cristal James Tomy Abstract - Accurate, real-time crop mapping and yield prediction are essential for agricultural planning, food security, and climate-resilient decision-making. Conventional field surveys are slow, expensive, and inconsistent, whereas the increased supply of multispectral, hyperspectral, and SAR satellite imagery has made automated crop surveillance possible. Nevertheless, operational methods continue to suffer significant setbacks, such as low accuracy in the presence of cloud cover, the lack of models capturing the complex temporal dynamics of crop growth, difficulties in treating mixed pixels in smallholder landscapes, and the lack of a single framework that incorporates optical, SAR, and phenology data. Even though recent researchers have investigated deep spatio-temporal models to map rice, SAR–optical fusion, mixed-pixel decomposition, temporal attention networks, multi-GPU UNet architectures, and phenology-based yield estimation, none offers an all-encompassing, scalable framework. This study proposes a Multimodal Deep Spatio-Temporal Framework that combines multispectral and SAR imagery with phenological data for automatic crop mapping and yield prediction. With CNN-LSTM encoders, attention-based TCNs, adaptive mixed-pixel processing, multimodal fusion, and multi-GPU segmentation, the framework aims to provide a powerful, scalable agricultural intelligence system for real-time regional and national monitoring.
Authors - Emerson Joey Caro Abstract - Detecting brain tumors from MRI scans, or Brain Tumor Detection (BTD), is an essential step in assessing the presence and characteristics of tumors and formulating an appropriate clinical management plan. Manual interpretation of MRI images by radiologists is not time-efficient and is susceptible to mistakes, which drives the need for automated, accurate, and reliable computational methods. In this study we compare advanced Deep Learning (DL) architectures, including traditional CNNs (VGG19, ResNet50, DenseNet), modernized CNNs inspired by transformer design (ConvNext), and EfficientNet, in distinguishing tumor from non-tumor categories in brain MRI scans. Each model is trained and evaluated on a standardized dataset using measurable metrics such as accuracy, precision, recall, F1-score, and the confusion matrix. Our results demonstrate that modern CNN architectures such as ConvNext and EfficientNet outperform traditional CNNs by capturing both local texture and spatial patterns and the global spatial context, resulting in enhanced classification performance. This benchmark is informative for evaluating the best deep learning models and adopting them to identify brain tumors, which in turn may optimize diagnostic decision-making and reduce the diagnostic burden.
Authors - Vasavi Ravuri, Indupriya Vempati, Sai Anuradha Kappaganthula, Pavani Muppalla, Navya Taduri Abstract - In the shadow of overlooked safety violations, many factories have lost thousands, in terms of capital as well as lives, which is especially harrowing given that the causes were easily preventable work accidents or easily noticeable defective machinery. Our paper examines how artificial-intelligence-based methodologies can help mitigate these risks, drawing on past and present research. Based on the findings from the reviewed literature, we also recommend a potential prototype system for real-time worker safety checking and automated industrial machine quality inspection. We have reviewed four major topics pertaining to our system: [1] Personal Protective Equipment (PPE) compliance detection through CCTV monitoring as opposed to manual monitoring, [2] industrial machine quality inspection for automatic defect identification, [3] evaluation of previously used object detection models and their performance for industrial applications, and [4] system-level considerations for practical large-scale deployment of such systems. We have compared methods, deployment strategies, and results from existing studies to identify key criteria such as scalable architectures and low-latency processing. We also highlight challenges such as insufficient annotated data for rare machinery defects, maintaining accuracy in harsh industrial conditions that might hinder detection of safety violations, and ethical issues with worker monitoring.
Authors - Siddalingappagouda Biradar, Vinod B Durdi, Suganthi Neelagiri, Devaraju Ramakrishna, Preeti Khanwalkar, Shashi Raj K Abstract - Phishing attacks continue to evolve in scale and sophistication, exploiting weaknesses across infrastructure, content, and user behavior. Earlier studies demonstrated that hybrid feature representations combining URL, HTML, and infrastructure features significantly outperform single-source approaches, with tree-based and deep learning models achieving detection accuracies exceeding 95%. However, these studies also revealed limitations related to global feature selection, cluster-agnostic learning, and evaluation protocols that may lead to optimistic performance estimates. In this paper, we propose a multi-cluster phishing detection framework that organizes features into three complementary clusters: Cluster 0 (C0) for infrastructure and transport-layer characteristics, Cluster 1 (C1) for URL and HTML content features, and Cluster 2 (C2) for behavioral and campaign-level patterns. To address the limitations of traditional feature selection methods, we introduce HC²FS (Heuristic-Constrained Class-Conditional Feature Selection), a cluster-aware and class-conditional approach that preserves low-variance yet highly discriminative phishing indicators. The proposed system is evaluated on large-scale datasets comprising over 600 combined features, using a strict 80% training and 20% testing split enforced prior to feature selection and model training.
Authors - Koutaro HACHIYA, Ioannis PATIAS Abstract - Inference latency remains a critical bottleneck in deploying large language models for real-time and resource-constrained environments. Prior work has proposed latency formulations that express latency as a function of key parameters. However, they often assume a linear dependence on sequence length, which fails to generalize to tasks involving significantly longer sequences, such as document-level language modeling, long-context retrieval, or time-series forecasting, where latency scales nonlinearly and unpredictably. This paper addresses the limitations of existing latency formulations by proposing three complementary enhancements to improve generalization across varying sequence lengths. First, we introduce a nonlinear term for sequence length, capturing the superlinear growth in latency observed in transformer-based architectures due to quadratic attention mechanisms and memory overhead. Second, we propose a sequence-length-dependent scaling factor for the sequence length parameter itself, allowing the model to adaptively adjust its sensitivity based on empirical latency profiles across different tasks and hardware configurations. Third, we incorporate an empirical correction term enabling calibration of the latency model to account for hardware-specific and implementation-level nuances. By explicitly modeling the nonlinear and context-sensitive behavior of sequence length, our approach offers a more faithful representation of latency dynamics. This work lays the foundation for more adaptive and hardware-aware latency estimation frameworks, with implications for model deployment, scheduling, and cost optimization in production systems. We conclude by discussing future directions for integrating dynamic profiling and reinforcement learning to further refine latency predictions in evolving runtime environments.
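One plausible instantiation of the three proposed enhancements, written in our own notation rather than the paper's, is the following; all symbols and the specific functional forms are assumptions for illustration.

```latex
% T(n): end-to-end latency at sequence length n.
\[
  T(n) \;=\; \underbrace{a + b\,n}_{\text{linear baseline}}
  \;+\; \underbrace{c\,n^{\alpha(n)}}_{\text{nonlinear term}}
  \;+\; \underbrace{\varepsilon_{\mathrm{hw}}(n)}_{\text{empirical correction}},
  \qquad
  \alpha(n) \;=\; 1 + \gamma\,\frac{n}{n + n_0}.
\]
% The adaptive exponent \alpha(n) tends to 1 for short sequences and to
% 1 + \gamma (e.g. toward 2 for quadratic attention) as n grows, realizing the
% sequence-length-dependent scaling factor; a, b, c, \gamma, n_0, and
% \varepsilon_{hw} are fit to measured latency profiles per task and hardware.
```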
Authors - Joao Paulo Sousa, Tiago Lopes, Tatiana Ferreira, Tatiana Batista, Pedro Malheiro, Joao Vitorino, Barbara Barroso, Carlos Costa Abstract - Medical hyperspectral imaging (MHSI) represents a burgeoning paradigm in diagnostic visualization, capable of capturing contiguous spectral signatures across hundreds of narrow wavelengths to delineate pathological structures invisible to the human eye. Despite its diagnostic richness, the advancement of deep learning models in the MHSI domain is severely constrained by two primary challenges: the extreme scarcity of high-quality, pixel-level annotated datasets and the overwhelming data redundancy inherent in high-dimensional hypercubes. Traditional self-supervised methods, particularly masked image modeling, often fail to prioritize discriminative tissue signatures, while domain-agnostic transfer learning from natural images proves inappropriate due to structural and feature-level incongruities. This paper introduces a novel high-quality research methodology: Reinforced Spatio-Spectral In-Context Learning (RSS-ICL). This framework integrates an asynchronous advantage actor-critic (A3C) reinforcement learning agent with visual in-context learning (ICL). The proposed model employs the RL agent to dynamically learn adaptive masking strategies that prioritize high-entropy, "hard-to-reconstruct" spatio-spectral voxels, thereby forcing the backbone architecture to capture intricate biochemical signatures during pre-training. By reformulating segmentation as a support-query inpainting task, RSS-ICL facilitates universal medical segmentation, allowing the model to adapt to novel clinical tasks and unseen tissue types in a zero-shot or one-shot manner. Theoretical arguments suggest that this synergistic approach effectively bridges the gap between low-level signal recovery and high-level semantic understanding in hyperspectral analysis. Through rigorous methodological development and empirical support from existing self-supervised benchmarks, this paper outlines a path for accelerating the deployment of interpretable, annotation-efficient clinical AI.
Authors - Sushmita Sarkar, Sumit Kumar Debnath Abstract - Multi-angle image synthesis is highly important when it comes to the generation of 3D scenes. But the current methods are either expensive in terms of computational costs or lack photorealism in their outputs. We propose a novel sketch and text based multiview image generation approach that solves the above-mentioned problems by making use of multimodal diffusion models efficiently. Our pipeline utilises DreamShaper v8 for converting the input sketch and text into a photorealistic 2D image and then passes this 2D image into a fine-tuned Zero123plus model for the final generation of consistent multiview images, showing a 43.69% improvement in the overall perceptual quality compared to baseline sketch-to-multiview models. Moreover, our pipeline shows flexibility in scalability by generating anywhere from 6 to 64 consistent multiview images according to the requirements of the downstream tasks. We demonstrate the success of our pipeline through extensive experiments conducted using voxel-based grid approaches and Neural Radiance Fields (NeRF). Our pipeline greatly reduces computational costs, all while maintaining photorealism in the outputs, confirming the potential of sketch and text based multimodal conditioning as an intuitive and efficient paradigm for controlled 3D content generation.
Authors - Carl Kugblenu, Petri Vuorimaa Abstract - Compressed-domain audio steganography poses a critical forensic challenge in modern VoIP systems, particularly within low-bitrate codecs. Traditional deep learning models often lack interpretability and struggle with low embedding rates. This paper introduces AUSPEX, a lightweight forensic framework (about 170k parameters) optimized for universal compressed audio steganalysis. A novel three-channel tensorization strategy is proposed, incorporating raw bits, temporal derivatives, and bit stability to amplify subtle embedding perturbations. A non-trainable high-pass residual stream further enhances sensitivity to first- and second-order temporal noise. To ensure forensic transparency, a dual-level explainability framework integrates intrinsic spatial attention with post-hoc Integrated Gradients, providing bit-level evidence attribution. Experiments demonstrate detection across CNV and PMS algorithms at low embedding rates. AUSPEX advances the field by unifying efficient, edge-deployable detection with rigorous human-centric forensic interpretability.
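A minimal sketch of the assumed three-channel tensorization, given a frames-by-bits matrix of codec bits; the channel definitions and the stability window here are illustrative guesses at the described strategy, not the authors' implementation:

```python
import numpy as np

def tensorize(bits: np.ndarray) -> np.ndarray:
    """bits: (T, B) array of 0/1 codec bits -> (3, T, B) tensor."""
    raw = bits.astype(np.float32)
    # channel 2: temporal derivative, i.e. change of each bit between frames
    deriv = np.zeros_like(raw)
    deriv[1:] = np.abs(raw[1:] - raw[:-1])
    # channel 3: stability, fraction of recent frames where the bit kept
    # its current value (window length is an assumption)
    win = 5
    stab = np.ones_like(raw)
    for t in range(1, raw.shape[0]):
        lo = max(0, t - win)
        stab[t] = (raw[lo:t] == raw[t]).mean(axis=0)
    return np.stack([raw, deriv, stab])

x = tensorize(np.random.randint(0, 2, size=(50, 64)))
print(x.shape)  # (3, 50, 64): raw bits, derivatives, stability
```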
Authors - Nitika Gawande, Pradnya Bapat, Sanyukta Sasane, Trupti Bankar, Rakhi Dongaonkar, Rashmi Apte, Mangesh Bedekar Abstract - This study presents a thorough discussion of cussword usage in Hollywood films over a period of thirty-five years, from 1990 to 2025, particularly in genres such as Action, Comedy, and Romance. On the basis of a carefully selected dataset of cusswords from Kaggle along with a considerable subtitle file dataset (.srt), results were obtained to determine how intensely profanity has been used over the years in the respective film genres.
Authors - Kostiantyn Hrishchenko, Oleksii Pysarchuk Abstract - Flexible Job Shop Scheduling Problems (FJSP) involve large discrete decision spaces and strict feasibility constraints, making them challenging for deep reinforcement learning methods. In this work, we study how state representation and feature extraction architecture influence the performance of action-masked Proximal Policy Optimization (PPO) in flexible scheduling. The scheduling task is formulated as a sequential assignment of operations to machines with a fixed discrete action space, where infeasible actions are removed using a feasibility mask. The environment state is represented using three heterogeneous feature blocks describing resource availability, operation readiness, and time-related attributes of assignment alternatives. We compare a baseline single-branch encoder with a multi-branch feature extraction architecture that processes these blocks separately before aggregation. Experiments were conducted on the Brandimarte MK benchmark suite (MK01–MK10). Under identical training conditions, the multi-branch representation achieved lower makespan on 9 out of 10 instances, with relative improvements ranging from 2.4% to 27.8% compared to the single-branch baseline. The largest reductions were observed on MK06 (−27.8%) and MK10 (−25.2%), while performance remained comparable on MK08. Training results indicate improved stability and more consistent convergence for structured representations. These results demonstrate that structured state design and feature extraction architecture are critical factors in action-masked reinforcement learning for flexible job shop scheduling.
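As an illustration of the action masking and multi-branch encoding described here, the following is a minimal PyTorch sketch under assumed block dimensions and action-space size; it is not the authors' implementation:

```python
import torch
import torch.nn as nn

class MultiBranchExtractor(nn.Module):
    """Three feature blocks encoded by separate branches, then aggregated;
    infeasible actions are masked out of the policy logits."""
    def __init__(self, dims=(16, 12, 8), hidden=64, n_actions=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims
        )
        self.head = nn.Linear(hidden * len(dims), n_actions)

    def forward(self, blocks, mask):
        # blocks: list of (batch, d_i) tensors; mask: (batch, n_actions) bool
        z = torch.cat([b(x) for b, x in zip(self.branches, blocks)], dim=-1)
        logits = self.head(z)
        # feasibility mask: -inf logits give zero probability after softmax
        return logits.masked_fill(~mask, float("-inf"))

net = MultiBranchExtractor()
blocks = [torch.randn(4, d) for d in (16, 12, 8)]
mask = torch.rand(4, 32) > 0.3
probs = torch.softmax(net(blocks, mask), dim=-1)  # zero on masked actions
```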
Authors - Stuti Kumari, Kunal Dey Abstract - Teen suicide remains a significant public health concern in the United States, with substantial geographic variation across counties. Understanding how socio-environmental and healthcare access factors relate to suicide risk can help identify communities that may benefit from targeted interventions. This study aims to support this effort by analyzing county-level teen suicide patterns using K-means clustering, an unsupervised machine learning technique. A dataset of 248 U.S. counties with reported teen suicide data was constructed using five-year aggregated suicide crude rates (2019-2023) alongside multiple socio-environmental and healthcare indicators, including hospitalization rates, mental health provider availability, primary care provider rates, social association rates, uninsured population percentages, poverty levels, food insecurity, and rural population share. K-means clustering was then applied to identify county-level risk profiles. The results reveal two distinct county groups: one characterized by lower suicide rates, greater healthcare provider availability, stronger social associations, and lower socioeconomic disadvantage; and another characterized by higher suicide rates, reduced healthcare access, higher poverty and food insecurity, and greater rural residency. These findings highlight meaningful county-level disparities and demonstrate the utility of machine learning approaches to identify regional risk profiles associated with teen suicide. The results may help inform public health strategies and policy efforts aimed at prioritizing resources and expanding mental health services in high-risk communities.
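A minimal sketch of the clustering step, assuming hypothetical column names and random placeholder data in place of the 248-county dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# illustrative indicator names matching the abstract's description
features = ["suicide_rate", "mh_providers", "primary_care_rate",
            "social_assoc_rate", "pct_uninsured", "pct_poverty",
            "food_insecurity", "pct_rural"]

X = np.random.rand(248, len(features))     # placeholder for county data
X_std = StandardScaler().fit_transform(X)  # scale before K-means

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)
for k in range(2):
    print(f"cluster {k}: {np.sum(km.labels_ == k)} counties")
```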
Authors - Sanjida Karim Peuly, Sharmin Alam Mou, Tamanna Hossain Badhon Abstract - Diabetes diagnosis at the early stages is an important factor in avoiding long-term complications. The existing body of literature tends to be based on small, saturated datasets that are not very interpretable or externally validated. This paper proposes a powerful machine learning model for first-stage diabetes prediction based on a symptom-based dataset of 1,560 cases. Six classifiers, including Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors, Naive Bayes, and XGBoost, were evaluated using stratified cross-validation and independent test sets. Systematic hyperparameter optimization using GridSearchCV was used to prevent overfitting and improve generalization. Additionally, a Stacking Ensemble model combining Logistic Regression, Random Forest, and XGBoost was provided to obtain a high level of predictive stability. Experimental evidence has shown that ensemble-based methods are more effective than single classifiers, as XGBoost and the Stacking Ensemble have the highest accuracy and ROC-AUC values. The analysis of feature importance identified polyuria and polydipsia as the most important clinical signs, which is consistent with medical knowledge. This study offers a practical and interpretable decision support model for early diabetes screening, bridging the gap between predictive performance and clinical utility.
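A hedged sketch of the described stacking setup with GridSearchCV; the hyperparameter grid, synthetic stand-in data, and meta-learner choice are illustrative assumptions, not the paper's exact configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss")),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # assumed meta-learner
    cv=cv,
)

grid = GridSearchCV(
    stack,
    param_grid={"rf__n_estimators": [100, 300]},  # illustrative grid only
    scoring="roc_auc",
    cv=cv,
)

# synthetic stand-in for the 1,560-case symptom dataset
X, y = make_classification(n_samples=1560, n_features=16, random_state=42)
grid.fit(X, y)
print("best CV ROC-AUC:", round(grid.best_score_, 3))
```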
Authors - Jose R. Rosas-Bustos, Mark Pecen, Jesse Van Griensven The, Roydon Andrew Fraser, Nadeem Said, Sebastian Ratto Valderrama, Andy Thanos Abstract - Post-quantum migration is increasingly constrained by time: deployed cryptographic mechanisms may need to be retired, hybridized, or re-keyed before effective security margins fall below asset-specific policy thresholds. This timing problem is complicated by uncertainty in classical hardware acceleration, algorithmic progress, implementation erosion, and the arrival of cryptographically relevant quantum computers. This paper presents a compact probabilistic pipeline that translates evolving assumptions and evidence into decision-facing migration guidance. The approach couples three layers: (i) a security-trajectory model that encodes expected margin erosion under scenario parameters, (ii) a latent-regime model that represents partially observed risk states and updates them as evidence changes, and (iii) an option-style timing layer that quantifies the diminishing value of delaying migration as thresholds approach. Outputs are conditional on stated assumptions and are intended to be reported with sensitivity bands and lead-time constraints. In practice, the pipeline is intended to be re-run as assumptions and evidence evolve, preserving an auditable trail from scenario inputs to intermediate states and final decision artifacts. The primary deliverables are comparative rankings and conservative “start-by” windows under stated assumptions, rather than single predicted break dates.
Authors - Ronald S. Cordova, Rowena O. Sibayan, Hazel C. Tagalog, Rolou Lyn R. Maata Abstract - Awareness of consumer sentiment helps a business make its marketing strategies more effective and engaging in the current digital marketing context. In traditional marketing scenarios, the lack of a real emotional dimension when views are expressed in real-time contexts has made it challenging for businesses to significantly adjust their marketing campaigns and achieve a greater success rate. The proposed idea focuses on AI and ML-based approaches for sentiment analysis in digital marketing. The framework is made up of seven core steps: data collection, preprocessing and data cleaning, sentiment analysis models, feature extraction and model training, sentiment classification and analysis, insights and decision-making, and application in digital marketing. From social media to e-commerce reviews to online discussions, consumer sentiment data comes from many digital sources. The text for analysis is standardized, and noise is cleaned during data preparation. Sentiments are then classified as positive, negative, or neutral using lexicon-based, machine learning, and deep learning approaches, among other artificial intelligence-based sentiment classification models. The learned knowledge enables businesses to react dynamically to consumer sentiment, target advertisements, and adjust marketing strategies. Businesses will be able to conduct more profitable promotions, communicate with customers better, and monitor real-time sentiment through this AI-driven sentiment analysis platform. The paper emphasizes the benefit of incorporating artificial intelligence in decision-making within digital marketing, even in addressing issues like ambiguous sentiment expression and multi-language data. This paper provides a strategic path toward maximum customer interaction and brand loyalty and also emphasizes the need for sentiment analysis sustained by available data in modern digital marketing.
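As one concrete instance of the lexicon-based branch of such a framework, here is a minimal sketch using NLTK's VADER analyzer on hypothetical reviews; the paper's actual models are not reproduced:

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

reviews = [  # hypothetical e-commerce reviews
    "Absolutely love this product, shipping was fast!",
    "Terrible quality, broke after one day.",
    "It's okay, nothing special.",
]
for text in reviews:
    score = sia.polarity_scores(text)["compound"]
    label = ("positive" if score > 0.05
             else "negative" if score < -0.05 else "neutral")
    print(f"{label:8s} {score:+.3f}  {text}")
```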
Authors - Mandala Nagarjuna Naidu, Bandi Hemalatha, Kadavakallu Viswanath, Kotapati Venkata Pavan, Ms.Ragavarthini Abstract - Autonomous vehicles rely on powerful perception systems with real-time object detection and tracking capabilities. Our paper presents a unified deep learning framework based on YOLOv8n and ByteTrack for multi-class detection of vehicles, pedestrians, traffic signs, and lights on roads. Our work maintains consistent tracking between frames without the limitations of previous works that rely on static images or single-object-type detection. The lightweight model, with only 3.2 million parameters in YOLOv8n, provides a good trade-off between accuracy and efficiency for embedded automotive hardware. Experiments conducted on the COCO validation dataset achieved 52.11% mAP@0.5, with precision and recall values of 63.42% and 47.44%, respectively. The system runs in real time on traffic videos with an average frame rate of 62 FPS and a mean inference time of 10.10 ms. Tests on traffic videos show, on average, 10.15 objects detected with 68.29% confidence. These findings make this approach apt for both autonomous navigation and intelligent traffic monitoring.
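The detection-plus-tracking loop can be sketched with the Ultralytics API, which bundles a ByteTrack tracker; the source path and confidence threshold below are placeholders:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # the 3.2M-parameter nano model

# stream=True yields per-frame results with persistent track IDs
for result in model.track(source="traffic.mp4", tracker="bytetrack.yaml",
                          stream=True, conf=0.25):
    boxes = result.boxes
    if boxes.id is not None:  # tracker IDs are absent on empty frames
        for track_id, cls, conf in zip(boxes.id, boxes.cls, boxes.conf):
            print(int(track_id), model.names[int(cls)], float(conf))
```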
Authors - Mazdak Zamani, Mohammad Naderi Dehkordi, Riham Hilal, Azizah Abdul Manaf, Achyut Shankar, Touraj Khodadadi Abstract - Access to formal financial services remains limited in many developing regions, largely due to economic and infrastructural constraints. This study uses the ISO/IEC 25010 as the evaluation framework to present a software quality assessment of a lending automation system installed in a financial institution in Butuan City, Philippines. The evaluation focuses on five essential aspects of software quality: usability, reliability, functional suitability, performance efficiency, and security. Usability surveys using SUS and UMUX-Lite, operational and performance testing, and an evaluation of security and data privacy compliance were used to gather empirical data. According to the results, the system achieved high performance with an average inference latency of 0.208 ms per record, uptime reliability of ≥99.5%, excellent usability with a mean SUS score of 82.5, and full compliance with data privacy regulations. Predictive analytics, specifically the Random Forest model with isotonic calibration, further enhanced the automated loan assessment’s interpretability and reliability. The system proved that it is appropriate for real-world applications and can encourage financial inclusion in resource-constrained environments, as it exceeded the intended benchmarks for each quality model. To guarantee the long-term adoption of lending automation technologies, the study emphasizes the significance of thorough software quality evaluation in addition to predictive accuracy.
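For reference, the standard SUS scoring rule used in such evaluations can be computed as follows; the responses are hypothetical 1-5 Likert answers to the ten SUS items:

```python
def sus_score(answers):
    """Standard SUS scoring (Brooke, 1996): odd items contribute
    (response - 1), even items (5 - response), sum scaled by 2.5."""
    assert len(answers) == 10
    odd = sum(answers[i] - 1 for i in range(0, 10, 2))   # items 1,3,5,7,9
    even = sum(5 - answers[i] for i in range(1, 10, 2))  # items 2,4,6,8,10
    return (odd + even) * 2.5                            # 0-100 scale

print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # hypothetical answers -> 87.5
```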
Authors - Sai Sundarakrishna, Vedant Maheshwari Abstract - Recent literature has posed LLMs as nonlinear dynamical systems. LLM safety in these modern LLMs is about the systematic and critical monitoring of logit-based oscillations, hidden-state rotations, and entropy fluctuations. Many of these important factors are spectral proxies for the generation of imaginary eigenvalues. These imaginary eigenvalues are, in a way, determinants of the latent oscillation energy. Though the system in its original state space is inherently nonlinear, through the Koopman operator we can linearize the evolution in the lifted space of observables. We design a spectral jailbreak detector that has a sparsely regularized Koopman autoencoder (SR-KAE) as its backbone. We obtain the Koopman operator through this SR-KAE and also obtain the imaginary components of the eigenvalues of that spectral operator. A new risk-score metric is proposed and used to classify prompts as either jailbreak or safe. This becomes a physics-style stability classifier on prompts. We present several test cases and discuss the strengths and limitations of this new system.
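A minimal numeric sketch of the spectral scoring idea, assuming a random stand-in matrix in place of the SR-KAE's learned Koopman operator, an assumed form for the risk score, and a hypothetical threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(scale=0.3, size=(16, 16))  # stand-in Koopman operator

eigvals = np.linalg.eigvals(K)            # generally complex for nonsymmetric K
risk = np.sum(np.abs(eigvals.imag))       # assumed oscillation-energy score

THRESHOLD = 2.0                           # hypothetical calibration value
print("risk =", round(float(risk), 3),
      "->", "jailbreak" if risk > THRESHOLD else "safe")
```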
Authors - Mazdak Zamani, Mohammad Naderi Dehkordi, Riham Hilal, Azizah Abdul Manaf, Achyut Shankar, Touraj Khodadadi Abstract - The rapid development of edge computing, including Internet-of-Things (IoT) nodes, wearable devices, and embedded cyber-physical systems, has increased the need to deploy machine-learning (ML) models that function accurately under harsh resource constraints. Although traditional deep-learning models have high predictive accuracy, they usually require significant computational resources, memory, and power, which makes them infeasible in these settings. This paper provides a thorough analysis of the accuracy-efficiency trade-off of lightweight ML models adapted to resource-constrained devices. We compare classical and modern lightweight classification methods, including linear frameworks, tree-based learners, and shallow and compressed neural networks, on performance metrics covering accuracy, inference latency, memory footprint, and energy usage. Experimental outcomes based on commonly used benchmark datasets show that lightweight models can achieve competitive accuracy at significantly reduced overall computation overhead. The results also provide useful recommendations for selecting and designing ML models for edge intelligence, real-time decision-making, and low-power AI applications.
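A minimal sketch of the kind of accuracy/latency/memory comparison described, using illustrative stand-in models and synthetic data rather than the paper's benchmarks:

```python
import pickle
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]

for name, model in [("linear", LogisticRegression(max_iter=500)),
                    ("tree", DecisionTreeClassifier(max_depth=6)),
                    ("shallow-nn", MLPClassifier((16,), max_iter=300))]:
    model.fit(Xtr, ytr)
    t0 = time.perf_counter()
    acc = model.score(Xte, yte)
    latency_us = (time.perf_counter() - t0) / len(Xte) * 1e6  # per sample
    size_kb = len(pickle.dumps(model)) / 1024  # serialized-model proxy
    print(f"{name:10s} acc={acc:.3f} latency={latency_us:.1f}us size={size_kb:.1f}KB")
```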
Authors - Sri Kavya Swarna, Varun Kumar Reddy Kola, DS Bhupal Naik, Dinesh Reddy Tiyyagura, Lakshmi Charitha Bandaru, Srinivasa Rao P. Abstract - This paper presents PricePulse, a web-based price comparison system that supports consumers with real-time multi-platform price analysis and AI-powered shopping insights. The system aggregates product data from Amazon, Flipkart, and Meesho via SerpAPI’s Google Shopping API and enriches results with recommendations generated by Google’s Gemini AI. Built on Next.js and Flask, PricePulse addresses gaps in the e-commerce ecosystem by eliminating manual price comparison across platforms. The system uses JWT-based authentication, maintains search history in SQLite, and provides an intuitive interface with React and Tailwind CSS. Evaluation shows average response times under 2 seconds and 95% accuracy in price extraction, demonstrating significant potential to help consumers make informed purchasing decisions and save on purchases.
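A hedged sketch of the aggregation step via SerpAPI's public Google Shopping engine; the query, key, and field names follow SerpAPI's documented response format and are placeholders, not PricePulse's internal code:

```python
import requests

params = {
    "engine": "google_shopping",
    "q": "wireless earbuds",          # example product query
    "api_key": "YOUR_SERPAPI_KEY",    # placeholder credential
}
resp = requests.get("https://serpapi.com/search.json",
                    params=params, timeout=10)
for item in resp.json().get("shopping_results", [])[:5]:
    print(item.get("source"), item.get("title"), item.get("price"))
```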
Authors - Mazdak Zamani, Mohammad Naderi Dehkordi, Riham Hilal, Azizah Abdul Manaf, Achyut Shankar, Touraj Khodadadi Abstract - Nowadays, small networks are commonly used by people at home, in laboratories, or in small offices. These networks are often not secured, and an attacker can easily attempt to intrude into them. To prevent this, the network must be monitored continuously so that wrong activity is detected early. Our simple system, called NetSentinels, was developed in this project. It monitors network traffic at all times and displays alert messages in case of a questionable event. We used Snort, which is a free and open-source tool. It assists in identifying attacks such as port scans, ICMP floods, and repeated login attempts. This system does not require the use of sophisticated devices and can thus be installed on ordinary computers. NetSentinels can be applied in small networks to remain safe against attackers and enhance general security practices. In addition to real-time monitoring, the system also stores alert logs, which can be used for later analysis and for understanding attack patterns. The use of a virtual machine environment ensures safe deployment and easy portability across different systems. The system is designed to consume minimal CPU and memory, making it suitable for continuous operation without affecting system performance. Overall, NetSentinels provides a simple, low-cost, and practical approach for improving network visibility and security awareness in small-scale environments.
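To illustrate the kind of Snort rules such a system relies on, here is a hedged sketch that writes two example Snort 2.x rules to a local rules file; the SIDs and thresholds are hypothetical, not NetSentinels' actual rule set:

```python
# Two illustrative Snort rules: a SYN-based port-scan heuristic and an
# ICMP echo-flood heuristic, both rate-limited via detection_filter.
rules = r"""
alert tcp any any -> $HOME_NET any (msg:"Possible port scan"; flags:S; \
    detection_filter:track by_src, count 20, seconds 5; sid:1000001; rev:1;)
alert icmp any any -> $HOME_NET any (msg:"Possible ICMP flood"; itype:8; \
    detection_filter:track by_src, count 50, seconds 10; sid:1000002; rev:1;)
"""

with open("local.rules", "w") as f:
    f.write(rules.strip() + "\n")
print("wrote local.rules; include it from snort.conf and run snort -A console")
```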
Authors - Radha Gawande, Supriya Nara Abstract - Given the complicated nature of the intensive care unit (ICU), immediate and accurate decision-making is vital to the survival of the patient. The problems that healthcare providers struggle with are information overload, slowness of the decision-making process, and human error arising from the growing amount and variety of patient information. Recent developments in artificial intelligence (AI) offer promising solutions, since they facilitate effective analysis of data, pattern detection, and predictive modelling. This is changing the provision of critical care. In this paper, the evolving application of AI in ICUs is discussed, covering its usage, merits and demerits, and technological basis. It also discusses AI methods such as machine learning (ML), deep learning (DL), natural language processing (NLP), expert systems, and predictive analytics; early sepsis detection, clinical decision support systems, automated monitoring, and insight-based treatments with documentation fueled by natural language processing are but a few of the practical ways of applying AI. The advantages of automation and robotics for enhancing productivity and patient care are also discussed, including AI-based medication delivery systems and robotic assistants. Nonetheless, challenges to implementing AI in critical care units include a lack of consensus, algorithmic bias, difficulty understanding model decisions, and heterogeneous data; personalized AI-driven care in the ICU, integration of edge computing and the Internet of Medical Things (IoMT), and reinforcement learning for adaptive patient management are some of the future prospects [1].
Authors - Priyanka Patel, Ashvi Padshala, Moxa Patel Abstract - This paper surveys recent advances in the application of data analysis, machine learning, artificial intelligence, and big data techniques for climate pattern detection. It covers sources of climate data, analytical methods, computational architectures, key challenges, and emerging trends. The focus is on identifying how integrated data-driven methods enhance the understanding, prediction, and interpretability of climate phenomena.
Authors - Rohan Dafare, Supriya Narad Abstract - The rapid spread of big data and the rising need for real-time analytics have exposed the built-in limits of traditional relational database management systems (RDBMS). NoSQL ("Not Only SQL") databases offer schema-less design, horizontal scaling, and flexible data modeling, making them a better fit for handling unstructured and semi-structured data at scale. This paper examines the advantages NoSQL holds over SQL systems by evaluating key traits such as data model flexibility, high-throughput performance, horizontal scalability, and fit with cloud-native setups. Through a careful review of NoSQL literature and practice, we distill real-world findings and suggest ways to select the right database technology based on application needs. Our discussion concludes with guidance to help practitioners and educators understand when and why to use NoSQL solutions instead of, or alongside, classic SQL databases. Modern data-intensive workloads are driven by real-time analytics, large-scale user interactions, IoT streams, and unstructured content, and they demand storage systems capable of delivering high throughput, scalability, and flexible data models. Traditional SQL databases continue to offer strong consistency, ACID guarantees, and structured schema support, making them ideal for transactional applications and environments requiring strict data integrity. However, as data volume, variety, and velocity increase, NoSQL databases have emerged as a powerful alternative, providing horizontal scalability, schema-less design, and optimized performance for distributed and semi-structured data processing.
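The schema contrast can be illustrated in a few lines, using Python's built-in sqlite3 for the relational side and a plain document for the schema-less side; the table and field names are illustrative:

```python
import json
import sqlite3

# Relational side: every row must fit a fixed, declared schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT NOT NULL)""")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
             ("Ada", "ada@example.com"))

# Document side: heterogeneous, nested records need no schema migration.
doc = {"name": "Ada", "email": "ada@example.com",
       "devices": [{"type": "phone", "os": "android"}],  # nested, optional
       "last_login": "2025-01-01T10:00:00Z"}
print(json.dumps(doc, indent=2))
```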
Authors - Anshuman Prajapati, Madhav Desai, Priyanka Patel Abstract - Analysis of facial skin conditions is essential for both dermatological and cosmetic evaluation; however, inter-class similarity and localized texture variations make multi-label classification of characteristics like wrinkles, dark circles, enlarged pores, hyperpigmentation, pimples, and fine lines difficult. The effectiveness of transfer learning for this task is examined in this paper, and an attention-enhanced framework based on EfficientNet-B0 is proposed. In order to highlight the importance of pre-trained feature representations, we first assess a bespoke convolutional neural network (CNN) as a baseline. Using the Convolutional Block Attention Module (CBAM), which combines channel and spatial attention processes to enhance discriminative feature localization while maintaining computational efficiency, we build upon this by using EfficientNet-B0 as the transfer learning backbone. According to experimental data, our CBAM-augmented EfficientNet achieves better class-balanced performance in macro-F1 score than both the baseline EfficientNet and the bespoke CNN. Consistent increases are confirmed by per-class analysis and confusion matrices, even for difficult settings. Additionally, Grad-CAM visualizations show that by concentrating activation on pertinent facial regions, the attention mechanism improves interpretability. These results imply that a promising avenue for multi-label dermatological image analysis is attention-guided transfer learning.
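For concreteness, a compact CBAM sketch (channel attention followed by spatial attention, after Woo et al., 2018); the layer sizes are illustrative rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention from global average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(1, keepdim=True),
                       x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

y = CBAM(32)(torch.randn(2, 32, 56, 56))  # shape preserved: (2, 32, 56, 56)
```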
Authors - K.Surya Teja, Immanuel Anupalli, P.Sudheer Abstract - Maximum power point tracking (MPPT) is a vital module of photovoltaic (PV) systems. Traditional MPPT techniques struggle in complex and ever-changing scenarios, and the solar system's output characteristic curve shows multi-peak phenomena owing to variations in temperature and light concentration. This paper proposes an adaptive hybrid RIME optimization technique that enhances the exploratory capabilities of the method during the initialization phase by integrating tent mapping. The goal is to improve feature selection tasks and MPPT for PV systems under partial shading conditions. It uses piecewise mapping to optimize the algorithm's parameters and strike a fair balance between global exploration and local exploitation. The search method is dynamically adjusted with an adaptive inertia weight, which further increases convergence speed, search efficiency, and the algorithm's adaptability. In order to reduce computational costs and increase classification accuracy, the hybrid method employs nature-inspired metaheuristics for feature selection, resulting in optimal subsets. In terms of tracking speed, precision, and stability in the PV MPPT environment, the method beats PSO-BOA, conventional RIME, IRIME, and HRIME approaches.
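The tent-map initialization ingredient can be sketched as follows, with illustrative bounds and population size; the paper's exact parameterization may differ:

```python
import numpy as np

def tent_map_init(pop_size, dim, lo, hi, mu=2.0, seed=1):
    """Chaotic population initialization: iterate the tent map and scale
    each iterate into the search bounds (mu=2 is the classic parameter)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99, size=dim)  # avoid the map's fixed points
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.where(x < 0.5, mu * x, mu * (1.0 - x))  # tent map step
        pop[i] = lo + x * (hi - lo)                    # scale to bounds
    return pop

# e.g. duty-cycle candidates for an MPPT search in [0, 1]
duty_cycles = tent_map_init(pop_size=20, dim=1, lo=0.0, hi=1.0)
print(duty_cycles[:5].ravel())
```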
Authors - Gauree Prabhakar Sayam, Supriya Narad Abstract - Chronic non-communicable diseases like diabetes, heart disease, and obesity continue to increase globally, comprising 74% of all deaths, as noted by the World Health Organization in the 2025 progress monitor on non-communicable diseases. This work describes the design and deployment of Health Risk Advisor, an AI (artificial intelligence) web application powered by machine learning that predicts early risks and provides personalized recommendations on disease prevention. The integration of ensemble models such as Random Forest and XGBoost into a rule-based advisory engine allows the application to achieve more than 90% accuracy in risk classification, addressing barriers to healthcare access in underserved regions, such as rural India. From architecture and design, through healthcare applications and benefits, to ethical AI challenges and considerations, this work discusses every aspect of the new technology using diverse datasets that inform practice and recommend ethical AI. Evaluations showed reductions of 20-30% in the burden from NCDs when the application is engaged as a preventive healthcare intervention, aligned with global health equity goals.
Authors - Saurabh Nimje, Reena Satpute, Utkarsha Pacharaney, Anup Bhitre Abstract - Breast cancer is considered one of the top causes of mortality among women across the world, making early and accurate diagnosis a key element in improving patient outcomes. This work introduces breast cancer detection techniques for ultrasound imaging by means of Contrast Limited Adaptive Histogram Equalization (CLAHE) and an ensemble deep learning framework. The data used was a balanced dataset comprising 200 ultrasound images labeled benign, malignant, and normal. CLAHE preprocessing proved quite useful in terms of image quality, as it provided edge and local contrast enhancement and let the lesions be seen more effectively. A number of convolutional neural network (CNN) architectures were tuned collectively in an ensemble arrangement with soft voting and weighted averaging, which produced improved classification performance. The proposed model returned an accuracy of 93.7%, sensitivity of 92.5%, specificity of 94.5%, and AUC of 0.97, better than the baseline general CNN models and the single CNN models with CLAHE. The findings indicate that CLAHE-enhanced ensemble learning is a robust, reproducible, and promising tool for breast cancer detection in ultrasound imaging that holds great promise for clinical practice.
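The CLAHE preprocessing step can be sketched with OpenCV; the clip limit and tile grid below are typical defaults, not necessarily the values used in the paper, and the file path is a placeholder:

```python
import cv2

# Load a grayscale ultrasound frame (placeholder path).
img = cv2.imread("ultrasound.png", cv2.IMREAD_GRAYSCALE)

# CLAHE: local histogram equalization with a contrast clip limit,
# enhancing edges and local contrast without over-amplifying noise.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

cv2.imwrite("ultrasound_clahe.png", enhanced)
```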
Authors - Naga Sujitha Vummaneni, Srilakshmi Bharadwaj, Himani Varshney Abstract - The global healthcare landscape is currently undergoing a radical transformation, driven by the dual catalysts of the post-pandemic necessity for remote care and the rapid proliferation of digital infrastructure in developing economies. This research paper presents a comprehensive study on the design, development, and strategic positioning of a desktop-based "Healthcare Management System with Telemedicine." Developed using the Java ecosystem—specifically Java Swing for the graphical user interface (GUI) and Java Database Connectivity (JDBC) for persistence—the system integrates third-party WebRTC services via Jitsi Meet to facilitate real-time virtual consultations. Unlike purely administrative Hospital Management Systems (HMS), this solution integrates clinical workflows with administrative tasks, offering a unified platform for patient authentication, appointment scheduling, and remote video consultation. This report goes beyond technical implementation to provide an exhaustive analysis of the Indian digital health market, projected to reach USD 106.97 billion by 2033. It critically evaluates market leaders such as Practo, Zocdoc, and Teladoc to identify structural gaps in service delivery, particularly regarding cost-barriers and infrastructure dependency in Tier-2 and Tier-3 cities. By adopting the Prototyping Model of software engineering, the research iteratively addresses requirements for security, usability, and legacy hardware compatibility. The findings suggest that while cloud-native SaaS models dominate the current market, lightweight Java-based desktop solutions offer distinct advantages in data sovereignty, offline capability, and operational stability for resource-constrained healthcare settings. The paper concludes with a roadmap for integrating Artificial Intelligence (AI) for predictive diagnostics and expanding into mobile ecosystems, positioning the developed system as a viable component of the emerging Global Initiative on Digital Health (GIDH).
Authors - Nevil Dhinoja, Shubh Patel, Binal Kaka Abstract - Gradient conflicts, computational complexity, and optimization instability are some of the issues with model-agnostic meta-learning (MAML). By combining three complementary mechanisms—task-aware gradient modulation, meta-level regularization, and adaptive optimization management—this work proposes an organized design framework to increase the stability and robustness of MAML-based optimization. The paradigm provides a solid basis for the methodical creation of more reliable and scalable meta-learning systems, even though empirical evaluation is left for later research.
Authors - Liyan Grace Shaji, Lakshmi K.S, Shazil Mohammad Iqbal, Don Basil Saj, Tom Thomas Abstract - Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that affects skills related to social interaction and communication. It is currently estimated to be prevalent in 1 among 100 children across the world. Unfortunately, present diagnostic methods, like ADI-R and ADOS, rely on questionnaires, which makes them time-consuming, expensive, and skill-dependent. To address these challenges, FaceIt was developed: a deep learning-based diagnostic tool that integrates real-time image capture and classification for rapid and accessible ASD screening. The tool efficiently processes facial images captured or uploaded by users by performing preprocessing steps like cropping and alignment. A Convolutional Neural Network (CNN) extracts facial features to detect ASD, while a Bayesian CNN captures uncertainty in predictions. Its user-friendly interface allows self-administration without professional supervision. The faster and more accessible preliminary screening also facilitates timely follow-up diagnostics if needed, making this an optimal solution for widespread use.
Authors - Nabeela Kausar, Ramiza Ashraf, Naila Ashraf, Romana Ali Abstract - The conventional way of preparing an advertisement is an elaborate process that incorporates human subjectivity and relies heavily on creative human resources. Making advertisements through manual effort can be regarded as an inefficient use of capital for small to medium-scale businesses due to the increased cost of production. Even with current advancements in the development of generative techniques, including LLM-based strategies for advertisement generation with prompts, creating apt prompts for the depiction of products requires human expertise, making them less accessible. In order to overcome the challenges presented by current models, we introduce a fast, affordable, and scalable platform for the automation of advertisement generation for products, leveraging the capabilities of pre-trained diffusion models. The proposed system requires no training or fine-tuning, since everything is performed at the inference level. The AI-aware design system assists in the identification of color schemes and attributes from the images of the products, whereas the descriptions and categories of the items help identify the theme and pattern recommendations for advertisements. These recommendations are channeled through a pre-trained Stable Diffusion model guided by the LLaMA language model.
Authors - Naga Sujitha Vummaneni, Ishan Kumar, Adarsh Mittal Abstract - Digital evidence is now central to cyber investigations, legal trials, and organizational audits. However, traditional evidence management systems rely heavily on centralized storage, making them vulnerable to unauthorized modifications, insider attacks, and incomplete audit trails. This research introduces a Blockchain-Based Evidence Management System designed to secure digital evidence through immutability, transparent verification, and tamper-proof storage of evidence hashes. The proposed solution integrates JavaFX as a user-friendly interface, MongoDB for storing metadata, SHA-256 for generating unique evidence fingerprints, and the Polygon Mumbai blockchain for permanent registration of hash values. Users can upload evidence, verify its authenticity, and review all actions through a detailed activity log. Experimental results show that blockchain-backed verification reliably identifies tampered evidence and significantly strengthens the chain of custody. The system offers an efficient, scalable, and secure enhancement to traditional evidence-handling methods.
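The evidence-fingerprinting step can be sketched with Python's standard hashlib; the on-chain registration call is omitted here and the file path is a placeholder:

```python
import hashlib

def evidence_fingerprint(path: str) -> str:
    """SHA-256 digest of a file, streamed in chunks so large evidence
    files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Verification: recompute and compare against the hash registered on-chain.
stored = evidence_fingerprint("evidence.bin")          # placeholder file
assert evidence_fingerprint("evidence.bin") == stored  # evidence unchanged
```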
Authors - Naga Sujitha Vummaneni, Adarsh Mittal, Ishan Kumar Abstract - The rapid growth of digital platforms has transformed the way individuals buy and sell goods. However, college students still largely depend on informal and unorganized methods for peer-to-peer trading. This paper presents UNIBID, a Java-based online marketplace designed specifically for college students to enable secure, reliable, and efficient product trading within the campus community. The system allows users to register, authenticate, list products, browse products, search and filter items, and perform secure purchase transactions. The backend is implemented using Java [1], while database operations are handled using a reliable database management system. The proposed system eliminates the drawbacks of manual trading such as lack of trust, delay in communication, and absence of product verification. Experimental results show that UNIBID significantly improves transaction speed, transparency, and user convenience compared to traditional methods. The system is scalable, secure, and suitable for deployment in real academic environments.
Authors - Nishu, Kajal, Pavitra Jangir, Ayush Kumar Gupta, Kiran Dikshit, Ajay, Vishal Shrivastava, Akhil Pandey, Ram Babu Buri, Harveer Choudhary Abstract - Despite the availability of digital voting systems, prior studies continue to identify gaps such as weak voter authentication, security vulnerabilities, and insufficient fraud prevention mechanisms. This paper presents BotoSafe, a secure and user-centered electronic voting (e-voting) platform developed for student government elections within educational institutions. The system implements multifactor authentication (MFA) using one-time password (OTP) verification and facial recognition with an anti-spoofing mechanism. To ensure the confidentiality and integrity of the voting process, we employ the Advanced Encryption Standard in Galois/Counter Mode (AES-GCM). A developmental research design with a quantitative approach was used for the system development and evaluation. A mock election involving 84 students from Western Mindanao State University–Pagadian Campus was conducted, followed by a post-assessment survey. Results from the System Usability Scale (SUS) yielded a score of 72.08, indicating acceptable usability. User responses further showed that the system is easy to use, safe, and trustworthy for student elections. These findings indicate that BotoSafe is a viable e-voting solution for student government elections and may be further enhanced in future studies.
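The AES-GCM step named in the abstract can be sketched with the cryptography library; the key handling, ballot format, and associated data below are placeholder assumptions, not BotoSafe's actual scheme:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

ballot = b'{"voter":"anon-7f3a","choice":"candidate-2"}'  # hypothetical ballot
nonce = os.urandom(12)  # 96-bit nonce, must be unique per message
# the associated data binds the ciphertext to its election context
ct = aesgcm.encrypt(nonce, ballot, b"election-2025")

# Decryption verifies the GCM tag; any tampering raises InvalidTag.
assert aesgcm.decrypt(nonce, ct, b"election-2025") == ballot
```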
Authors - Deepali Newaskar, Saurabh Parhad, Anjali Yadav, Siddhi Shinde, Samika Karne, Atharva Nangare, Misbah Shaikh Abstract - This study investigates the effectiveness of a student-driven development (SDD) approach utilizing ChatGPT to create SQL-based inventory management systems for Micro, Small, and Medium Enterprises (MSMEs), with a focus on contributing to Sustainable Development Goal (SDG) 12 (Responsible Consumption and Production). A mixed-method study involving 30 student-MSME collaborations was conducted to evaluate the resulting systems based on stock accuracy, reporting efficiency, and user satisfaction. The quantitative results demonstrate significant performance enhancements, with systems achieving average scores of 7.7 for stock accuracy, 7.63 for reporting efficiency, and 8.17 for user satisfaction (on a 10-point scale). Technical analysis showed ChatGPT's pivotal role in input validation (15 cases), SQL query construction (8 cases), and report optimization (7 cases). The most frequent SQL commands were SELECT (14 instances), UPDATE (11 instances) and INSERT (5 instances), highlighting robust data handling. The findings confirm that integrating AI tools like ChatGPT within an SDD framework can deliver practical, scalable, and sustainable digital solutions for MSMEs, advancing digital transformation while reinforcing the applied role of higher education in achieving global sustainability goals. These results highlight the potential of student-led AI-assisted development as a scalable model for MSME digital transformation aligned with SDG 12.
Authors - Lavanya K, Srinidhi G A Abstract - The pace at which artificial intelligence (AI) has been adopted in decision-critical applications has, in turn, elevated the need for AI systems that are not merely accurate but also transparent and comprehensible. Although complex machine learning models can be highly predictive, their black-box nature raises questions about trust, accountability, and usability in real-world AI-based systems. This paper examines the balance between accuracy and transparency in interpretable machine intelligence by pointing to the trade-offs that exist between predictive accuracy and model explanation. A structured framework is proposed for comparing and investigating black-box and interpretable models on the basis of quantitative performance measures and explainability measures. The article highlights the importance of post-hoc explainable AI methods in improving the transparency of models without significantly affecting their accuracy. Through a systematic assessment, the paper shows that interpretable machine intelligence may be used to support reliable decisions while maintaining competitive predictive performance. The results contribute to the creation of credible AI-based systems by providing insight into the design of models that effectively balance accuracy and interpretability across different application settings.
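One post-hoc route of the kind the paper highlights can be sketched with SHAP on a black-box ensemble; the dataset and model here are illustrative stand-ins:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a black-box model, then explain it post hoc without retraining
# it as an intrinsically interpretable model.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:5])  # per-feature attribution per sample
# sv is an array (or, in older SHAP versions, a list of arrays per class)
print(type(sv))
```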
Authors - Dev Kumar Prajapat, Chakshum Mittal, Abhishek Sharma, Jatin Yadav, Mohammad Shaad, Vishal Shrivastava, Ram Babu Buri, Akhil Pandey Abstract - This paper presents Printify, a real-time, location-based service platform revolutionizing document printing workflows via a dual-interface architecture: a Flutter-based mobile app for end-users and a React/TypeScript Progressive Web Application (PWA) for shopkeepers. Addressing inefficiencies like delays, security vulnerabilities, and service discovery limitations, Printify leverages Firebase for instantaneous cross-platform state synchronization. The PWA utilizes Service Workers for offline functionality and secure protocols enabling payment-conditional document release. Evaluations show a 73% reduction in processing latency, 95% improvement in service discovery, and Lighthouse scores exceeding 92. The platform achieves PCI-DSS compliance and end-to-end encryption, establishing a novel hybrid mobile-web paradigm for location-based services.
Authors - Suresh Reddy, Immanuel Anupalli, P.Sudheer Abstract - This paper presents a comparative framework for detecting knee and elbow form errors in overhead press videos using machine learning. Using more than 2,000 videos from the Fitness-AQA dataset, three models are evaluated: an Inception-based Long Short-Term Memory (LSTM) network with residual connections, a custom stacked LSTM network, and a feedforward neural network baseline. Human pose keypoints are extracted using MediaPipe, and frame-to-frame differences are computed to encode motion dynamics. The dataset includes temporally localized annotations with explicit start and end timestamps for knee and elbow errors, resulting in a class-imbalanced classification task. Model performance is evaluated using accuracy, precision, recall, F1-score, and confusion matrices. Experimental results demonstrate that the Inception-based LSTM consistently outperforms the alternative architectures, followed by the custom LSTM, while the feedforward baseline performs substantially worse. These findings highlight the importance of temporal modeling and multi-scale feature extraction for fine-grained Action Quality Assessment in weightlifting.
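The keypoint-extraction stage can be sketched with the standard MediaPipe Pose API; the video path is a placeholder for a Fitness-AQA clip, and the downstream feature layout is an assumption:

```python
import cv2
import mediapipe as mp
import numpy as np

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("overhead_press.mp4")  # placeholder clip

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.pose_landmarks:
        # 33 normalized (x, y) landmarks per detected frame
        pts = [(lm.x, lm.y) for lm in res.pose_landmarks.landmark]
        frames.append(np.array(pts).ravel())
cap.release()

kp = np.stack(frames)        # (T, 66) keypoint sequence
motion = np.diff(kp, axis=0)  # frame-to-frame differences fed to the LSTMs
print(kp.shape, motion.shape)
```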
Authors - Tanvir Ahmed Fahim, Md. Sohel Rana, Shamsul Arefin Bipul, Tanvir Hasan, Niyaz Mahmud MD. Mujahid, Hridoy Datta Abstract - The rapid expansion of Information and Communication Technologies (ICT) has transformed financial inclusion from a policy objective centered on access into a data-driven process mediated by digital identity systems, algorithmic credit assessment, and fintech platforms. While ICT-enabled financial inclusion promises efficiency, scalability, and outreach to marginalized populations, it simultaneously raises profound concerns relating to personality rights, including identity, dignity, autonomy, privacy, and reputation. This paper advances a normative and conceptual analysis of Personality Rights–Based Financial Inclusion through ICT, arguing that contemporary financial systems increasingly construct a digital economic identity that determines an individual’s financial opportunities and exclusions. Such identities, often generated through opaque algorithms and data profiling, risk reducing individuals to abstract data points, thereby undermining human dignity and meaningful self-determination. The paper develops a conceptual framework that positions ICT as the mediating layer between individuals and financial inclusion outcomes, with personality rights functioning as essential normative safeguards. Central to this framework is the articulation of the Right to Economic Self-Representation, which recognizes the individual’s entitlement to access, understand, contest, and contextualize their digital financial profile. By reframing financial inclusion as a rights-dependent process rather than a purely technological or developmental intervention, the paper highlights the dangers of algorithmic exclusion, permanent economic stigmatization, and surveillance-based inclusion. The study contributes to interdisciplinary scholarship at the intersection of ICT law, financial regulation, and human rights by proposing a rights-compatible model of inclusive finance. It argues that embedding personality rights into the design and governance of financial technologies is crucial to ensuring that financial inclusion operates as a mechanism of empowerment rather than control. The paper concludes that sustainable and legitimate digital financial inclusion must balance technological innovation with the preservation of human dignity and agency.
Authors - Janina Odette S. Vidallon, Apolinar P. Datu, Dominic T. Urgelles, Aljen B. Cabrera, Erika Joy F. Lagos, Lady Anne R. Logdat, Shenclaire A. Galero, Ericka Jean M. Amparo Abstract - Brain-computer interface systems can help people who are unable to communicate due to paralysis or severe motor disabilities. In this work, we implemented an EEG-based P300 speller that allows users to select characters by focusing on a visual stimulus. The system functions by means of the P300 signal that appears when the user identifies their target character. We developed a complete pipeline that includes preprocessing of EEG data, feature extraction, and machine learning model classification. The system was tested using the BNCI Horizon 2020 P300 dataset, and the results showed that character selection accuracy ranged from 82% to 86%. Random Forest performed better compared to other classifiers in our implementation. The system was designed in a modular way so that future improvements can be added easily. This implementation shows that EEG-based communication systems can be developed using accessible tools and can support basic communication for people with severe motor impairments.
Authors - Sejal Vaishnav, Sanskrati Jain, Suman Dikshit, Vashvi Srivastava, Shailendra Sharma, Vaishnav Preeti Prakash, Vishal Shrivastava, Ram Babu Buri, Mohit Mishra Abstract - Traditional object detection systems are limited in their ability to capture the complexity of urban scenes, often overlooking critical spatial, contextual, and functional relationships required. This paper introduces Urban Scene Intelligence, a Semantic Anchor-and-Expand (SAE) framework that integrates multi-modal perception, structured scene graph construction, and controlled narrative generation to produce grounded descriptions of urban environments. The proposed modular architecture incorporates OWL-ViT for open-vocabulary object detection, SegFormer for semantic segmentation, DepthAnything for spatial depth estimation, Qwen2-VL for attribute enrichment, and OCR for extracting textual context. Unlike end-to-end multimodal models, the three-stage pipeline explicitly separates visual perception, symbolic reasoning, and language generation, thereby improving interpretability and factual grounding. By unifying heterogeneous visual cues into a symbolic representation and generating context-aware descriptions from this representation, the SAE framework establishes a transparent and extensible approach to urban scene understanding in complex real-world environments.
Authors - Sara OULED LAGHZAL, Abdelmajid El Ouadi Abstract - Musculoskeletal disorders (MSDs) are a significant occupational health problem in the automotive industry [1]. Manual and semi-automated assembly work often exposes workers to repetitive movements and non-neutral wrist positions. Conventional ergonomic assessments are often ad hoc and subjective, limiting their ability to capture positional variations and cumulative strain over time. This article proposes a framework for continuous improvement using artificial intelligence that combines convolutional neural network (CNN)-based classification of wrist position with a rapid upper limb assessment (RULA) [2] in real time. The convolutional neural network distinguishes between acceptable and unacceptable wrist postures during task execution, and the RULA layer translates the posture data into standardised biomechanical risk indicators. Empirical tests in an industrial context have shown that the hybrid CNN-RULA system reliably detects even subtle deviations in wrist position that are difficult to detect by visual observation. This enables comfortable, data-driven, proactive interventions in an Industry 4.0 environment.
Authors - Darshika Dudhat, Riya Jagani, Sarita Thummar Abstract - Plant diseases represent one of the major threats to the world's food security and agricultural productivity. In this paper, we present a novel deep CNN model, improved with Squeeze-and-Excitation (SE) modules and Attention Gates (AGs), for multi-class plant disease classification across five crops: apple, maize, grape, potato, and tomato. With a large image dataset and a well-designed training strategy, the established model demonstrates good performance in all respects, including 99% accuracy, a 0.99 F1-score, and excellent specificity. Exploratory studies are performed through feature visualization and Grad-CAM interpretability. The strong robustness and interpretability of the model give it high potential for practical agricultural applications. The main contributions of this paper are: • an attention-based deep CNN model that combines SE blocks and Attention Gates (AGs), further improving channel-wise and spatial feature learning for plant disease classification; • Grad-CAM visualizations that show disease-specific regions on leaves, with state-of-the-art performance on five representative crop disease classification tasks; • attention mechanisms that greatly improve the model's ability to focus on disease-related features, as evidenced by strong generalization performance across a wide array of disease classes.
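For concreteness, a minimal Squeeze-and-Excitation block, one of the two attention components the model combines; the reduction ratio is a common default, not necessarily the paper's setting:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation (Hu et al., 2018): global average pooling
    ("squeeze") followed by a bottleneck MLP producing per-channel
    reweighting factors ("excite")."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: (B, C)
        return x * w.view(*w.shape, 1, 1)  # excite: channel reweighting

out = SEBlock(64)(torch.randn(2, 64, 32, 32))  # shape preserved
```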
Authors - Ekanand Mungra, Roopesh Kevin Sungkur Abstract - Today's increasing energy demand, particularly in developing regions, supports both economic growth and the improvement of living conditions. However, these regions experience frequent power outages, partly due to the high energy consumption of commercial buildings. This research examines energy usage in smart commercial buildings by analyzing data from in-building sensors, collected at ten-minute intervals for more than four months. The aim is to forecast the energy consumption of these buildings while utilizing AI-generated scenarios to produce simulations resembling real-life energy usage situations, thereby improving the model's predictions. In the era of smart buildings, accurately predicting energy usage not only facilitates cost savings for businesses, but also presents an opportunity for revenue generation, particularly through surplus energy supplied back to the grid from renewable sources such as solar panels. Unlike conventional approaches, this research employs MLPRegressor, a sophisticated model, to analyze and predict intricate patterns of energy usage from the sensor data. This research is particularly significant for advancing energy management strategies in the commercial sectors of developing countries, promoting energy independence and efficiency.
Authors - Areej Almazroa, Sara Albahlal, Dalia Alswailem, Dhay Altamimi, Aljoharah Aldaej, Heba Kurdi Abstract - Monitoring marine litter is essential for planetary and human survival. This study proposes a novel framework integrating satellite data and big data analytics to assess marine litter distribution in coastal and oceanic environments. Leveraging open-source imagery from COPERNICUS Sentinel-2 and LANDSAT, the framework utilizes reflectance methodologies and image processing to identify and classify marine debris, focusing on spectral bands from visible blue (490 nm) to short-wave infrared (1610 nm). A pilot case study in San Diego, California, demonstrates the approach’s feasibility. The study explores the potential of microwave radiometry and machine learning for material detection and contour analysis, showing how satellite data can support dynamic and cross-platform monitoring systems. Results validate the use of remote sensing technologies to map plastic debris, providing a replicable methodology that combines emergent (e.g., satellites, drones) and traditional (e.g., sampling) techniques. This approach contributes to a deeper understanding of plastic pollution pathways, sources, and impacts across economic sectors. By generating harmonized data on mismanaged plastic waste, the study informs sustainability strategies and circular economy practices, helping redesign systemic plastic management and supporting local and global environmental governance.
Authors - Sonali S. Gaikwad, Jyotsna S. Gaikwad Abstract - In this semi-systematic literature review, a detailed study of the role of Human-Computer Interaction (HCI) in creating game-based solutions for Attention-Deficit/Hyperactivity Disorder (ADHD) among children is conducted. Six peer-reviewed research studies were selected. The study demonstrates that HCI can serve as a major therapeutic mechanism by transforming digital platform-based cognitive training into engaging, interactive experiences. These approaches not only improve focus but also enhance the overall effectiveness of interventions. Key findings from the analyzed studies are discussed, and future research directions are proposed, including multimodal hybrid systems with adaptive personalization and accessibility features to further improve outcomes for children with ADHD.
Authors - Bharathi A, Mohan Kumar P, Subha B Abstract - Rupture of an intracranial aneurysm results in catastrophic subarachnoid hemorrhage with a 30–40% fatality rate. Although treatment decisions are guided by clinical risk scores (PHASES, ELAPSS), recent research suggests that morphological analysis and computational fluid dynamics (CFD) may offer better rupture prediction. This study looked at 92 middle cerebral artery aneurysms from the CMHA dataset, which included 71 that had ruptured and 21 that had not. We evaluated four feature sets: Clinical-Basic (13 variables), Clinical-Scores (adding PHASES and ELAPSS; 15 variables), Scores and Morphology (24 variables), and Full (28 variables). We trained logistic regression models using 5-fold cross-validation with a 20% test set. We used bootstrap validation (1000 iterations) and Bonferroni-corrected feature importance analysis to reduce overfitting. The AUC for the Clinical-Basic set was 0.891±0.063. Performance was enhanced to a maximum AUC of 0.976±0.034 by adding PHASES and ELAPSS. The Full model achieved an AUC of 0.981±0.029, with neither morphological nor hemodynamic variables giving much further improvement. Significant variance was revealed by bootstrap analysis (95% CI: 0.764-0.998). At 90% specificity, the test set's AUC was 0.933, but its sensitivity was only 14.3%. The primary contributors were ELAPSS (F=143.2, p<10⁻¹) and PHASES (F=38.4, p<10⁻¹), whereas morphological and hemodynamic characteristics did not exhibit any significant correlations. Clinical scores demonstrated strong discrimination, but CFD-derived parameters offered minimal additional value in this small, imbalanced, single-center group. The wide confidence intervals and class imbalance limit clinical recommendations. Further validation in larger, multicenter studies is necessary.
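The evaluation protocol can be sketched as follows, with synthetic stand-in data mirroring the 92-case class imbalance; the actual feature sets are not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in: ~71 positive / 21 negative, 15 features
# (roughly the Clinical-Scores set size)
X, y = make_classification(n_samples=92, n_features=15, weights=[0.23],
                           random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"AUC = {scores.mean():.3f} ± {scores.std():.3f}")
```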
Authors - Tirupathi Rao Dockara, Manisha Malhotra Abstract - AI and data platforms are increasingly expected to deliver end-to-end business automation under rapid market and regulatory change. However, prevailing platform construction strategies remain predominantly top-down: teams standardize a generic capability stack and subsequently customize it for heterogeneous domains through code, integration glue, and service forks. This approach amplifies technical debt, fragments governance, and makes continuous adaptation expensive. This paper introduces the Inverse Vertex Pyramid (IVP), a design pattern that reverses the direction of platform derivation. IVP begins at the use-case vertex by conducting rigorous analysis of high-value specialized automation scenarios and generalizes them into explicit, machine-actionable platform descriptors (metadata models, domain ontologies, policy/workflow specifications, and capability contracts) that form a stable, reusable core. Specialization is realized primarily via declarative configuration and policy changes, rather than code rewrites. We formalize IVP as a pattern, propose a reference architecture separating control and execution planes, and provide a comparative analysis against layered architectures, domain-driven design, and microservice platforms. A proof-of-concept walkthrough in regulated claims automation illustrates the generalization mechanism and highlights how IVP can reduce re-engineering, improve governance consistency, and accelerate time-to-market. The paper concludes with limitations, threats to validity, and a research agenda for automated use-case mining, formal verification of policies, and quantitative evaluation of platform agility.
Authors - Nishant Shah, Ansh Bajpai, Shrivaths S. Nair, Manas Verma K, Sabitha S Abstract - Digital accessibility in higher education is a key requirement to ensure the inclusion of students with hearing disabilities. However, institutional platforms often present barriers that limit autonomy, understanding of information, and full participation. The objective of this study was to evaluate the user experience of students with hearing disabilities on the EVIRTUAL, SGA, and SIS platforms of the Technical University of Manabí, identifying perceptions, accessibility barriers, and improvement proposals. A descriptive, exploratory study with a mixed-methods approach was conducted. The population consisted of seventy-eight students with hearing disabilities registered in the Inclusion Unit, from which an intentional subsample of ten participants was selected. A structured survey with Likert-type scales and a participatory observation form were applied in real interaction situations with the platforms. Quantitative analysis was carried out using descriptive statistics, while qualitative information was organized into thematic categories. The results show that half of the participants achieve full autonomy in the use of the platforms, forty percent require intermittent support, and the rest need constant assistance. Regarding clarity of information and content comprehension, intermediate responses predominate, which reveals recurrent difficulties. The main barriers identified were a confusing interface, non-intuitive navigation, insufficient visual supports, and the need for external assistance. The study proposes improvements such as customizable subtitles, step-by-step visual guides, an accessibility button, a sign language interpreter avatar, and optimization for mobile devices, aimed at strengthening autonomy and user experience.
Authors - Sabarishwaran V, Gomathi K, Andey Phani Vinay, Jagadeeswaran V, Ranjith Kumar M Abstract - The rapid expansion of digital commerce platforms has significantly transformed online transactional systems; however, conventional centralized architectures continue to face critical challenges related to security, transparency, data integrity, and trust management. Traditional e-commerce systems rely heavily on centralized databases, making them vulnerable to data tampering, unauthorized access, fraudulent transactions, and single points of failure. To address these limitations, this paper proposes a secure, scalable, and modular web-based e-commerce system that is architecturally designed for integration with blockchain technology and smart contracts. The proposed system is implemented using widely adopted web technologies, with a responsive frontend and a robust backend to support essential functionalities such as user authentication, product catalog management, shopping cart operations, order processing, inventory management, and administrative control. The architecture emphasizes separation of concerns, enabling flexibility, maintainability, and future extensibility. A key contribution of this work lies in the incorporation of a blockchain-ready framework that enables immutable transaction recording and enhanced traceability across the entire transaction lifecycle. Smart contracts automate transaction validation and order execution. The system also introduces an AI-based anomaly detection mechanism using a Deep Q-Network to detect fraudulent behavior. Experimental validation demonstrates reliable performance and scalability.
Authors - Sowmyashree N, Madhu Sunkanur, Impana M, Suchithra B S, Hemalatha P G Abstract - The growth of social media has created complex cyber systems in which vast quantities of interactions raise substantial issues of misinformation, privacy invasion, identity deception, and destructive behavioural tendencies. Regular participation in systems of this scale requires sophisticated mechanisms that can judge user intent, content validity, and suspicious activity in real time. The overall aim is to develop a universal trust-calculation system that is more secure and effective at preserving privacy while improving the accuracy of identifying suspicious or malicious users on social sites. The proposed Multi-Layer Federated Trust Framework combines peer-based user reputation scoring, feature-based content authenticity detection, federated aggregation of trust indicators, and behavioural anomaly detection. These components rely on secure aggregation and decentralized learning to avoid exposing raw information and to enable the computation of trust at scale. The algorithm is experimentally validated, with reported scores of 95.2, 94.1, 93.5, and 93.8, a minimum latency of 65 ms, and a privacy-preservation score of 0.98. Overall, the results indicate a viable and holistic solution that secures interactions, blocks malicious behaviour, and encourages trust in real social media settings.
Authors - Md. Shahidul Islam, Hasina Islam Abstract - Cross-domain recommendations are imperative in the growing tourism industry and with the increasing means of communication. Preference drift, preference transfer, and unfamiliarity with places have an overbearing impact on recommender systems. Most approaches do not address geometric misalignment across domains, which is essential for cross-domain preference shift analysis in recommendation tasks. We propose Procrustes-Based Contextual Thompson Sampling (P-CTS) for Cross-Domain POI Recommendation, integrating adversarial domain-invariant learning, optimal geometric alignment via Procrustes transformation, and adaptive Thompson Sampling with sleeping bandit management. First, the embeddings are constructed to model the preference drift across the domains. Next, the Procrustes transformation aligns source and target embedding spaces via optimal rotation, scaling, and translation. In the last phase, we initialize Beta priors with similarity-weighted pseudo-counts derived from the aligned embeddings. The experiments on Gowalla and Foursquare across domains demonstrate 5.1% improvements in Precision@5 and 9.75% improvements in cold-start accuracy, suggesting an adaptive exploration-exploitation trade-off.
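As a rough illustration of the geometric-alignment step named above, the sketch below computes an orthogonal Procrustes mapping between two embedding spaces with SciPy. The matrices, dimensionality, and domain labels are hypothetical stand-ins; the paper's adversarial invariance learning and Thompson Sampling prior initialization are not reproduced.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Hypothetical user embeddings learned separately in each domain;
# rows must correspond to the same anchor users in both matrices.
source_emb = np.random.randn(500, 64)   # e.g., Gowalla-side embeddings
target_emb = np.random.randn(500, 64)   # e.g., Foursquare-side embeddings

# Centering handles the translation component; orthogonal_procrustes
# then returns the optimal rotation R and a global scale term.
src_c = source_emb - source_emb.mean(axis=0)
tgt_c = target_emb - target_emb.mean(axis=0)
R, scale = orthogonal_procrustes(src_c, tgt_c)

# Map source-domain embeddings into the target space; the optimal
# uniform scaling is scale / ||A||_F^2 for the centered source matrix A.
aligned = (scale / (src_c ** 2).sum()) * src_c @ R
```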
Authors - Binh Pham Nguyen Thanh, Long Duong Phi, Phung Thi-Kim Nguyen, Nhan Thi Cao Abstract - The rapid proliferation of Internet of Things (IoT) devices has significantly increased the digital attack surface, which, in turn, has raised network vulnerability to sophisticated Distributed Denial of Service (DDoS) campaigns that can reduce the effectiveness of traditional signature-based Intrusion Detection Systems (IDS). Furthermore, conventional Machine Learning (ML) approaches often depend on manual feature engineering and fail to capture the complex spatial and temporal dependencies that are essential for detecting subtle, polymorphic threats. In this regard, the present work proposes a lightweight hybrid Deep Learning (DL) architecture for reliable DDoS detection. The proposed approach integrates spatial feature extraction using a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) network to capture temporal correlations, further enhanced by an additive attention mechanism that highlights the flow segments most relevant to recognition. To mitigate computational complexity, a two-phase hybrid feature selection approach combining Information Gain (IG) and Dynamic Particle Swarm Optimization (PSO) is utilized to select an optimal subset of features. The performance of the model was evaluated using the CICDDoS2019 benchmark dataset. The feature selection process reduced the input space from 80 to 17 relevant features. The combined CNN-BiLSTM model, along with threshold optimization, achieved an accuracy of 94.1%, which indicates a significant improvement in the reduction of false negatives and validates the effectiveness of the proposed method for securing IoT environments.
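The first, filter-based stage of the feature selection described above can be sketched with scikit-learn's mutual-information estimator (information gain against the class label); the arrays and candidate-pool size below are placeholders, and the Dynamic PSO refinement stage is omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Hypothetical stand-ins for the flow features and benign/DDoS labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 80))          # 80 raw flow features per sample
y = rng.integers(0, 2, size=1000)        # 0 = benign, 1 = DDoS

# Stage 1: rank features by information gain and keep a candidate pool;
# a PSO search would then refine this pool toward the final ~17 features.
ig = mutual_info_classif(X, y, random_state=0)
candidate_idx = np.argsort(ig)[::-1][:40]
X_candidates = X[:, candidate_idx]
```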
Authors - Wani Zahidah Mohd Subari, Shuzlina Abdul-Rahman, Mohamad Faizal Ab Jabal, Sharifalillah Nordin Abstract - Role-playing games (RPGs) allow the player to take on a specific role and complete different missions during gameplay. Their diversity enables a range of applications beyond entertainment, as they are often used in educational contexts. Learning content can be embedded in common components, such as game fields, tasks, objects, or non-playing characters (NPCs). The paper presents several educational RPGs with their features and characteristics, and existing models of didactic video games. It proposes a two-level metamodel for describing an educational RPG. The metamodel is divided into five main components (world, educational aspects, quest, playing character, and NPCs), and their taxonomies are presented briefly. The authors propose a conceptual model that includes the interrelationships among the components mentioned. In addition, their interpretations and significance for the development of RPG educational games are explained. An example of the metamodel is represented through a quest from a real educational RPG in the field of Chemistry. The presented RPG metamodel improves understanding and helps to better design, develop, and integrate such games into various learning environments. The presented taxonomy can serve as a useful template for structuring design details.
Authors - Abhishek Chaudhari, Mahalakshmi Bodireddy, Aditya Bhor, Onkar Dadas, Prajakta Shinkar, Chinmay Chougule Abstract - Growing mental health challenges around the globe call for scalable, accessible, and safety-conscious digital interventions. This paper describes an AI-based mental health support platform that combines conversational intelligence, multi-therapeutic persona modeling, structured mood analytics, proactive crisis identification, multilingual interaction, and voice-based access in a secure full-stack design. The system, which runs on the Google Gemini AI, provides context-sensitive therapeutic dialogue and performs four-dimensional mood analysis across anxiety, stress, depression, and wellbeing, enabling longitudinal assessment through interactive dashboards and automated reporting. A safety-first crisis override system provides validated emergency capacity in high-risk situations. The platform also includes multilingual voice feedback to include visually impaired users and non-English-speaking communities in digital mental health care. By integrating therapeutic diversity, structured analytics, accessibility features, and proactive safety controls into a single framework, the proposed system challenges the prevalent perception that AI applications cannot be both responsible and scalable.
Authors - Anvar Saidmakhmudovich Usmanov, Mikhail Borisovich Khamidulin, Shakhlo Rustamovna Abdullaeva, Fazilat Dzhamoliddinovna Akhmedova, Shoh-Jakhon Khamdamov Abstract - This paper presents a data-driven forecasting and anomaly detection dashboard for live births in Surigao del Norte, utilizing Family Health Service Information System (FHSIS) data from 2021 onwards. The research methodology is based on the CRISP-DM framework, with business understanding of the needs of maternal services planning in the province and municipalities, data preparation for municipalities by quarter, time-aware modeling, evaluation, and deployment through an API and visualization layer. The research employs several machine learning techniques for forecasting, such as ARIMA/SARIMA, Exponential Smoothing (ETS and Holt-Winters), and the Prophet method, along with a naïve baseline. Model performance is evaluated through the symmetric Mean Absolute Percentage Error (sMAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Scaled Error (MASE). Strict deployment criteria are also implemented: sufficient historical data (at least 12 data points), adequate accuracy (sMAPE < 20%), and performance better than the naïve method (MASE < 1). A low-confidence filter is also applied to series with intermittent data to prevent incorrect results. The results show high reliability of the forecasting model for the entire province and better interpretability for strategic planning. However, the results also show that some municipalities with low population volumes and intermittent data points pose a challenge to the operation of the model.
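The two deployment gates quoted above (sMAPE < 20%, MASE < 1) can be computed as in the minimal sketch below, which assumes quarterly seasonality (m = 4); this is an illustration of the standard metric definitions, not the authors' implementation.

```python
import numpy as np

def smape(actual, forecast):
    # Symmetric MAPE in percent; the epsilon guards against divide-by-zero
    # for quarters with zero reported births.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(a) + np.abs(f)) / 2 + 1e-9
    return 100 * np.mean(np.abs(a - f) / denom)

def mase(actual, forecast, train, m=4):
    # Scales the out-of-sample MAE by the in-sample seasonal-naive error
    # (m = 4 for quarterly series); values below 1 beat the naive benchmark.
    a, f, tr = (np.asarray(x, float) for x in (actual, forecast, train))
    naive_mae = np.mean(np.abs(tr[m:] - tr[:-m]))
    return np.mean(np.abs(a - f)) / naive_mae
```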
Authors - Michele Della Ventura Abstract - Feature representations that are high-dimensional and redundant often prove to be significant constraints on object detection performance. In this study, we present the first hybrid metaheuristic feature selection framework that combines the enhanced grey wolf optimizer (EGWO) and the firefly algorithm (FA) with a deep learning-based detection pipeline. The proposed EGWO-EFA method for identifying useful and compact feature subsets has been shown to reduce dimensionality by over 99.99% on the Pascal VOC and Brain Tumor M2PBP datasets. The experiments conducted demonstrate that, compared to classical feature selection, this method improves F1-score and precision by an average of 2%. In addition, the overall pipeline execution time is considerably shorter. These results show that hybrid metaheuristic optimization is an effective approach to scalable and efficient object detection for high-dimensional feature representations.
Authors - Roshna Dhakal, Khanista Namee Abstract - Modern railway systems increasingly rely on digital technologies such as Communication-Based Train Control (CBTC), the European Train Control System (ETCS), and Supervisory Control and Data Acquisition (SCADA) systems, raising significant cybersecurity challenges. Attacks have increased by 220% over five years, shifting from opportunistic ransomware to sophisticated targeted threats. This paper provides an overview of railway cybersecurity, surveying ICT architectures, cyber threat models, and AI-based defense approaches. In the surveyed incidents, 75% of cases employed Distributed Denial of Service (DDoS) tactics, while ransomware affected 54% of OT environments. We present a comparative taxonomy of Artificial Intelligence and Machine Learning approaches, including methods based on supervised learning, unsupervised learning, and advanced deep learning practices, with detection accuracy as high as 97.46%. However, several challenges remain: few publicly available datasets, a lack of validation in real-world scenarios, demands for explainability from AI systems, and concerns about adversarial robustness. We discuss eight research gaps and future directions focusing on federated learning, digital twin development, multimodal AI fusion, and safety-security co-engineering frameworks.
Authors - Bhonsle Rashmi Ravindra, Shankar Chaudhary, Shivoham Singh, Hemant Kothari, Raj Kothari Abstract - Urban metro rail systems are key to sustainable urban mobility; however, despite mature technologies, projects regularly experience delays and contractual disputes. Prior scholarship largely attributes these challenges to the execution phase, and limited attention has been given to the institutional circumstances that shape system performance in ICT-intensive infrastructure. This paper examines procurement strategy as a governance tool that affects the outcomes of digital system integration and sustainability in Indian metro rail projects. Based on statutory performance audit reports and comparative case studies, the analysis indicates that fragmented procurement arrangements split integration functions across several contracts, leading to coordination failure, delayed commissioning, and high claims. In contrast, more coordinated procurement models that consolidate interdependent systems and define integration roles produce better coordination structures and more predictable delivery. The results indicate that metro project integration is more an institutional than a technological problem. This study adds to the body of knowledge on infrastructure governance by identifying procurement design as one of the determinants of effective and sustainable urban transit outcomes.
Authors - Irmawan Rahyadi Abstract - This research investigates the digital footprint of mental health information as it circulates on YouTube. Using a qualitative content analysis approach, the study examines 100 selected videos in conjunction with social media analytics to identify recurring patterns in the dissemination of mental health discourse. The findings reveal a mix of misleading or incomplete claims, educational resources, personal narratives, and recovery-oriented content, illustrating how mental health discussions shape and amplify user perspectives at both broad (macro) and specific (micro) levels within the evolving field of e-health. To interpret these dynamics, the analysis applies Gibson's theory of transactional affordances, which illuminates key themes of risk, relevance, lived experience, credibility, and social support. By situating these themes within the broader context of video-sharing platforms, the study underscores the importance of YouTube as a platform for mental health communication and its role in broader public conversations about health in the digital age. Future research should investigate mental health discourse among other social media users.
Authors - Ahir Jaimi, Niyati Patel, Nirav Bhatt Abstract - This research studied the economic impact and perceptions of air pollution, particularly PM2.5, in Chiang Mai Province, Thailand, using the Multiple Indicators Multiple Causes model (MIMIC model) and Mixed Data Sampling Regression (MIDAS model). The MIMIC model analyzed data from questionnaires administered to 507 respondents and examined factors influencing public perception of hotspots and PM2.5. The MIDAS model analyzed the impact of monthly PM2.5 levels and monthly hotspot counts on quarterly Gross Provincial Product (GPP), using data from 2019 to 2023. The MIMIC model analysis revealed that perception of burning or activities causing hotspots was the most influential factor in determining public perception of the impact of PM2.5. The effectiveness of government efforts to address the pollution problem had a negative correlation, while demographic and socioeconomic characteristics showed no statistically significant impact. This indicates that public perception is more influenced by received information or education than by personal characteristics. The MIDAS model highlighted the economic impact of hotspots and air pollution. The analysis results indicate that when hotspots or burning occur, these activities have a statistically significant positive impact on the province's GPP. A 1% increase in hotspots is correlated with an approximately 0.14% increase in quarterly GPP, suggesting that agricultural burning may accompany economic activity and consequently produce a short-term increase in GPP. Conversely, a decrease in PM2.5 concentration in the previous month resulted in an approximately 0.47% decrease in quarterly GPP, demonstrating that the economic costs of air pollution occur with a delayed effect rather than simultaneously. Therefore, this research highlights the important correlation between short-term economic benefits and polluting activities, as well as the delayed economic losses resulting from poor and toxic air quality. This research emphasizes the importance of air quality management, risk communication and support, and economic and environmental policies to address the long-term economic and social impacts of PM2.5 pollution.
Authors - Aditya Nova Putra, Budi Riyanto, Alda Chairani, Sandy Dwiputra Yubianto Abstract - This study examines the determinants of continuance intention in YouTube live streaming consumption among Indonesian Generation Z, focusing on social interaction, entertainment, passing time, and enjoyment. Drawing upon Uses and Gratifications Theory and Computer-Mediated Communication, this research situates live streaming as an interactive digital environment where audiences actively negotiate social and emotional experiences. A quantitative explanatory survey was conducted among 108 Generation Z subscribers of the Windah Basudara YouTube channel, and the data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The findings reveal that social interaction and passing time significantly influence continuance intention, whereas entertainment and enjoyment do not demonstrate significant effects. These results suggest that sustained engagement in live streaming environments is driven more by interactive and habitual gratifications than by purely hedonic motivations. By highlighting the contextual dynamics of Indonesian gaming live streaming, this study extends the application of Uses and Gratifications Theory in synchronous digital media settings and offers practical implications for content creators seeking to strengthen audience retention strategies.
Authors - Steveen Eduardo Pinzon Morales, Yandry Jose Olarte Sancan, Marely del Rosario Cruz Felipe, Maricela Pinargote-Ortega Abstract - The past decade has witnessed a growing impact from the application and implementation of green computing, which focuses primarily on protecting the natural environment. Within the scope of this comprehensive assessment of the relevant literature, the most recent advancements in energy-efficient software design, sustainable hardware design, and improved algorithms are examined and compiled. A wide range of enterprises use cloud computing for its adaptability, reliability, speed, and cost-effectiveness, and its proliferation is shifting the manner in which we network. These technologies are aimed mainly at environmental protection: reducing emissions of harmful gases and substances, adopting renewable energy, and thereby preserving the planet for future generations. The article seeks to clarify how green computing can be implemented in support of sustainable development.
Authors - Ain Geuel E. Escober, Rosicar E. Escober, Demelyn E. Monzon Abstract - This study presents the development of StewardFM, an information management system designed to evaluate the effectiveness of the Deflate compression algorithm in optimizing storage for associations and small organizations with limited cloud VPS resources. By integrating membership, event, collection, and budget management into one platform, StewardFM reduces storage overhead while maintaining essential functionality, offering a cost-efficient and scalable solution for resource-constrained organizational environments.
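A minimal sketch of the storage optimization StewardFM evaluates, using Python's zlib binding to the Deflate algorithm (zlib wraps the raw Deflate stream in a small container format); the record below is invented for illustration.

```python
import zlib

record = b'{"member_id": 1042, "event": "general-assembly", "dues_paid": true}'

# Deflate at maximum compression; the level trades CPU time for the
# storage savings that matter on a small cloud VPS.
compressed = zlib.compress(record, level=9)
restored = zlib.decompress(compressed)

assert restored == record
print(f"{len(record)} bytes -> {len(compressed)} bytes")
```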
Authors - Shabnam Praveen, Shubham Kumar, Tulika Roy, Sanskriti Sahu, Subhangi Raj, Ranjita Kumari Dash Abstract - The implementation and design of a covert communication channel that embeds hidden information within TCP/IP packet headers, rather than within the packet payload, is presented. Unlike traditional steganography, which typically embeds data in multimedia files, this approach uses header fields that are unused or can be safely modified, so that ordinary TCP/IP packets carry hidden data. The fields used are the IP Identification Field, TCP Sequence Number, TCP Acknowledgment Number, and TCP Window Size. The sender module encodes and generates packets; the receiver captures packets, extracts the encoded bits, and reassembles the data. Data integrity is verified using a SHA-256 checksum, and packet loss is reported. The absence of a payload further enhances stealth, since it circumvents conventional intrusion detection techniques that primarily examine payload data. The project demonstrates how such covert channels can be used to implement covert communication systems for different file types, and it highlights the security and educational value of covert channel research in network security.
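A minimal sketch of one of the four carrier fields, assuming Scapy and raw-socket privileges: it hides two bytes per packet in the IP Identification field, with the destination address and cover values invented. The paper's full scheme also uses the TCP sequence, acknowledgment, and window fields, which are not reproduced here.

```python
import hashlib
from scapy.all import IP, TCP, send  # requires scapy and root privileges

secret = b"meet at dawn"
digest = hashlib.sha256(secret).hexdigest()  # integrity check for the receiver

# Carry the payload 16 bits at a time in the IP Identification field;
# the TCP fields below are ordinary-looking cover values.
chunks = [secret[i:i + 2].ljust(2, b"\x00") for i in range(0, len(secret), 2)]
for seq_no, chunk in enumerate(chunks):
    pkt = IP(dst="192.0.2.10", id=int.from_bytes(chunk, "big")) / \
          TCP(dport=80, seq=seq_no)  # sequence number doubles as chunk index
    send(pkt, verbose=False)        # no payload attached, only headers
```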
Authors - Prajakta Shinkar, Madhuri Suryavanshi, Sakshi Satav, Mahima Thakre, Saisha Chaudhary Abstract - The contemporary academic and professional world requires intelligent productivity systems that go beyond traditional task managers. This paper introduces an AI-powered personal productivity assistant that combines generative AI, dynamic scheduling, behavioral analytics, and gamification in a mobile-first design. The system is built on a Flutter frontend and FastAPI backend with a hybrid AI architecture for conversational task creation and context understanding. A burnout detection module analyzes workload trends, overdue tasks, and completion patterns to provide early risk alerts. A smart scheduling component proactively plans each day using a priority-based model, conflict resolution, and Pomodoro-based segmentation. The proposed system combines conversational AI, predictive analytics, and motivational reinforcement to increase productivity, decrease cognitive load, and help avoid burnout in task management.
Authors - Ashvini Jadhav, Pankaj Chandre Abstract - The contemporary academic and professional world requires intelligent productivity systems that go beyond traditional task managers. This paper introduces an AI-powered personal productivity assistant that combines generative AI, dynamic scheduling, behavioral analytics, and gamification in a mobile-first design. The system is built on a Flutter frontend and FastAPI backend with a hybrid AI architecture for conversational task creation and context understanding. A burnout detection module analyzes workload trends, overdue tasks, and completion patterns to provide early risk alerts. A smart scheduling component proactively plans each day using a priority-based model, conflict resolution, and Pomodoro-based segmentation. The proposed system combines conversational AI, predictive analytics, and motivational reinforcement to increase productivity, decrease cognitive load, and help avoid burnout in task management.
Authors - NamUook Kim, Gihwan Bong, Yoon Seok Chang Abstract - The heterogeneity of data sources makes the design of traditional data warehouses complex and time-consuming. Indeed, the data warehouse system must process structured, semi-structured, and unstructured sources. To overcome this challenge, we propose an interactive approach to data warehouse design based on a federated ontology. The ontology serves as a unified conceptual layer that integrates heterogeneous data sources and facilitates the building of the data warehouse. Our approach allows decision-makers to interactively select the subdomain of the federated ontology according to their needs and generate their data warehouse. The generation of the data warehouse in the constellation schema is automated using algorithms. It also ensures the maintenance of the data warehouse to take into account various changes in data and decision-makers' needs. The proposed methodology is summarized through architectures defined at each stage, each addressing a specific challenge. At the ontology construction level, it resolves issues related to data heterogeneity while enabling interoperability among multiple domain ontologies. It also provides a complete scenario for the decision-maker to assist in the full construction of a data warehouse from an ontology. Finally, it facilitates querying the constructed data warehouse using requests expressed by the decision-maker in natural language.
Authors - C Nitheeshwaran, M Saravanan, S Mukesh, K S Anuvarshini Abstract - The present study explores the online privacy concerns of young Indian consumers. Using the segmentation approach popularized by Dr Alan Westin in the U.S., this study identifies the segments within Indian youth. This study is based on a survey conducted on a sample of Indian university students. Hierarchical and non-hierarchical cluster analysis techniques were applied to identify segments within young Indian consumers based on their privacy concerns. The study identified three consumer segments: highly concerned, moderately concerned, and less concerned based on online privacy concerns. The findings also reveal important differences among the three segments in terms of outcome variables such as perceived effectiveness of legal/regulatory policy, fabricating personal information, and software usage for protection. The results indicate an overall increased level of concern for online privacy among young Indian consumers. The results suggest similarities and dissimilarities with Westin’s approach. While previous research on online privacy has been chiefly based on the Western context, this study offers a window to look at the Eastern context by examining the privacy concerns of young Indian consumers, who have not been studied, and hence provides an important contribution to the existing literature.
Authors - Quang-Thinh Bui, Lan T.T. Tran Abstract - The digital transformation of the construction industry has intensified the demand for standardized methods of information exchange. Building Information Modeling (BIM) has become a cornerstone of this transformation, enabling interdisciplinary collaboration and improving data quality. However, recurring challenges such as inconsistent data structures, unclear contractual requirements, and limited interoperability continue to hinder efficient project delivery. To address these issues, the Information Delivery Specification (IDS) was developed within the buildingSMART ecosystem as a computer-interpretable standard for defining and validating information requirements. Officially approved in June 2024, IDS bridges human-readable requirements with machine-interpretable validation rules, positioning itself as both a contractual instrument and a technical validation tool. This study synthesizes insights from official IDS documentation and academic literature to provide a comprehensive evaluation of IDS’s role in the construction sector. The systematic literature review categorizes contributions into five thematic domains: standardization, application scenarios, systematic reviews, country and domain-specific studies, and methodological innovations. Findings highlight IDS’s versatility across diverse applications, including acoustic assessment, accessibility compliance, railway projects, and energy simulation. At the same time, research gaps remain in areas such as national adaptation strategies, automated compliance checking through CI/CD pipelines, and methodological development via linkage with the Level of Information Needs (LoIN). By integrating theoretical perspectives with practical case studies, this research demonstrates how IDS functions as both a technical standard and a methodological framework. The study concludes that IDS has the potential to become a cornerstone of digital construction practices, bridging regulatory requirements with automated validation in BIM workflows.
Authors - Anuja Kelkar, Pradnya Kardile, Aditi Dudhe, Prajakta Chaudhari, Meenal Kamlakar Abstract - In this paper we derive a new estimate of the channel bit rate. The estimate is a special transformation of the main EVT theorem, designed specifically for use in automated telecommunication systems: it is robust to noise, computationally cheap, needs very few data points, and requires no manual validation. The EVT methodology lets us evaluate whether the bit rate can keep dropping indefinitely or has a guaranteed minimum value. The method is relatively fast because it uses Newton's interpolation instead of hypothesis testing or regression.
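The abstract names Newton's interpolation as the computational workhorse; below is a minimal generic sketch of divided-difference interpolation in NumPy. The EVT-specific transformation is not described in the abstract and is not reproduced here.

```python
import numpy as np

def newton_coefficients(x, y):
    # Divided-difference coefficients of Newton's interpolating polynomial,
    # computed in place over the y-values.
    x = np.asarray(x, float)
    coef = np.asarray(y, float).copy()
    n = len(x)
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    # Horner-style evaluation of the Newton form at point t.
    result = coef[-1]
    for c, xn in zip(coef[-2::-1], np.asarray(x_nodes, float)[-2::-1]):
        result = result * (t - xn) + c
    return result

# Tiny usage example with invented sample points.
coef = newton_coefficients([0, 1, 2, 3], [1.0, 2.0, 0.5, 4.0])
print(newton_eval(coef, [0, 1, 2, 3], 1.5))
```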
Authors - Ankur Maurya, Shaurya Oberoi, Madhav Malhotra, Rakesh Chandra Joshi, Garima Aggarwal, Malay Kishore Dutta Abstract - Remote sensing imagery plays an important role in applications such as environmental monitoring, disaster management, urban planning and agricultural analysis. However, the spatial resolution of such imagery is often limited by sensor constraints, revisit frequency and acquisition cost. To address this challenge, this paper presents RCAN-RS, an enhanced Residual Channel Attention Network for remote sensing image super-resolution. The proposed model extends the RCAN framework through three targeted modifications: a dual-pooling channel attention mechanism, a spectral attention module and an edge enhancement module. These components are designed to improve detail reconstruction while preserving inter-channel consistency and sharp structural boundaries in remote sensing imagery. The model was trained and evaluated on the DOTA dataset under a 2× super-resolution setting from 256 × 256 to 512 × 512 pixels. Quantitative evaluation using both conventional image-quality metrics and remote-sensing-oriented measures shows that RCAN-RS achieves a mean PSNR of 34.42 dB, SSIM of 0.9398, Edge Preservation Index of 0.9524, ERGAS of 6.68 and UQI of 0.9846 on the test set. These results demonstrate the effectiveness of integrating attention-guided and edge-aware mechanisms for remote sensing image super-resolution.
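The paper does not give the exact form of its dual-pooling channel attention; the PyTorch sketch below shows a common CBAM-style reading, in which average- and max-pooled channel descriptors share one bottleneck MLP and their sum gates the channels.

```python
import torch
import torch.nn as nn

class DualPoolChannelAttention(nn.Module):
    # CBAM-style channel attention: two pooled descriptors, one shared MLP.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool2d(1)
        self.max = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.mlp(self.avg(x)) + self.mlp(self.max(x)))
        return x * gate  # channel-reweighted feature map, same shape as input

feats = torch.randn(1, 64, 128, 128)        # a feature map inside the network
out = DualPoolChannelAttention(64)(feats)   # shape preserved: (1, 64, 128, 128)
```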
Authors - Inuka Gajanayake, Gagani Kulathilaka, Guhanathan Poravi, Saadh Jawwadh Abstract - The swift growth of digital interfaces has facilitated manipulative design practices called dark patterns, which take advantage of cognitive biases to manipulate users and subvert informed decision-making. Though widespread across e-commerce, social media, and other areas, automated identification and empirical knowledge of user vulnerability are still in their infancy. This work introduces an end-to-end framework integrating a GenAI-augmented browser add-on for real-time detection of dark patterns with systematic estimation of user awareness and behavioral reactions. A new Pattern Vulnerability Index (PVI) measures the threat from individual patterns according to frequency, unawareness among users, and potential damage. Cross-platform analysis identified high-risk patterns like Discount Anchoring, Urgency, and cost-related manipulations to be frequently overlooked by users. Clustering identifies scenarios in which several deceptive patterns occur together, including checkout processes, promotional displays, and subscription pitfalls. The results highlight the moral significance of manipulative interface design and establish the capability of machine-based tools to safeguard users, raise awareness, and guide regulation and design efforts. This study provides a basis for consumer-oriented solutions and future research towards more transparent and ethical online encounters.
Authors - Hiep. L. Thi Abstract - As a core pillar industry in China's economic transformation toward a service-oriented economy, the tourism industry plays an irreplaceable role in boosting domestic demand growth, optimizing regional industrial structures, and advancing high-quality economic development. The Dazu Rock Carvings in Chongqing, holding the dual top-tier qualifications of a World Cultural Heritage site and a National 5A Scenic Area, embody over 1,300 years of historical accumulation. With their unique cultural core of ‘Confucian-Buddhist-Taoist Syncretism’ and top-tier high-relief artistic craftsmanship, they stand as the pinnacle of Chinese stone carving art, boasting remarkable cultural tourism economic value and cultural inheritance value. However, for a long time, the Dazu Rock Carvings have been trapped in the dilemma of ‘high cultural value but low market recognition’—acclaimed but underrecognized in the market. Their visibility enhancement relies excessively on short-term hotspots, lacking a long-term support mechanism. Based on theories of culture-tourism integration, brand communication, and sustainable cultural heritage development, this paper employs literature review, data analysis, case comparison, and field research to accurately identify core pain points. It constructs a scientific and feasible new marketing path from six dimensions: innovative resource transformation, precise audience cultivation, diversified channel expansion, upgraded cross-border linkage, breakthrough international communication, and long-term institutional safeguards. This path aims to help the Dazu Rock Carvings transition from traffic-dependent development to value-driven development and, at the same time, provide practical references for similar cultural heritage scenic spots in China.
Authors - Fredy Gavilanes-Sagnay, Edison Loza-Aguirre, Luis Castillo-Salinas, Narcisa de Jesus Salazar Alvarez Abstract - Ayurveda, India's ancient system of medicine, is full of interconnected knowledge about diseases, their symptoms, herbs, and formulations (compounds). However, texts such as the Charaka Samhita are mostly unstructured and cannot be readily analysed computationally. This work presents AyurKOSH, a machine-readable, high-quality Ayurvedic dataset designed as a Knowledge Graph (KG) to support Artificial-Intelligence-driven research. The dataset is represented as subject–predicate–object triplets, which enables semantic interoperability, graph traversal, and multi-hop inferencing across entities. It follows a schema-driven ontology that standardizes relationships between nodes such as diseases, symptoms, pharmacological attributes, and compound formulations; the database schema ensures consistency and computational tractability. AyurKOSH contains structured data on diseases and related symptoms, drug preparations, and herbs, together with detailed pharmacological properties (Rasa, Guna, Virya, Vipaka, and Karma). The graph exhibits real-world biomedical network characteristics such as high sparsity and low average degree, which makes it suitable for embedding-based learning, graph neural networks, and explainable AI frameworks. Botanical metadata and herb-substitution relationships are also included to support synergy prediction and drug repurposing. The dataset facilitates applications in biomedical NLP, automated reasoning systems, clinical decision assistance, and pedagogy in integrative medicine. AyurKOSH is available for academic and non-commercial research under the CC BY-NC-SA 4.0 license.
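A minimal sketch of the triplet representation using rdflib; the namespace, entities, and relations below are illustrative stand-ins, not the dataset's actual schema.

```python
from rdflib import Graph, Literal, Namespace

AYUR = Namespace("http://example.org/ayurkosh/")  # hypothetical namespace
g = Graph()

# Subject-predicate-object triplets of the kind the dataset standardizes.
g.add((AYUR.Amalaki, AYUR.hasRasa, Literal("Amla")))       # herb -> taste
g.add((AYUR.Amalaki, AYUR.indicatedFor, AYUR.Amlapitta))   # herb -> disease
g.add((AYUR.Amlapitta, AYUR.hasSymptom, Literal("heartburn")))

# Multi-hop traversal: herbs linked to diseases via their indications.
for herb, _, disease in g.triples((None, AYUR.indicatedFor, None)):
    print(herb, "->", disease)
```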
Authors - W M I T Warnasooriya, T D Jayadeera, A M G S Adhikari, M A F Zumra, A J Vidanaralage, M Samaraweera Abstract - The integration of large language models (LLMs) into primary education remains limited in low-resource, diglossic languages like Sinhala. General-purpose models often produce grammatically inconsistent or cognitively overwhelming output for young learners. This paper introduces a grade-adaptive, constraint-driven framework for automated Sinhala story and quiz generation targeting Grades 1-5. Building upon an 8-billion-parameter Sinhala-adapted LLaMA 3 model, we apply Quantized Low-Rank Adaptation (QLoRA) using a curated multi-task educational dataset. The system enforces tier-specific linguistic constraints separating conversational Sinhala for lower grades from formal written Sinhala for upper grades while embedding strict structural rules such as controlled sentence counts (5-6 vs. 7-8) and validated multiple-choice formats (3 vs. 4 options). Evaluation on 100 structured prompts demonstrated substantial improvements over a zero-shot baseline: structural compliance increased from 64% to 93%, and hallucination-related failures decreased from 31% to 8%. Furthermore, evaluation against 50 unseen real-world classroom prompts yielded a 0.0% crash rate and 95% register adherence, confirming robust qualitative performance. Results demonstrate that diglossia-aware dataset engineering and constraint-aware fine-tuning enable reliable, pedagogically aligned deployment of LLMs in low-resource primary learning environments.
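A minimal sketch of a QLoRA setup of the kind described, using Hugging Face transformers and peft; the model path, adapter rank, and target modules are illustrative choices, not the authors' settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE = "path/to/sinhala-llama3-8b"  # placeholder for the Sinhala-adapted base model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb)

lora = LoraConfig(                           # small low-rank adapters are the
    r=16, lora_alpha=32, lora_dropout=0.05,  # only trainable parameters
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # adapters are a tiny fraction of 8B weights
```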
Authors - S. M. Mizanoor Rahman Abstract - Removable USB storage devices are widely used in day-to-day computing, but they also introduce risks such as unauthorized data transfer and misuse of external media. Understanding how these devices are used on a system is important during forensic investigations, especially when analyzing potential data leakage incidents. On Windows systems, traces of USB activity are not stored in a single location. Instead, they are distributed across registry entries, system logs, and file system records. Examining these sources individually often makes it difficult to form a clear picture of events. This paper introduces a forensic framework that brings together USB-related artifacts from multiple system components and analyzes them in a unified manner. The method gathers data from sources such as registry entries, Plug-and-Play logs, and file system structures, and then aligns them based on their timestamps. A Python-based implementation is used to automate this process and to relate device connection events with file operations. Experiments conducted on a Windows setup show that the framework can identify device usage and reconstruct the sequence of related activities with clarity. By combining evidence into a single timeline, the approach helps simplify analysis and supports consistent interpretation of results.
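A minimal sketch of the timestamp-alignment step: records from hypothetical registry, Plug-and-Play, and file-system sources (fields invented) are merged into one chronological timeline.

```python
from datetime import datetime

# Hypothetical records extracted from registry hives, setupapi logs,
# and file-system metadata; only the shared fields matter for merging.
registry = [{"ts": "2024-03-01T09:12:03", "src": "USBSTOR", "event": "device first seen"}]
pnp_log  = [{"ts": "2024-03-01T09:12:05", "src": "setupapi", "event": "driver installed"}]
mft      = [{"ts": "2024-03-01T09:14:41", "src": "filesystem", "event": "report.xlsx copied"}]

# Align all sources on their timestamps to form a single unified timeline.
timeline = sorted(
    registry + pnp_log + mft,
    key=lambda e: datetime.fromisoformat(e["ts"]),
)
for e in timeline:
    print(e["ts"], e["src"].ljust(10), e["event"])
```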
Authors - Shamita Jagarlamudi, Soormayee Joshi, Aman Aditya, Anushka Gangwar, Pratvina Talele Abstract - Federated Learning (FL) is a privacy-preserving, distributed learning framework where models are trained locally on client devices, and only the trained parameters are shared with a central server. Nevertheless, FL encounters substantial obstacles in real-world applications due to data heterogeneity: non-IID distributions lead to local inconsistencies and client drift, diminishing global model efficacy. To tackle these challenges, we propose Federated Prox Drift Correction (FedPDC), an effective and practical method designed to mitigate client drift and local overfitting through drift correction and proximal terms. Comprehensive experiments conducted on public datasets demonstrate that FedPDC performance is superior compared to state-of-the-art methods.
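FedPDC's exact drift-correction rule is not given in the abstract; the PyTorch sketch below shows only the FedProx-style proximal component it builds on, with an illustrative coefficient mu.

```python
import torch

def local_loss(task_loss: torch.Tensor,
               local_model: torch.nn.Module,
               global_params: list[torch.Tensor],
               mu: float = 0.01) -> torch.Tensor:
    # Proximal regularizer: penalize drift of local weights away from the
    # last global model, keeping non-IID clients close to a shared solution.
    prox = sum((w - g.detach()).pow(2).sum()
               for w, g in zip(local_model.parameters(), global_params))
    return task_loss + (mu / 2.0) * prox
```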
Authors - U. A. Walke, G. A. Kulkarni, Pranav Mungankar, Om Kale, Tejas Kadam Abstract - Digitizing damaged historical texts requires multiple processing steps that can propagate semantic noise through the workflow. Efforts have been made to improve the recognition, correction, and normalization steps of the pipeline, but few studies have quantified model-level effects in isolation under a controlled architecture setup. Here we present Probanza, an extensible staged evaluation framework that decouples preprocessing normalization from semantic modeling to facilitate clean comparisons between LLMs. We perform super-resolution, contextual correction, and historical normalization before English translation. We selected 30 degraded pages from the Florentine Codex and digitized them with three LLM configurations: GPT-5, GPT-4o, and Gemini 3 Flash. Cosine similarity was computed between model predictions and archival baseline translations to measure semantic accuracy. A one-way repeated-measures ANOVA was conducted to examine differences across configurations. The analysis revealed a significant main effect of LLM configuration. Gemini 3 Flash produced the highest mean similarity (M = .881, SD = .075), while GPT-5 (M = .783, SD = .147) and GPT-4o (M = .769, SD = .135) were not significantly different from one another. Our results demonstrate that significant differences exist between LLM configurations for the task of digitizing damaged historical texts when preprocessing is held constant. Probanza thus enables isolated, model-level comparisons in LLM-based historical digitization workflows.
Authors - Kushall Pal Singh, Vijay Kumar, Monu Verma, Dinesh Kumar Tyagi, Santosh Kumar Vipparthi Abstract - Hybrid enterprise environments spanning on-premises systems and public cloud services increase exposure to credential abuse, lateral movement, and misconfiguration-driven attack paths, motivating continuous verification and policy enforcement beyond perimeter assumptions. This paper presents an Azure-native, AI-enhanced Zero Trust framework that integrates identity-first enforcement (Microsoft Entra Conditional Access, Continuous Access Evaluation, and Privileged Identity Management), telemetry centralization (Microsoft Sentinel with UEBA), and an Azure Machine Learning classifier that outputs a probability-derived 0–100 trust score. Because identity policy engines consume bounded native signals, the framework binds external scoring to enforcement using SOAR automation that updates policy-targeted identity group membership via Microsoft Graph. A controlled A/B evaluation compares a static baseline (non-adaptive enforcement) with an adaptive mode (ML-in-the-loop scoring and automated score-to-policy binding) using MITRE ATT&CK-aligned scenarios: impossible travel sign-in, privilege escalation attempts via privileged activation workflows, and lateral movement via remote access/filesharing pathways. Quantitative outcomes are reported using median (P50) and tail (P95) time-to-detect, decision latency, and false-positive rate. To technically validate the adaptive control loop, the paper also reports an instrumented latency decomposition (trigger delay, playbook runtime, ML scoring call duration, and score-to-policy execution time) to show which components dominate end-to-end delay.
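A minimal sketch of the score-to-policy binding described above, assuming an already-acquired Graph token and placeholder object IDs: the external 0-100 trust score drives membership of a group that a Conditional Access policy targets, via Microsoft Graph's standard add-member and remove-member calls. The threshold is illustrative, not from the paper.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."   # app token with GroupMember.ReadWrite.All (placeholder)
GROUP = "..."   # object id of the policy-targeted group (placeholder)
USER  = "..."   # object id of the scored user (placeholder)

def enforce(trust_score: float, threshold: float = 40.0) -> None:
    """Bind an external trust score to identity policy by adjusting
    membership of a Conditional-Access-targeted group."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    if trust_score < threshold:
        # Standard Graph 'add member' call: POST /groups/{id}/members/$ref
        requests.post(
            f"{GRAPH}/groups/{GROUP}/members/$ref",
            headers=headers,
            json={"@odata.id": f"{GRAPH}/directoryObjects/{USER}"},
            timeout=10,
        ).raise_for_status()
    else:
        # Remove the user from the restricted group once the score recovers.
        requests.delete(
            f"{GRAPH}/groups/{GROUP}/members/{USER}/$ref",
            headers=headers, timeout=10,
        ).raise_for_status()
```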
Authors - Karuppasamy E, Krithika V, Harish P, Pravinbaalaa V, Satheeskumar Abstract - Large volumes of online data contain duplicated and plagiarized content. Artificial Intelligence has made data generation very easy, but the process may lack ethical safeguards; hence there is a need to validate that data is plagiarism-free for authentic usage. In this research work, the authors focus on word-level plagiarism detection methods in Natural Language Processing. The proposed method uses a comparative analysis of cosine similarity, Euclidean distance, and Manhattan distance for word-level plagiarism detection across different n-gram sizes. Incorporating larger n-gram sizes improved accuracy compared to unigram-based methods. The experimental results show that the cosine similarity method outperforms the Euclidean and Manhattan distance methods, achieving average accuracy ranges of 88% to 92% and 75% to 80% for direct plagiarism and lightly paraphrased text respectively. Future work will address the identification of reused images and visual content.
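A minimal sketch of the three measures over word-level n-gram count vectors with scikit-learn; the sentence pair and bigram setting are illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import (cosine_similarity,
                                      euclidean_distances,
                                      manhattan_distances)

original  = "machine learning models detect plagiarism in documents"
suspected = "plagiarism in documents is detected by machine learning models"

# Word-level n-grams (here bigrams); larger n tightens the match criterion.
vec = CountVectorizer(analyzer="word", ngram_range=(2, 2))
X = vec.fit_transform([original, suspected])

print("cosine   :", cosine_similarity(X[0], X[1])[0, 0])   # similarity (higher = closer)
print("euclidean:", euclidean_distances(X[0], X[1])[0, 0]) # distance (lower = closer)
print("manhattan:", manhattan_distances(X[0], X[1])[0, 0])
```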
Authors - Nagaraj.M, V. Balamurugan, Matam Veera Chandra Kundan, M.J. Mathesh, V. Vijairam Abstract - Academic credential fraud is a global issue that undermines institutional trust. Although blockchain solutions provide immutability, they are generally reactive, securing documents only after potential errors or fraud have already occurred. This paper proposes a proactive approach to prevent inconsistencies before degree issuance. We introduce a hybrid model that integrates Digital Twins as a preventive validation layer and Multichain as an immutable ledger. The Digital Twin operates as a virtual sensor during the degree creation process at Universidad El Bosque, simulating and validating academic, financial, and national exam data (Saber Pro) in real time; if inconsistencies are detected, “red flags” are triggered prior to issuance. Once validated, the degree’s hash is anchored to a Multichain network. A functional prototype developed in Python achieved a 100% detection rate of inconsistent records during testing. The proposed model transforms the academic certification process into a proactive, secure, and trustworthy ecosystem by combining preventive validation with blockchain immutability.
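A minimal sketch of the anchoring step: a canonical SHA-256 digest of a validated degree record (fields invented) is what would be written to the Multichain ledger, for example through the node's JSON-RPC publish call once the Digital Twin raises no red flags.

```python
import hashlib
import json

degree = {
    "student_id": "20251234",           # illustrative record fields only
    "program": "Systems Engineering",
    "saber_pro": 178,
    "financial_clearance": True,
}

# Canonical serialization so the same record always yields the same hash.
payload = json.dumps(degree, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(payload).hexdigest()

# The digest, not the record itself, is anchored on-chain; verification
# later recomputes the hash and compares it with the stored value.
print(digest)
```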
Authors - S. M. Mizanoor Rahman Abstract - Driver fatigue is a major cause of road accidents and raises serious safety issues for drivers and passengers alike. Real-time detection of driver fatigue can help avert accidents by warning the driver about impending lapses in attention. This paper proposes a real-time automated system for detecting driver fatigue by observing eye blinks and yawns, two major indicators of fatigue. The system uses a combination of deep learning models that achieve high accuracy in detecting a drowsy driver. Eye blinks are detected with a state-of-the-art object detection model trained to locate the open and closed states of the eyes using accurate coordinate mapping, giving an accuracy of 96 percent. Yawning is detected using a combination of CNN and LSTM models that analyze both spatial and temporal information from video, giving an accuracy of 98 percent. Both modules work on real-time camera inputs, enabling constant monitoring of driver alertness. Whenever the driver is found dozing off due to excessive blinking or yawning, the system issues a real-time auditory warning to caution the driver. Experimental results confirm that the combined system operates reliably with low-latency responses in real time. The study shows that a hybrid detection strategy combining spatial and temporal analysis is effective in detecting a drowsy driver on the road, and that such a system can help increase road safety.
Authors - Kaniska D, Shreya J V, Srinidhi K, Sudhakar K S, Bagavathi Sivakumar P, Krishna Priya G Abstract - Language modeling of clinical text in healthcare requires strong contextual grounding along with a high level of security for sensitive patient information. Several large language models have shown strong clinical performance in documentation and summarization and have been released freely; however, these models can generate hallucinated or non-verifiable outputs. Retrieval-augmented approaches address this problem by constraining answers to the retrieved evidence, yet the majority of existing systems rely on textual records only, without systematic integration of diagnostic imaging. In this paper, we put forward a retrieval-grounded multimodal clinical modeling framework that unifies structured clinical text with imaging-derived contextual features. A patient-specific vector indexing approach is used for isolated retrieval, and a modality-aware visual analytics approach turns imaging outputs into structured signals that ground language generation. The entire framework runs fully offline, supporting privacy-preserving deployment in resource-limited clinical settings. Experimental results show consistent multimodal integration as well as semantic alignment between the retrieved evidence and the generated output.
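A minimal sketch of patient-specific, fully offline retrieval with FAISS; the embedding width, corpus, and query below are placeholders, and the imaging-to-signal stage is not shown.

```python
import numpy as np
import faiss  # offline similarity search; no network access required

d = 384                                             # embedding width (illustrative)
notes = np.random.rand(200, d).astype("float32")    # one patient's note embeddings
faiss.normalize_L2(notes)                           # cosine similarity via inner product

index = faiss.IndexFlatIP(d)    # one index per patient keeps retrieval isolated
index.add(notes)

query = np.random.rand(1, d).astype("float32")      # embedded clinical question
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)                # top evidence to ground generation
print(ids[0], scores[0])
```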
Authors - Md Mahmudul Hoque, Md Kawser Islam, Md. Mamunur Rahman Moon, Abdullah Rakib Akand, Md. Hadi Al-amin, H.M. Azrof Abstract - The automatic recognition of virus particles in transmission electron microscopy (TEM) images remains a demanding task, primarily owing to strong inter-class similarity, scale variability, and pronounced class imbalance. In this study, several convolutional neural networks and transformer-based architectures were comparatively evaluated for the classification of 22 virus categories using the TEM virus dataset. All models were trained under identical preprocessing and optimization conditions, and imbalance effects were mitigated through a weighted cross-entropy formulation. Performance was quantified using overall accuracy together with macro-averaged precision, recall, and F1 score. Among standalone models, the Swin Transformer achieved the highest accuracy (0.8831) and macro-F1 score (0.8444), followed by DeiT (accuracy 0.8669). Convolutional architectures exhibited comparatively lower balanced performance, with ResNet50 demonstrating substantial degradation (accuracy 0.5887) under imbalanced conditions. To exploit complementary representational properties, decision-level hybrid strategies were implemented. The performance-weighted hybrid attained an accuracy of 0.8831 and the highest macro-F1 score (0.8528), slightly surpassing the equal-weight hybrid configuration. These observations indicate that architectural heterogeneity contributes to improved inter-class balance without sacrificing overall predictive accuracy. Future work may explore scale-aware representations, feature-level fusion mechanisms, and expanded TEM datasets to further enhance robustness and generalization in virus identification tasks.
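A minimal PyTorch sketch of the two mechanisms named above: a class-weighted cross-entropy loss for imbalance, and performance-weighted averaging of softmax outputs for the decision-level hybrid. The class counts and the DeiT fusion weight are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

# Inverse-frequency class weights over the 22 categories (counts invented).
counts = torch.tensor([820., 120., 45.] + [300.] * 19)
weights = counts.sum() / (len(counts) * counts)
criterion = torch.nn.CrossEntropyLoss(weight=weights)

def fuse(logits_swin, logits_deit, f1_swin=0.8444, f1_deit=0.83):
    # Decision-level hybrid: average the softmax outputs, weighted by each
    # model's validation macro-F1 (the DeiT value here is illustrative).
    w = torch.tensor([f1_swin, f1_deit])
    w = w / w.sum()
    probs = w[0] * F.softmax(logits_swin, -1) + w[1] * F.softmax(logits_deit, -1)
    return probs.argmax(-1)  # fused class prediction per sample
```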
Authors - SunilKumar Ketineni, Preethi Kandukuri, Hruthik Sreeramaneni, Vivek Bojjagani Abstract - Phishing continues to pose a serious threat to digital security by exploiting human vulnerabilities to steal confidential data through deceptive online interactions. Traditional detection methods often fall short in identifying advanced phishing strategies. This survey presents a comprehensive overview of phishing detection techniques, with a strong focus on modern, multi-layered machine learning and deep learning-based solutions. The proposed layered framework includes four key stages: data collection and preparation, model training, detection and prediction, and explainability. In the first layer, email, URL, and metadata are collected and preprocessed for feature extraction. The second layer involves model training using both machine learning classifiers such as Random Forest, SVM, Naïve Bayes, and KNN and deep learning architectures like CNN, RNN, and LSTM. These models feed into the third layer where phishing is detected and classified. Finally, the fourth layer integrates Explainable AI (XAI) methods like LIME, SHAP, and Anchors to enhance model transparency and interpretability. This survey evaluates the effectiveness and limitations of each layer and highlights the need for explainable, scalable, and adaptive phishing detection systems.
Authors - K.Poorani, K Karan, R Seenivasan, V Ramkumar Abstract - Older email detection technologies have struggled to accurately identify malicious emails in the face of the latest techniques attackers use to compromise victims. While modern solutions perform well in detecting malicious emails, they are not completely foolproof. As a result, malicious emails can still reach a user’s mailbox, necessitating measures to reduce potential harm. This study suggests transforming the decision-making processes of recent algorithms into a white-box model, enabling transparency in decision-making through Explainable AI. This is achieved by having the proposed model compute confidence level scores for each email, which users can use to exercise caution if a malicious email slips into their inbox.
Authors - Nazura Javed, Rida Javed Kutty, Muralidhara B L Abstract - The increasing availability of online information has made it easier to access diverse sources, but it has also introduced challenges in verifying the reliability and consistency of content. Conflicting statements across different sources often contribute to misinformation and make it difficult to establish factual accuracy. This study focuses on the problem of cross-document contradiction and inconsistency detection as a step toward improving fact verification in textual data. A two-stage pipeline is proposed in which semantically related sentence pairs are first retrieved from documents discussing the same event and then analyzed using Natural Language Inference (NLI) techniques to determine whether they express contradictory information. In contrast to conventional sentence-level contradiction detection, the proposed approach emphasizes document-level comparison to identify inconsistencies across independent sources. Two pre-trained transformer models, DistilBERT (DistilBERT-base-uncased) and RoBERTa (RoBERTa-base), are used for contradiction classification. The approach is evaluated on the SNLI dataset and the PHEME Rumor Dataset, which are widely used benchmarks for NLI and misinformation research. Experimental results show accuracies of 94.50% (F1-score 94.50%) on SNLI and 92.39% (F1-score 92.31%) on PHEME, indicating that the proposed framework is effective in identifying contradictions and supporting cross-document fact validation.
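A hedged sketch of such a two-stage pipeline: cross-document sentence pairs are first filtered by embedding similarity, then passed to an NLI classifier. The retriever and NLI checkpoints below (all-MiniLM-L6-v2, roberta-large-mnli) and the 0.4 similarity threshold are stand-in assumptions, not necessarily the paper's choices.

```python
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoTokenizer, AutoModelForSequenceClassification

retriever = SentenceTransformer("all-MiniLM-L6-v2")
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

doc_a = ["The fire started at 9 pm.", "Two people were injured."]
doc_b = ["Officials said the fire began in the morning."]

# Stage 1: retain cross-document pairs above a similarity threshold.
emb_a, emb_b = retriever.encode(doc_a), retriever.encode(doc_b)
pairs = [(i, j) for i in range(len(doc_a)) for j in range(len(doc_b))
         if util.cos_sim(emb_a[i], emb_b[j]).item() > 0.4]

# Stage 2: NLI classification; for roberta-large-mnli the label ids are
# 0 = contradiction, 1 = neutral, 2 = entailment.
for i, j in pairs:
    inputs = tok(doc_a[i], doc_b[j], return_tensors="pt")
    with torch.no_grad():
        label = nli(**inputs).logits.argmax(-1).item()
    if label == 0:
        print("Contradiction:", doc_a[i], "<->", doc_b[j])
```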
Authors - B.Purnachandra Rao, Gaurang Jinka Abstract - Distributed systems rely on data replication across multiple nodes to ensure high availability, fault tolerance, and scalability. While replication improves system reliability, it also introduces temporary inconsistencies between primary and replica nodes during data propagation. This phenomenon, commonly referred to as consistency drift, occurs when distributed nodes maintain slightly different states before synchronization is completed. As distributed infrastructures grow in scale and complexity, consistency drift becomes increasingly significant due to network latency, workload variability, and communication overhead between nodes. Traditional synchronization mechanisms typically rely on static replication intervals or fixed update propagation strategies that do not adapt effectively to dynamic system conditions. Such approaches may allow drift to accumulate before synchronization occurs, resulting in delayed consistency and inefficient resource utilization. Managing consistency drift therefore becomes a critical challenge in distributed computing environments where maintaining accurate and synchronized data states is essential. This research addresses the problem of consistency drift in distributed systems by examining the factors that contribute to state divergence among nodes and exploring mechanisms for dynamic drift management. The proposed framework focuses on monitoring system behavior, including workload intensity, network latency, and node communication patterns, to regulate synchronization behavior more effectively. By enabling adaptive synchronization strategies that respond to real-time system conditions, the framework aims to reduce drift accumulation and improve overall data consistency across distributed clusters. Effective management of consistency drift ultimately enhances system reliability, operational stability, and performance in modern distributed computing platforms operating under dynamic workloads.
Authors - Suganya Moorthy, Jayakumar Kaliappan Abstract - Internet of Things (IoT) networks have grown rapidly, substantially increasing the attack surface available to cyber attacks. The severely limited computational resources, heterogeneous architectures, and incomplete or decentralized communications of IoT environments make them highly susceptible to intrusion attacks, including Distributed Denial of Service (DDoS), spoofing, botnets, and data exfiltration. Older signature-based intrusion detection systems (IDS) are not effective at detecting zero-day and dynamic threats. This paper presents a new machine learning-based intrusion detection system developed specifically for IoT networks. The proposed design combines feature search, feature detection, and an ensemble classification model to increase detection accuracy while reducing computational cost. Experimental evaluations on benchmark IoT intrusion datasets show the proposed system to be more effective than traditional IDS frameworks in detection accuracy, false positive rate, and scalability. Practical constraints are addressed, including the computational overhead of resource-constrained IoT devices, dataset imbalance, and model interpretability. Directions for future research include lightweight federated learning systems, explainable AI integration, and real-time adaptive threat intelligence systems to build greater resilience into IoT security.
Authors - Konstantina Karathanasopoulou, Ioannis Vondikakis, Dimitris Georgiadis, George Dimitrakopoulos Abstract - Digital signatures are fundamental public-key cryptographic primitives used for message authentication and integrity. A message’s recipient must be able to validate that it comes from the reported sender and has not been altered by anyone else. Pairing-based cryptography provides elegant and efficient mechanisms for constructing compact digital signature schemes. Inspired by isogeny structures on elliptic curves, we present a pairing-based digital signature system in this study. Our construction targets classical security settings and is analyzed under standard computational hardness assumptions related to bilinear groups and isogeny-based mappings. We demonstrate that the proposed approach attains “existential unforgeability under adaptive chosen-message attacks (UF-CMA)” within the random oracle model and address the construction’s soundness and security. Moreover, the scheme offers compact public key and signature sizes, making it suitable for lightweight cryptographic applications.
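For orientation, the classic BLS construction shows the general shape of a compact pairing-based signature; this is a textbook sketch, not the paper's isogeny-inspired instantiation, whose groups and mappings will differ.

```latex
% Setup: bilinear map e : G_1 \times G_2 \to G_T, generator g \in G_2,
% hash H : \{0,1\}^* \to G_1.
\mathrm{KeyGen:}\quad sk = x \in_R \mathbb{Z}_q, \qquad pk = g^{x} \\
\mathrm{Sign:}\quad \sigma = H(m)^{x} \\
\mathrm{Verify:}\quad \text{accept iff } e(\sigma, g) = e\left(H(m), pk\right)
% Correctness: e(H(m)^x, g) = e(H(m), g)^x = e(H(m), g^x) = e(H(m), pk).
```

The compactness claim follows the same pattern: a signature is a single element of G_1.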
Authors - Nirmaladevi J, Kanishka R, Kirthiga B, Lathikasri T R, Ranjani Shree R S Abstract - The vast implementation of cloud computing has uplifted modern IT practices by improving scalability, flexibility, and budget efficiency. In contrast, there has been an increase in energy consumption, which results in carbon emissions. This happens because of overusage, overconsumption, overprovisioning, unused capacity, and inefficient data center management. Data centers are now a major contributor to global greenhouse gas (GHG) emissions; therefore, sustainable cloud operations are essential in addressing this challenge. GreenOps, or green operations, refers to cloud deployment and operational practices that account for environmental impact; it encompasses energy-efficient infrastructure design, optimized resource usage, virtualization, and the integration of renewable energy resources. This survey presents a summary of green cloud computing, including the current trends, challenges, energy-aware scheduling algorithms, and optimization techniques for obtaining energy-efficient cloud deployment.
Authors - Pranaav Contractor, Sanika Ajgaonkar, Nishanth Ravichandran, Satishkumar Chavan Abstract - This paper examines the interplay between demographic factors and a newly developed behavioral construct—modern investment curiosity—and how these elements collectively shape financial behaviors among higher education faculty. Drawing from survey responses of 145 educators situated in Kollam District, Kerala, India, the study applies descriptive statistical techniques alongside chi-square tests to evaluate four research hypotheses. The data reveals a predominantly risk-averse financial posture among participants, with post-retirement security ranking as the foremost financial goal and bank deposits serving as the dominant investment channel. Statistical testing shows no meaningful relationships between saving patterns and either household size or disability status. A statistically significant positive association emerges between investment curiosity and ownership of equity or mutual fund products (χ² = 8.40, p < 0.01). Additionally, marital status demonstrates a significant relationship with investment curiosity (χ² = 5.28, p < 0.05), where unmarried faculty report higher curiosity levels. These observations are consistent with established frameworks including the Life-Cycle Hypothesis and the Theory of Planned Behavior, positioning investment curiosity as a relevant psychological factor in financial decision-making. The paper offers practical suggestions for institutional programming and identifies avenues for subsequent scholarly inquiry.
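Chi-square tests of this kind can be reproduced in outline with SciPy on a contingency table; the table below is an invented example of curiosity level versus equity/mutual-fund ownership, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: low / high investment curiosity; columns: does not hold / holds
# equity or mutual fund products. Counts are illustrative only.
table = np.array([[52, 18],
                  [38, 37]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# The association is deemed significant at the 1% level when p < 0.01,
# the criterion behind the reported chi-squared = 8.40 result.
```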
Authors - B.Purnachandra Rao, Gaurang Jinka Abstract - Distributed systems rely on data replication to ensure availability, fault tolerance, and scalability across multiple nodes in modern cloud environments. Replication enables systems to maintain continuity even when individual nodes fail or experience network disruptions. However, replication often introduces synchronization delays between primary and replica nodes, known as replication delay. These delays can cause temporary data inconsistency, stale reads, and increased response latency, degrading application performance and user experience. As infrastructures scale to larger clusters, communication overhead, network latency, and workload variability further amplify replication delays, making efficient synchronization increasingly challenging. Traditional replication mechanisms typically rely on static synchronization intervals or sequential update propagation strategies. These approaches fail to adapt to dynamic network conditions and fluctuating workloads, resulting in inefficient data propagation and delayed consistency across nodes. In large-scale systems, such limitations may cause bottlenecks, reduced reliability, and inconsistent states during high workload periods or network congestion. Addressing replication delay is critical for maintaining reliability and consistency in distributed environments. Recent research emphasizes intelligent synchronization mechanisms capable of adapting to changing conditions. Adaptive synchronization strategies that monitor network latency, workload intensity, and node communication patterns offer improvements in replication efficiency. By enabling replication decisions that respond dynamically to system behavior, such approaches reduce synchronization delays and improve data consistency across clusters. Enhanced replication efficiency ultimately strengthens reliability, scalability, and operational performance in modern distributed computing platforms operating under variable workload conditions.
HOD, Department of Computer Application & Assistant Professor - Department of Computer Engineering, B.H.Gardi College of Engineering & Technology, Gujarat, India
Authors - Banda Rithija, MV Parth, Haripriya L, Skandan SS, Manju Abstract - The task of identifying cryptographic algorithms from ciphertext is a challenge within digital forensics and security auditing when there is no knowledge of either the plaintext or the key used. As modern encryption algorithms increase in sophistication, their output becomes indistinguishable from random noise, rendering traditional pattern recognition techniques ineffective. This paper proposes a two-stage Hierarchical Cipher Classifier: the first stage discriminates among three major cryptographic families (Symmetric, Asymmetric, and Hash); the second stage identifies the specific algorithm within those families across six modern encryption standards: Advanced Encryption Standard, Triple Data Encryption Standard, Blowfish, Rivest–Shamir–Adleman, ElGamal, and Secure Hash Algorithm 256-bit. To achieve high accuracy, we developed a hybrid feature space consisting of 167 attributes that includes both statistical and transform-domain features. We incorporated SHapley Additive exPlanations (SHAP) into our classifiers to address the black-box nature of deep learning. Empirical results indicate that the hierarchical classifier structure produces a substantial reduction in the rate of misclassifications compared to flat classifiers, offering a transparent and effective tool for automated cryptanalysis.
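In outline, the two-stage routing can be expressed as a family classifier feeding per-family algorithm classifiers; this is a minimal sklearn sketch under the assumption of a precomputed feature matrix (standing in for the 167-attribute space), not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HierarchicalCipherClassifier:
    def __init__(self):
        self.family_clf = RandomForestClassifier(n_estimators=200)
        self.algo_clf = {}                    # one classifier per family

    def fit(self, X, family_labels, algo_labels):
        self.family_clf.fit(X, family_labels)
        for fam in np.unique(family_labels):  # e.g. symmetric/asymmetric/hash
            mask = family_labels == fam
            clf = RandomForestClassifier(n_estimators=200)
            clf.fit(X[mask], algo_labels[mask])
            self.algo_clf[fam] = clf

    def predict(self, X):
        # Stage 1 picks the family; stage 2 picks the algorithm within it.
        fams = self.family_clf.predict(X)
        return np.array([self.algo_clf[f].predict(x.reshape(1, -1))[0]
                         for f, x in zip(fams, X)])
```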
Authors - Megha Potdar Abstract - This paper delineates a compact microstrip patch antenna that operates within the frequency range of 6.5 to 8.5 THz and exhibits a resonance frequency of 7.344 THz. The antenna maintains a flat, compact shape that is well-suited for terahertz circuit integration and also incorporates circular and U-shaped patch modifications that enhance radiation efficiency, gain, and bandwidth. According to the simulation results, the device has a gain of 7.042 dBi, a VSWR of 1.1329, a low return loss of –24.109 dB, and a wide impedance bandwidth of 1.119 THz. It demonstrates consistent radiation patterns and effective impedance matching across the operating frequency range, indicating that the proposed design outperforms conventional THz patch antennas and represents a highly efficient solution for high-speed terahertz communication, imaging, and sensing applications.
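As a consistency check, the reported VSWR and return loss are linked through the reflection coefficient, and the quoted figures agree:

```latex
|\Gamma| = \frac{\mathrm{VSWR} - 1}{\mathrm{VSWR} + 1}
         = \frac{1.1329 - 1}{1.1329 + 1} \approx 0.0623, \qquad
RL = -20\,\log_{10}|\Gamma| \approx 24.1\ \mathrm{dB}
```

which matches the stated return loss of –24.109 dB.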
Authors - Babatunde David Ikudehinbu, Atefeh Khazaei, Hamidreza Khaleghzadeh Abstract - In this paper, we outline the design and implementation of a novel electronic voting kiosk, dubbed BlockVote, which helps counter identity-related fraud and data tampering via biometric and blockchain-based approaches. The proposed system is a standalone embedded system running on an ESP32-S3 SoC-based microcontroller. The system includes a touchscreen display for user input and an optical fingerprint sensor for identity checking. The collected bio-data and voting selection are then integrated in such a manner that a secure transaction is created through cryptography. This transaction is sent through the Node.js gateway, which forwards it to the secure Ethereum-based blockchain network. Such a combination of physical verification technologies with blockchain technology ensures that the proposed voting system is more secure than traditional e-voting machines or e-voting websites. BlockVote is a hybrid security system in which hardware-based verification techniques are combined with blockchain-based data management in a power-saving, compact format. The prototype has shown proof of its functional viability, its module-based construction, and its reliability, particularly in the field of embedded systems. The experimental results demonstrate the system’s high precision, low latency, and robustness against illegitimate use. The suggested framework demonstrates the practical feasibility of blockchain and biometric technology in the creation of trustworthy electronic voting systems that can be used in both urban and rural areas.
Authors - Deepika K M, Girish Gowda J, Ravi Honnalli, Nikhil S G Abstract - Cloud computing environments face increasingly sophisticated cyber threats that demand advanced detection mechanisms capable of identifying anomalous behavior in real time. This study introduces an innovative hybrid temporal anomaly modeling system that integrates Autoregressive Integrated Moving Average (ARIMA) with Long Short-Term Memory (LSTM) networks, augmented by meta-learning fusion strategies. Our method addresses the difficult problem of achieving the high recall rates (>95%) needed to reduce missed critical threats while maintaining operational efficiency. We tested five meta-learning architectures—Logistic Regression, Random Forest, XGBoost, Gradient Boosting, and Neural Network—along with four rule-based fusion strategies on a large Cloud Anomaly Dataset with 249,595 samples taken from 11 virtual machines over 30 days. The Hybrid-RF (Random Forest) model had the best balance, with a recall of 95.75%, an accuracy of 10.59%, and an F1-score of 11.37%; this recall was much better than the literature average (75-85%). We deployed the system as a production-ready Flask REST API on Google Cloud Platform, with response times of less than 200 milliseconds, demonstrating that real-time cloud security monitoring is feasible. Our findings demonstrate that meta-learning fusion of statistical and deep learning temporal models yields enhanced threat detection capabilities relative to single-model approaches, achieving recall improvements of 10-20% over state-of-the-art methods while adhering to real-time performance constraints.
Authors - Anup Bhitre, Saurabh Nimje, Utkarsha Pacharaney, K. T. Reddy Abstract - Cervical Spinal Stenosis (CSS) is a progressive spinal disorder caused by narrowing of the spinal canal in the neck, potentially leading to severe neurological damage if undiagnosed. Due to rising CSS cases and the limitations of manual MRI analysis—such as subjectivity, time consumption, and inter-observer variation—there is a growing need for automated, reliable diagnostic tools. This study evaluates and compares four AI models—CNN, ResNet50, SVM, and Random Forest—using 1,200 T2-weighted MRI images processed through normalization, segmentation, and augmentation. Performance was measured using accuracy, precision, recall, F1-score, and AUC-ROC. ResNet50 achieved the highest accuracy (93.6%) and AUC-ROC (0.97), demonstrating superior diagnostic performance. SHAP was used for interpretability, highlighting spinal canal diameter and ligamentum flavum thickening as key diagnostic features. The findings confirm that deep learning, especially ResNet50, offers a scalable, interpretable, and clinically effective method for early CSS detection.
Authors - C Ashik Poojary, Chirag B Jogi, Sanath Shetty, Sandhya P, Mahitha G Abstract - Image inpainting plays an important role in restoring and reconstructing degraded or damaged images by filling in missing regions. This work proposes a gated convolutional neural network based on a U-Net architecture to achieve perceptually accurate and high-resolution restoration. The model was trained on a large-scale dataset of over 20,000 images derived from the CelebA dataset and extensively augmented with synthetic damage such as scratches, cracks, random patches, blurring, sepia-toning, and grayscale degradation. The proposed method performs two phases of restoration: context-aware inpainting, followed by resolution enhancement while preserving both global structure and local texture. Quantitative metrics such as PSNR and SSIM were evaluated, and qualitative comparisons demonstrate faithful texture synthesis and tone-consistent fills across color, grayscale, and sepia domains.
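The gating mechanism itself is compact; below is a sketch of one gated convolution block in the style of gated-convolution inpainting networks. The channel sizes and the image-plus-mask input layout are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """One conv branch produces features, a parallel branch produces a
    sigmoid gate that softly suppresses activations in damaged regions."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

# e.g. a first U-Net encoder stage taking RGB plus a binary damage mask:
block = GatedConv2d(in_ch=4, out_ch=64)
x = torch.randn(1, 4, 256, 256)      # [image(3) | mask(1)] channels
print(block(x).shape)                # torch.Size([1, 64, 256, 256])
```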
Authors - D.Nagaraju, Padinjaroot Monesh Raj, G. Likith, K. Kavitha, Thella Muni Chandrika Abstract - Gesture recognition technology is studied in this article as it pertains to controlling music wirelessly via a music controller device. The gesture recognition system highlighted in this study is an innovative advancement in this area. In addition to giving the user an easy-to-use interface for controlling music volume with hand motions, it provides a contact-free way to play percussion instruments from anywhere, regardless of the user's eyesight. The application also delivers a rich visual experience: 3D graphics and animations dynamically reflect the user's movements on screen as they create percussion music. The entire system is built with JavaScript and is therefore completely platform-independent, working in any recent web browser.
Authors - Aditya Ajitrao Kulkarni, Mayuri Shelke, Saurabh Babasaheb Gonte, Kalpak Sanjay Kedari, Parikshit Balasaheb Jadhav Abstract - Image inpainting is a fundamental problem in image restoration that focuses on recovering the missing or damaged areas of an image in a visually plausible and semantically consistent way. In practical image restoration tasks like historical photo restoration, images are often degraded by complex damage such as cracks, scratches, fading, stains, and tone changes. Conventional image restoration methods relying on interpolation or diffusion have limitations in restoring high-frequency details and global semantic information. This paper presents a gated convolutional neural network with a U-Net structure for effective image inpainting and restoration with resolution enhancement. The proposed network is trained on a large-scale dataset of more than 20,000 synthetically degraded images created from the CelebA dataset, covering various damage patterns such as scratches, cracks, random occlusions, blurring, grayscale conversion, and sepia tone transformation. The image restoration process involves two steps: context-aware image inpainting and resolution refinement. The proposed framework is extensively evaluated using PSNR and SSIM metrics for its effectiveness in color, grayscale, and sepia image restoration.
Authors - G Venkata Suresh Reddy, Immanuel Anupalli, P.Sudheer Abstract - Solar photovoltaic (PV) systems require robust and intelligent fault detection systems to guarantee that they continue producing energy effectively as they gain traction as a renewable energy source. To detect various defects in PV systems operating under nonlinear and noisy conditions, this research presents a data-driven fault classification framework that employs machine learning techniques. Electrical data from PV panels, including current-voltage (I-V) and power-voltage (P-V) curves recorded under three distinct operating conditions (Healthy, Shading, and Open-Circuit), formed the basis of the dataset used for training and testing the models. For each condition, crucial electrical characteristics were used to characterize the system's electrical behavior, including open-circuit voltage, short-circuit current, maximum power point voltage and current, fill factor, and several statistical descriptors. Logistic Regression, Naïve Bayes, and k-Nearest Neighbors (KNN) are the three supervised machine learning methods employed to classify the fault conditions. Each model was fine-tuned using hyperparameter tuning and k-fold cross-validation. In the comparative performance analysis, classification performance was greatest for Logistic Regression (96.09% accuracy, 96.25% precision, 96.49% recall, and 96.36% F1-score). The KNN model followed with a 95.47% accuracy rate, while the Naïve Bayes model remained reliable at 94.13% accuracy, demonstrating its continued effectiveness on noisy, nonlinear data. Overall, the results show that these machine learning algorithms, especially Logistic Regression, detect PV faults effectively in real time. The suggested framework is both efficient and practical for real-world PV monitoring systems because it requires only readily measurable electrical parameters (I-V and P-V data). Applying this strategy for preventive maintenance makes solar systems more reliable and increases their output, which in turn cuts power losses.
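The comparison protocol reads directly as an sklearn cross-validation loop; a hedged sketch with placeholder features standing in for the extracted I-V/P-V characteristics (Voc, Isc, Vmp, Imp, fill factor, and statistics):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X = np.random.rand(300, 8)    # placeholder for the extracted feature matrix
y = np.random.choice(["healthy", "shading", "open_circuit"], 300)

models = {
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "KNN":    make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "NB":     GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```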
Authors - Premanand Ghadekar, Utkarsh Patil, Niraj Ukare, Vansh Bhatt, Rohan Uplenchwar, Shreya Sidnale Abstract - Traditional multi-agent communication systems rely on fixed security protocols and static message processing pipelines, leaving them vulnerable to advancing cyberattacks and dependent on expensive infrastructure. This paper introduces a Secure Multi-Agent Communicational Protocol designed as a lightweight, affordable framework for small and medium-scale systems to communicate safely without enterprise-level costs. The current setup depends heavily on predictable session keys, making systems prone to impersonation, replay attacks, token alterations, and man-in-the-middle interceptions. This framework secures agent-to-agent interactions through three primary components: a predictive security model, a dual-token authentication mechanism, and a protocol-aware attack engine. The infrastructure utilizes WebSocket connections integrated with Redis Pub/Sub for real-time messaging. A dynamic session key generation process works alongside a rotating refresh-token system, ensuring that even if a session key is compromised, attackers still require a valid refresh token. The predictive component features a Protocol-Aware XLNet model with a dual-thread structure to examine message sequences and statistical irregularities. A fusion layer integrates these analyses, reporting a Dual-Thread Consistency Score of 0.87 and a 31% gain in early-warning capability. Experimental evaluations demonstrate 93.5% violation sensitivity, 91.7% replay detection accuracy, and 89.3% attack-type classification accuracy. This approach enables timely identification of replay incidents, interceptions, and protocol tampering. Additionally, an independent XGBoost model filters fraudulent links. These enhancements provide substantial gains in early warning capabilities and consistent classification accuracy across various attack categories.
Authors - Md. Mijanur Rahman, Mst. Tasnia Fahmida, Shithi Bhowmick, Md Tanzid, Zubaed Hossain, Zaid Bin Sajid Abstract - The halal industry has become a major foundation of the global Islamic economy, where religious adherence, consumer trust, and supply chain transparency are increasingly critical. Despite existing systems, halal poultry supply chains are persistently confronted by fragmented record stores, dependence on centralized databases, and restricted real-time traceability. These limitations present serious risks of mislabeling and regulatory non-compliance. The performance of existing blockchain-IoT traceability systems becomes increasingly doubtful as they grow more complex, owing to scalability issues and a lack of integration with halal regulatory systems and automatic compliance monitoring. This paper proposes a solution to these issues: a Blockchain-IoT Integrated Halal Poultry Traceability System (BIHPTS) implemented on Hyperledger Fabric. The proposed system integrates IoT telemetry for continuous data gathering, off-chain storage using the InterPlanetary File System (IPFS) to counteract the expansion of on-chain storage, and a dual-governance, rule-based structure built on smart contracts. This framework ensures that distributed, immutable, and secure records are accessible across the supply chain. The system validates the use of halal feed, authorized slaughtering processes, transportation constraints, and environmentally acceptable threshold limits.
Authors - Indrajitsinh J. Jadeja, Nirav P. Maniar Abstract - Agriculture plays a vital role in ensuring food security, yet traditional crop selection and yield estimation practices often fail to account for complex interactions among soil, climatic, and environmental factors. Recent advances in machine learning (ML) have shown significant potential in addressing these challenges by enabling data-driven decision support for farmers. This paper presents a comprehensive review of machine learning–based crop recommendation and yield prediction techniques, focusing on their effectiveness in improving agricultural productivity and sustainability. The study analyzes various supervised and ensemble learning models applied to soil quality parameters such as nitrogen, phosphorus, potassium, pH, moisture, and climatic variables. Emphasis is placed on multimodal data integration, highlighting how the fusion of soil, weather, and remote sensing data enhances prediction accuracy. The review also discusses current limitations, including data scarcity, model generalization, and real-time deployment challenges, particularly in resource-constrained farming environments. Finally, the paper identifies key research gaps and future directions toward developing robust, scalable, and intelligent agricultural decision-support systems.
Authors - Thony Enechi, Tevin Moodley Abstract - The legal profession is in a transformative era, driven by technological advancement and global shifts in businesses. This study aims to explore key factors influencing legal sustainability performance in Indian law firms, with a focus on Environmental, Social, and Governance (ESG) practices through an Artificial Intelligence (AI)-enabled computational intelligence perspective. While ESG frameworks are widely adopted across industries, their application in the legal sector remains limited due to overreliance on qualitative assessment and the absence of a computational decision mechanism. Considering legal infrastructure as a complex socio-technical system, this research adopts digitization and AI to enhance ESG-based accountability and governance. The proposed framework applied the Fuzzy Delphi Method to aggregate 30 legal experts’ knowledge and Fuzzy DEMATEL to computationally model interdependencies among ESG performance factors. This enables systematic identification of critical sustainability drivers and their causal relationships. The study contributes a computational intelligence-driven sustainability framework designed for the legal industry, offering both theoretical and practical insights for technology-enabled ESG implementation. The proposed intelligent system supports informed decision-making and strengthens environmental law enforcement and accountability within Indian law firms. Future research guidelines are also outlined.
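The DEMATEL step has a standard closed form once expert judgments are aggregated and defuzzified into a direct-influence matrix; a numpy sketch with an invented three-factor matrix (the study itself aggregates 30 experts over ESG factors):

```python
import numpy as np

A = np.array([[0, 3, 2],      # illustrative direct-influence scores
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

# Normalize, then compute the total-relation matrix T = D (I - D)^-1.
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s
T = D @ np.linalg.inv(np.eye(len(A)) - D)

# Prominence (r + c) ranks factor importance; (r - c) separates causes
# from effects, which is how critical sustainability drivers are identified.
r, c = T.sum(axis=1), T.sum(axis=0)
for i, (prom, net) in enumerate(zip(r + c, r - c)):
    print(f"factor {i}: prominence={prom:.2f}, "
          f"{'cause' if net > 0 else 'effect'} ({net:+.2f})")
```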
Authors - P.Srikanth, Immanuel Anupalli, P.Sudheer Abstract - The halal industry has become a major foundation of the global Islamic economy, where religious adherence, consumer trust, and supply chain transparency are increasingly critical. Despite existing systems, halal poultry supply chains are persistently confronted by fragmented record stores, dependence on centralized databases, and restricted real-time traceability. These limitations present serious risks of mislabeling and regulatory non-compliance. The performance of existing blockchain-IoT traceability systems becomes increasingly doubtful as they grow more complex, owing to scalability issues and a lack of integration with halal regulatory systems and automatic compliance monitoring. This paper proposes a solution to these issues: a Blockchain-IoT Integrated Halal Poultry Traceability System (BIHPTS) implemented on Hyperledger Fabric. The proposed system integrates IoT telemetry for continuous data gathering, off-chain storage using the InterPlanetary File System (IPFS) to counteract the expansion of on-chain storage, and a dual-governance, rule-based structure built on smart contracts. This framework ensures that distributed, immutable, and secure records are accessible across the supply chain. The system validates the use of halal feed, authorized slaughtering processes, transportation constraints, and environmentally acceptable threshold limits.
Authors - Om Sarvaiya, Maulik Shah Abstract - Brain-computer interface systems can help people who are unable to communicate due to paralysis or severe motor disabilities. In this work, we implemented an EEG-based P300 speller that allows users to select characters by focusing on a visual stimulus. The system functions by means of the P300 signal that appears when the user identifies their target character. We developed a complete pipeline that includes preprocessing of EEG data, feature extraction, and machine learning model classification. The system was tested using the BNCI Horizon 2020 P300 dataset, and the results showed that character selection accuracy ranged from 82% to 86%. Random Forest performed better compared to other classifiers in our implementation. The system was designed in a modular way so that future improvements can be added easily. This implementation shows that EEG-based communication systems can be developed using accessible tools and can support basic communication for people with severe motor impairments.
Authors - Anita Anand, Shivangi Surati Abstract - Artificial intelligence has transformed the predictive analysis of electoral processes by enabling a deeper understanding of candidates' preferences and behaviors through digital data. This study aimed to develop and compare deep learning models for sentiment analysis based on aspects of Ecuadorian electoral opinions. The Cross-Industry Standard Process for Machine Learning methodology was adopted. A dataset of Spanish-language comments collected from YouTube and Twitter, associated with presidential candidates, was constructed. Three classification architectures were implemented: BETO, BETO with Long Short-Term Memory (LSTM), and BETO with Bidirectional LSTM (BiLSTM). The results show that the hybrid architecture BETO with BiLSTM achieves the best performance, with an F1-score of 84.51% and precision of 85.09%, surpassing the other architectures and reaching levels comparable to international studies that employ BERT and hybrid models in political analysis. This model was integrated into an interactive dashboard that allows users to visualize the distribution of positive, neutral, and negative sentiment by candidate, making it a valuable tool for analyzing digital public opinion trends in Ecuador. Future work includes incorporating data balancing techniques, expanding the volume and time frame of comments, integrating demographic and geographic variables, and exploring more advanced models based on transformers and Large Language Models.
Authors - Valeria Alexandra Yunga Manzanillas, Pablo Andres Figueroa Juca, Nelson Oswaldo Piedra Pullaguari Abstract - In the digital era, the global emergence of COVID-19 has necessitated the development of transformative technology to redefine how we interact with and manage public health crises. To effectively slow mortality rates, this work emphasizes the critical requirement for accurate and rapid diagnostic methods that enable early-stage disease detection. Drawing on the necessity for more efficient systems, this paper proposes a high-fidelity diagnostic framework utilizing Convolutional Neural Networks (CNN), Deep Neural Networks (DNN), and Transfer Learning algorithms. Implemented through a TensorFlow-based 3-class classification strategy, the system was evaluated using a dataset of 817 chest X-ray images (comprising COVID-19, pneumonia-affected, and normal images). The experimental results yielded accuracies of 93.29% for the CNN, 92.68% for the DNN, and a superior 97.56% for the Transfer Learning approach, which outperforms state-of-the-art methods. These results demonstrate that such high-fidelity computational models provide the conceptual clarity and robustness needed to revolutionize traditional diagnostic methods. By providing instant feedback and a meaningful interpretation of complex medical imagery, the proposed system allows clinical practitioners to achieve precise detections in significantly less time.
Authors - Shahin Makubhai, Ganesh R Pathak, Pankaj R Chandre, Raju Gurav Abstract - Artificial intelligence (AI)–driven personalization is increasingly embedded in digital customer journeys to enhance relevance and efficiency. However, such systems simultaneously raise concerns related to surveillance, autonomy, and trust, particularly in data-intensive service environments. This study investigates how AI personalization intensity and recommendation quality influence perceived surveillance, perceived autonomy, trust, customer experience, and loyalty within AI-enabled hotel journeys. Using a quantitative approach, survey data were collected from 200 hotel guests who interacted with AI-based personalization features. The proposed model was tested using Partial Least Squares Structural Equation Modeling (PLS-SEM). The findings reveal that AI personalization intensity and recommendation quality significantly increase perceived surveillance and perceived autonomy, while perceived surveillance plays a central role in trust formation. In contrast, customer experience and loyalty are weakly explained by AI personalization alone. The study contributes to ICT research by demonstrating that AI-driven systems primarily shape cognitive and perceptual mechanisms rather than directly driving behavioral outcomes, highlighting the importance of human-centered and ethically designed AI personalization in digital service contexts.
Authors - My-Phuong Ngo, Hoang-Thanh Ngo, Loan T.T. Nguyen Abstract - Automated classification of enterprise support tickets is a foundational natural language processing (NLP) task for intelligent service management systems. While transformer-based models have achieved strong performance on benchmark datasets, their behavior under real-world enterprise constraints—such as class imbalance, domain shift, calibration reliability, and retraining cost—remains insufficiently understood. This paper presents a comprehensive and reproducible NLP framework for enterprise ticket classification, systematically evaluating classical machine learning baselines, full fine-tuning of transformer encoders, and parameter-efficient fine-tuning using Low-Rank Adaptation (LoRA). Extensive experiments are conducted on a large enterprise-style ticket corpus using time-based splits, out-of-domain testing, imbalance stress, calibration analysis, inference latency, and ablation studies. Results show that transformer-based models substantially outperform classical baselines, while LoRA achieves comparable performance to full fine-tuning with significantly reduced training overhead. The proposed evaluation protocol and findings provide practical guidance for deploying robust and efficient NLP systems in enterprise environments.
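The LoRA path can be sketched in a few lines with the PEFT library; the base checkpoint, label count, and adapter hyperparameters below are illustrative assumptions, not the paper's settings.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=8)     # e.g. 8 ticket categories

lora_cfg = LoraConfig(
    task_type="SEQ_CLS",
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],   # DistilBERT attention projections
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
# Only the low-rank adapter matrices (plus the classification head) are
# updated, which is what keeps retraining cost far below full fine-tuning.
```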
Authors - Hiren Darji, Devarsh Chandiwade, Tushar Panchal, Meenakshi Chandra, Swapnil Gharat Abstract - Strategic decision making in time-dependent systems often involves complex trade-offs between short-term performance gains and long-term degradation effects. Designing effective strategies in such environments requires accurate modelling of performance evolution and careful evaluation of discrete intervention decisions. This paper presents an intelligent strategy simulation framework that integrates data-driven modelling and predictive analytics to evaluate decision strategies under progressive performance degradation. Using high-frequency Formula 1 telemetry data as a representative case study, the proposed framework models lap-time evolution as a function of degradation age and operational context. Both regression-based models and neural network predictors are employed to estimate performance trends, enabling comparison between linear baselines and nonlinear learning approaches. A simulation engine is then used to evaluate multiple strategic scenarios by incorporating degradation dynamics and discrete intervention penalties, allowing quantitative assessment of alternative decision policies. The framework enables direct comparison of strategy outcomes through cumulative performance metrics and visual race progress analysis, providing interpretable decision support. Experimental results demonstrate that both degradation rate and decision timing have a significant impact on overall system performance. Furthermore, neural network models consistently outperform linear regression in capturing non-linear degradation behaviour, particularly during extended operational phases. Although demonstrated using motorsport telemetry data, the proposed approach is generalizable to a wide range of real-world optimization and decision-support problems involving degradation, uncertainty, and staged decision points.
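A toy version of the simulation loop makes the trade-off concrete: lap time grows nonlinearly with stint age, and each pit stop resets age at a fixed time penalty. The coefficients and penalty below are invented for illustration, not fitted telemetry values.

```python
def lap_time(age, base=90.0, lin=0.08, quad=0.003):
    """Nonlinear degradation model: lap time (s) vs. stint age (laps)."""
    return base + lin * age + quad * age ** 2

def race_time(total_laps, pit_laps, pit_penalty=22.0):
    t, age = 0.0, 0
    for lap in range(1, total_laps + 1):
        t += lap_time(age)
        age += 1
        if lap in pit_laps:          # discrete intervention decision
            t += pit_penalty
            age = 0
    return t

# Compare cumulative outcomes of three candidate strategies over 56 laps:
for stops in [(), (28,), (18, 38)]:
    print(f"stops at {stops or 'none'}: {race_time(56, set(stops)):.1f} s")
```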
Authors - Prabhat Kumar Gupta, Perumal T, Karthick Pannerselvam Abstract - Large language models and retrieval-augmented generation (RAG) perform strongly on semantic queries, but incur considerable latency because they require embedding computation, vector similarity search, and generation at inference time. Such delays make them unsuitable for time-sensitive, domain-specific retrieval tasks. This paper introduces the Hierarchy Latent Retrieval Model (HLRM), a deterministic architecture able to answer semantic queries in O(1) constant time. HLRM unites hierarchical semantic routing with semantic hashing so that pre-validated units of knowledge can be retrieved directly, without runtime search methods or language-model generation. All computationally expensive processing is performed offline, so neither an embedding step nor a vector database is needed to run a query. Experimental assessment on a structured institutional knowledge environment demonstrates millisecond response times with very high exact-match accuracy. The findings suggest that HLRM offers a fast, interpretable, and reliable alternative to generative retrieval systems in deterministic settings where precision and response latency are paramount.
Authors - Khedkar Aboli Audumbar, Uday Pandit Khot, Balaji G. Hogade Abstract - Malicious or compromised internal users can act like normal users with valid login credentials and thus become difficult to detect. As a result of their similarity to normal users, traditional methods of detecting intrusions have difficulty identifying the subtle and changing behaviors of malicious insiders. This paper introduces a comprehensive User and Entity Behavior Analytics (UEBA) framework to help detect malicious insiders. It works by analyzing activity logs generated by the enterprise, then performs data cleaning and feature engineering, creating behavioral profiles for each user based on the attributes of time, environment, and behavior. These profiles are used to model normal interaction patterns, and with the DBLOF algorithm an outlier score is created for each profile, indicating whether a given user’s behavior has deviated from normal. To make the proposed system adaptable to changing environments over time, it utilizes deep learning algorithms to detect changes in behavior and to increase the accuracy of anomalous behavior detection. The proposed system also enables the ingestion of real-time data, the evaluation of risk, and the display of alerts in a visual format, providing the scalability and operational performance required to support large-scale organizations. Overall, the proposed system represents a reliable, modular, and understandable UEBA framework, capable of accurately detecting malicious insider threats and providing an efficient method for proactively mitigating risks through security operations within enterprises.
Authors - Poonam Chaudhary, Rita Chhikara, Nupur Prakash Abstract - This work addresses the challenge of Isolated Sign Language Recognition (ISLR) on mobile and edge devices, where computational resources, memory, and energy budgets are severely constrained. Existing approaches based on pixel-level three-dimensional convolutional neural networks are computationally expensive and sensitive to background variations, while recurrent models such as Long Short-Term Memory networks suffer from a sequential processing bottleneck that limits parallel execution on modern hardware accelerators. To overcome these limitations, this paper proposes a hybrid Adaptive Graph Convolutional Network (A-GCN) and Transformer architecture that decouples spatial and temporal modeling of skeletal sign representations. The A-GCN employs a learnable adjacency matrix to capture dynamic and semantically meaningful spatial relationships between skeletal landmarks, while the Transformer encoder leverages parallel self-attention to model long-range temporal dependencies without recurrence. Experimental evaluation on the 250-class Google Isolated Sign Language Recognition dataset demonstrates a Top-1 accuracy of 78.90%, outperforming a Bi-LSTM baseline by 6.96%. In addition, the proposed model achieves a throughput of 400.55 frames per second with a latency of 2.50 ms on accelerator hardware and maintains real-time performance on consumer-grade devices. These results demonstrate that landmark-based, parallel architectures enable accurate, real-time, and privacy-preserving sign language recognition suitable for deployment on standard mobile devices.
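The adaptive part is the learnable adjacency; a minimal PyTorch sketch of one such layer follows. The landmark count, feature sizes, and softmax row-normalization are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AdaptiveGCNLayer(nn.Module):
    def __init__(self, num_nodes, in_feats, out_feats):
        super().__init__()
        # Learnable adjacency: training reshapes landmark relationships
        # instead of fixing them to anatomical connectivity.
        self.adj = nn.Parameter(
            torch.eye(num_nodes) + 0.01 * torch.randn(num_nodes, num_nodes))
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, x):                      # x: (batch, nodes, feats)
        a = torch.softmax(self.adj, dim=-1)    # row-normalized mixing weights
        return torch.relu(self.linear(a @ x))  # aggregate, then project

layer = AdaptiveGCNLayer(num_nodes=21, in_feats=3, out_feats=64)
frames = torch.randn(8, 21, 3)   # 8 frames of 21 landmarks (x, y, z)
print(layer(frames).shape)       # torch.Size([8, 21, 64])
```

Temporal modeling would then run a Transformer encoder over the per-frame embeddings.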
Authors - Subin Simon, Prathilothamai M Abstract - Deep learning has shown significant potential in medical image classification; however, a systematic comparison of deep feature extraction strategies for multi-class diabetic eye disease assessment remains limited. This study presents a comprehensive comparative analysis of seven deep learning architectures, including conventional CNN, pretrained VGG16, Vision Transformer (ViT), Conformer, hybrid CNN-ViT, and attention-augmented variants incorporating Squeeze-and-Excitation (SE) and Convolutional Block Attention Module (CBAM). All models are evaluated under a unified preprocessing and training framework to ensure fair performance comparison. The investigation focuses on analyzing how different architectural paradigms capture discriminative local and global retinal features relevant to disease classification. Extensive experiments are conducted on public fundus image datasets using standard evaluation metrics, including accuracy, precision, recall, and F1-score. Experimental results demonstrate that hybrid and attention-integrated architectures outperform standalone CNN and transformer models. In particular, the Conformer architecture achieves the best overall performance, reaching approximately 91% classification accuracy in the four-class setting (Diabetic Retinopathy, Glaucoma, Cataract, and Normal), while the CNN-ViT model attains approximately 89% accuracy. These findings highlight the effectiveness of combining convolutional operations with global self-attention mechanisms for robust and discriminative feature extraction in automated diabetic eye disease classification.
Authors - M.Murugesen, Priyanka P Abstract - Deep learning–based medical image models have achieved expert-level performance in GPU-based research environments [1–3]. However, reliable deployment in real clinical systems remains challenging due to constraints related to power consumption, hardware stability, and long-term operation. While prior studies have focused on improving model architectures or hardware accelerators [4,5], relatively limited attention has been devoted to systematically managing the transition from GPU-based development to NPU-based deployment environments. This study formulates the GPU-to-NPU transition as an independent deployment research problem. Rather than proposing a new model architecture, we focus on preserving functional equivalence when transferring a validated GPU-trained medical vision model to an NPU-based inference environment. The proposed framework consists of reference model fixation, intermediate representation (IR)-based conversion [13–15], operator compatibility management, inference pipeline alignment, and output-level functional equivalence validation. The framework is evaluated through deployment of a ResNet-50–based pathology classification model on a commercial ATOM NPU platform. Experimental results demonstrate a 99.1% agreement rate (991/1,000 samples) between GPU-based and NPU-based inference outputs, confirming consistent decision behavior despite architectural differences. These findings indicate that deployment reliability depends more on execution environment control and preprocessing alignment than on model architecture modification. By redefining deployment as a structured research problem, this work provides a reproducible methodology for translating research-grade medical AI models into energy-efficient NPU inference systems under practical clinical constraints.
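The equivalence-validation step can be sketched with an ONNX round trip; here onnxruntime stands in for the vendor NPU toolchain, and the model, class count, and sample source are placeholder assumptions.

```python
import torch
import torchvision.models as models
import onnxruntime as ort

ref = models.resnet50(weights=None, num_classes=2).eval()  # fixed reference
torch.onnx.export(ref, torch.randn(1, 3, 224, 224),
                  "ref_model.onnx", input_names=["x"])

sess = ort.InferenceSession("ref_model.onnx",
                            providers=["CPUExecutionProvider"])
agree, n = 0, 100
for _ in range(n):
    x = torch.randn(1, 3, 224, 224)            # stand-in for validation tiles
    with torch.no_grad():
        y_ref = ref(x).argmax(1).item()
    y_conv = sess.run(None, {"x": x.numpy()})[0].argmax(1).item()
    agree += (y_ref == y_conv)
print(f"output agreement: {agree}/{n}")   # the paper reports 991/1,000
```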
Authors - Jayanthi J, P.Uma Maheswari, S.Uma Maheswari, Arun Kumar, Karishma V R Abstract - The rapid migration of artificial intelligence from cloud platforms to edge-based Internet of Things environments has intensified the demand for transparent and trustworthy decision-making under severe resource constraints. While edge intelligence enables low-latency and privacy-preserving analytics, the opacity of deployed models limits user trust, accountability, and regulatory acceptance. Existing explainability techniques largely assume cloud-level resources, making them unsuitable for real-time and energy-limited edge deployments. In order to close this gap, this work develops an interpretable intelligence framework that is resource-aware and adaptable, specifically designed for constrained IoT systems. The suggested approach integrates interpretability directly into the decision-making process, allowing for the generation of faithful, lightweight explanations in addition to predictions while dynamically adjusting to operational context and runtime restrictions. Further balancing local responsiveness with system-level insight aggregation and secure governance is achieved through hierarchical explanation control. Transparency, efficiency, and scalability are all in line with the framework's treatment of explainability as a fundamental system capacity. The study shows that adaptive, deployment-aware explainability can greatly improve edge intelligence's operational reliability and trustworthiness. These insights establish a foundation for building accountable and interpretable AI systems in real-world IoT environments.
Authors - Rimon Kumer Roy, Jannatul Ferdous, Kazi Lutfur Nahar Mithila, Sabbir Islam, Mohammad Zahid Hassan, Sadah Anjum Shanto Abstract - Early identification of ophthalmic disease is critical to preserve eyesight. We present DeepEye, a stacking-ensemble framework for multi-disease classification on the Eye Disease Image Dataset (EDID, Mendeley Data). After standardized preprocessing and augmentation, five architectures (ResNet50, VGG16, DenseNet121, EfficientNet-B4, and Vision Transformer) were trained and evaluated. The final ensemble integrates the top base models with a logistic regression meta-learner optimized via hyperparameter tuning. On a held-out test set, DeepEye achieves 91.34% accuracy and AUC of 0.9965, outperforming all constituent models and exhibiting stable gains across cross-validation folds. Model transparency is supported with Grad-CAM visualizations that localize disease-relevant regions, enhancing clinical interpretability. These results indicate that combining convolutional and transformer backbones within a tuned stacking framework yields a high-accuracy, explainable approach for automated eye disease detection in healthcare settings.
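The stacking recipe maps onto sklearn's StackingClassifier once per-image base-model outputs are available; a hedged sketch with random placeholders in place of the fine-tuned CNN/transformer features:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 32)        # placeholder deep features per image
y = np.random.randint(0, 5, 500)   # placeholder disease classes

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",  # meta-learner sees class probabilities
    cv=5,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```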
Authors - Nguyen Thi Hoi, Vu Thi Anh Hong, Dang Thi Anh Tho, Dang Thuy Linh, Nguyen Khanh Linh Abstract - The increasing use of renewable energy sources has made the integration of Flexible AC Transmission System (FACTS) devices into contemporary power systems an important area of research. The function and effectiveness of FACTS devices in enhancing power quality and preserving stability, in both traditional power systems and those that rely significantly on renewable energy sources, are comprehensively examined in this study. Variability and unpredictability brought about by renewable energy sources can negatively impact the voltage profile, particularly at high penetration levels. FACTS devices such as the Thyristor-Controlled Series Capacitor (TCSC) and Static Var Compensator (SVC) provide efficient ways to improve system stability and dynamically regulate voltage. This paper investigates a coordinated control strategy of SVC and TCSC for improving voltage profiles in a transmission network with high renewable energy integration. Using an IEEE-14 bus test system, various scenarios of renewable penetration are simulated to analyze the performance of coordinated FACTS operation. The findings show that the suggested coordinated control improves overall system dependability and power transfer capabilities in addition to reducing voltage variations and reactive power imbalances. The study highlights the importance of optimal placement and coordinated tuning of FACTS devices as a cost-effective solution for enabling secure and stable operation of renewable-rich power grids.
Authors - Josue Piedra, Nelson Piedra Abstract - Accurate crop production forecasting is essential for sustainable agricultural planning, effective resource management, and long-term food security. Conventional statistical and regression-based models often fail to capture the complex, nonlinear relationships that exist among agro-climatic variables, soil characteristics, and crop yield [1]. To address these limitations, this paper proposes an agentic artificial intelligence (AI)–based framework for crop production analysis that integrates autonomous decision-making with machine learning and deep learning techniques. The proposed framework utilizes agro-climatic and soil parameters such as temperature, humidity, soil moisture, cultivated area, and seasonal information to model crop production behaviour. Three predictive approaches—Linear Regression, Random Forest, and CNN–LSTM—are implemented and evaluated within the agentic framework using Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Determination (R2) as performance metrics. Experimental results demonstrate that the Random Forest model significantly outperforms the other models, achieving an RMSE of 0.56, MAE of 0.31, and R2 value of 0.96. These findings highlight the effectiveness of agent-driven machine learning systems in accurately modelling agricultural data and supporting intelligent decision-making for crop yield optimization.
Authors - Priyanka Halder, Anupam Sinha Abstract - This study analyzes the extent to which influencer credibility impacts consumers' buying behavior, focusing on how the intention to buy mediates this relationship in the context of social commerce on TikTok. The study is developed within the framework of Source Credibility Theory, which suggests that consumers' perception and consequent behavior are influenced by the perceived degree of the spokesperson's attractiveness, trustworthiness, and expertise. The study employs a quantitative explanatory methodology. A purposive sampling technique was used to collect data from a sample of 100 active TikTok users who follow the given influencer. The relationships were quantified using Structural Equation Modelling with Partial Least Squares (SEM-PLS). The results show that influencer credibility increases the intention to buy but does not directly increase the purchasing decision; the intention to buy completely mediates the relationship between influencer credibility and purchasing decision. This demonstrates that influencer credibility is a significant factor in buying intention, but it is the intention that converts the persuasive influence into actual buying behavior. The study contributes to digital marketing communication research by extending Source Credibility Theory to the context of short-video social commerce platforms.
Authors - Sarthak, Utkarsh Kumar Singh, Ankur Yadav, Aarushi Sharma, Samarth Saxena, Vaishnavi Kumari Singh, Anisha Kumari Abstract - Cervical cancer prediction using machine learning is often limited by class imbalance, dataset variability, and insufficient control of false positive rates. While many existing models report high accuracy, they frequently fail to maintain a clinically appropriate balance between sensitivity and specificity, particularly across datasets with different sizes and feature structures [1]. Models trained on large clinical risk-factor datasets may not generalize well to smaller behavioral datasets, and recall-oriented optimization can significantly increase false positives. This study proposes a false positive–optimized ensemble framework combining behavioral and clinical risk factors and analyzes its performance across two heterogeneous datasets. Threshold tuning and ensemble techniques, including soft voting and stacking, are employed to increase minority-class detection while retaining specificity. Results indicate that independent classifiers show dataset-dependent instability, with trade-offs between recall and false positive control. However, ensemble methods provide more consistent accuracy, precision, recall, and F1-score across datasets. The findings demonstrate that threshold optimization combined with ensemble learning improves cross-dataset robustness and supports more clinically reliable cervical cancer prediction.
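The threshold-tuning step described above amounts to a constrained search: pick the cut-off that maximizes recall subject to a false-positive budget. A minimal sketch with placeholder scores; the 10% budget value is an assumption, not the paper's setting.

```python
import numpy as np

def tune_threshold(y_true, y_prob, max_fpr=0.10):
    """Return (threshold, recall, fpr) maximizing recall with fpr <= budget."""
    best = None
    for t in np.linspace(0.01, 0.99, 99):
        pred = (y_prob >= t).astype(int)
        tp = ((pred == 1) & (y_true == 1)).sum()
        fn = ((pred == 0) & (y_true == 1)).sum()
        fp = ((pred == 1) & (y_true == 0)).sum()
        tn = ((pred == 0) & (y_true == 0)).sum()
        recall = tp / max(tp + fn, 1)
        fpr = fp / max(fp + tn, 1)
        if fpr <= max_fpr and (best is None or recall > best[1]):
            best = (t, recall, fpr)
    return best

y_true = np.random.randint(0, 2, 1000)                  # placeholder labels
y_prob = np.clip(0.4 * y_true + 0.6 * np.random.rand(1000), 0, 1)
print(tune_threshold(y_true, y_prob))
```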
Authors - Sowmini Devi Veeramachaneni, Yaswanth Gavini, Arun K Pujari Abstract - Combining Particle Swarm Optimization (PSO) with gradient-based local search enhances efficiency in solving complex optimization problems. Existing hybrids often use fixed switching rules, causing premature convergence or wasted computation. We present an adaptive PSO–gradient descent method where stagnation detection triggers local refinement only when needed. Adam is employed for local search without extra parameters. Tests on seven benchmark functions show the approach achieves strong or competitive results on challenging cases while ensuring robust convergence on simpler ones.
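The abstract above describes stagnation-triggered local refinement; the following is a minimal sketch of that idea, not the authors' implementation: a plain PSO loop whose global best is refined by an Adam-style optimizer whenever no improvement is seen for several iterations. Finite-difference gradients and all hyperparameters are assumptions made for self-containment:

```python
# Illustrative sketch (not the authors' code): PSO with stagnation-triggered
# Adam refinement of the global best, using finite-difference gradients.
import numpy as np

def sphere(x):                       # simple benchmark function
    return float(np.sum(x ** 2))

def num_grad(f, x, h=1e-6):          # central finite differences
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def adam_refine(f, x, steps=50, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(x); v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = num_grad(f, x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        x = x - lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return x

rng = np.random.default_rng(0)
dim, n = 5, 20
pos = rng.uniform(-5, 5, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy(); stall = 0

for it in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    if pbest_val.min() < sphere(gbest) - 1e-12:
        gbest, stall = pbest[pbest_val.argmin()].copy(), 0
    else:
        stall += 1
    if stall >= 5:                   # stagnation detected: local refinement
        gbest, stall = adam_refine(sphere, gbest), 0

print("best value:", sphere(gbest))
```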
Authors - Amna Ali, Rida Hijab Basit Abstract - With the advent of agentic Artificial Intelligence, systems have demonstrated significant ability to understand data and respond to changing business environments without human assistance. Agentic AI is largely being used in supply chain management (SCM) systems for automating supply chain tasks - demand forecasting and planning, logistics and transportation optimization, supplier management and risk reduction, and warehouse management. The use of agentic AI in SCM represents a drastic shift from traditional rule-based systems to automated, goal-driven systems that operate without human intervention. Such systems are supported by Natural Language Processing and deep learning models, which have made supply chain processes easier, more efficient, and less prone to error. Organizations that have incorporated agentic AI into their business processes have reported improved operational efficiency and cost effectiveness. However, such advancements in technology have raised concerns related to privacy, ethics, and data security. In this paper, we conduct a systematic review of the existing research on the usage of agentic AI in supply chain management. The paper discusses the characteristics of agents in SCM and different types of architectures, and analyzes the limitations and challenges related to the usage of AI agents in supply chain management.
Authors - Bikram Bikash Das, Chukhu Chunka, Pantha Kanti Nath, Nippu Kumar Abstract - Credit card transaction analysis is challenged by severe class imbalance with evolving spending behavior and large-scale financial data. Many existing fraud detection approaches rely on supervised learning and assume stable fraud labels, limiting robustness under changing fraud prevalence. This study presents a large-scale, multi-year credit card transaction dataset stored in partitioned Parquet format and conducts a systematic comparison of classical machine learning, supervised deep learning, and unsupervised deep learning models for customer spending behavior analysis. An exploratory behavioral analysis characterizes spending heterogeneity, temporal regularities, and channel and category variations. Supervised sequence models based on LSTM and CNN architectures are evaluated alongside unsupervised sequence autoencoders and hybrid detection pipelines across fraud rates ranging from 2–12%. To ensure fair evaluation under extreme imbalance, models are assessed using ranking-based metrics under fixed alert budgets, including precision–recall area under the curve and recall-at-K. A hybrid of Autoencoder and LSTM architectures achieves the highest performance for large systems. An integrated XAI module is introduced to derive important features, providing interpretable insights.
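For the fixed-alert-budget evaluation mentioned above, recall-at-K can be computed as in this small sketch (synthetic scores and illustrative budgets, not the paper's data):

```python
# Minimal sketch of budgeted ranking evaluation: recall-at-K under a fixed
# alert budget, as used for fraud scoring (synthetic, illustrative).
import numpy as np

def recall_at_k(y_true, scores, k):
    """Fraction of all fraud cases captured in the top-k highest-scored alerts."""
    top_k = np.argsort(scores)[::-1][:k]
    return y_true[top_k].sum() / max(y_true.sum(), 1)

rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.03).astype(int)          # ~3% fraud rate
scores = rng.random(10_000) + 0.5 * y                # informative but noisy scores
for budget in (100, 500, 1000):                      # fixed alert budgets
    print(budget, round(recall_at_k(y, scores, budget), 3))
```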
Authors - Dao Khanh Duy, Nguyen Hoang Hieu, Karn Nasritha, Khanista Namee Abstract - This research examines the effectiveness of four state-of-the-art transformer-based models (LaBSE, mBERT, XLM-RoBERTa, and mT5) for cross-lingual sentiment analysis of railway passenger feedback. We focus on transferring knowledge from high-resource languages (English, French, Vietnamese, and Korean) to Thai, a low-resource language in this domain. To address data imbalance and scarcity, the study investigates transfer learning strategies ranging from zero-shot to "ultra-shot" (using only 60 labeled samples) and high-shot paradigms. Experimental results demonstrate that while generative models like mT5 perform well in zero-shot settings, the LaBSE model achieves a superior accuracy of 94.65% under high-shot fine-tuning. Notably, our proposed ultra-shot strategy enables LaBSE to reach 90.42% accuracy with minimal data, effectively bridging the performance gap without extensive annotation. These findings suggest a strategic approach for AI systems in railway operations: rather than investing in large-scale datasets or computationally heavy models, operators can implement the ultra-shot strategy by fine-tuning robust sentence-embedding models like LaBSE with a small set of gold-standard data to achieve optimal performance and cost-efficiency.
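A hedged sketch of the ultra-shot setting described above: the paper fine-tunes LaBSE itself, whereas this simplified stand-in freezes the public LaBSE checkpoint and trains a lightweight classifier on its language-agnostic embeddings from a handful of labeled samples. The Thai examples here are invented for illustration:

```python
# Hedged sketch: ultra-shot sentiment classification on frozen LaBSE embeddings.
# The paper fine-tunes the encoder; a small logistic head stands in for it here.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("sentence-transformers/LaBSE")

# Hypothetical gold-standard Thai samples (~60 in the ultra-shot setting).
texts = ["รถไฟมาตรงเวลา บริการดีมาก", "รถไฟล่าช้ามาก แย่ที่สุด"]
labels = [1, 0]                                  # 1 = positive, 0 = negative

X = encoder.encode(texts)                        # language-agnostic embeddings
clf = LogisticRegression(max_iter=1000).fit(X, labels)

print(clf.predict(encoder.encode(["พนักงานสุภาพและช่วยเหลือดี"])))
```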
Authors - Nazar Melnyk, Oleksandr Korochkin Abstract - Reliable prediction of rare critical events is a key enabler for modern risk management, civil protection, and decision support systems, yet it remains challenging due to extreme class imbalance and strict requirements on false alarm rates. We present an ensemble learning framework that combines a deep feed-forward neural network with a Random Forest classifier, complemented by temporal feature engineering and precision-oriented optimization. The approach addresses three objectives: extracting informative temporal and regional patterns from raw event logs, learning calibrated probabilistic scores under severe imbalance using focal loss, and tuning per-region decision thresholds to achieve high precision while preserving acceptable recall. As a case study we apply the framework to air alert prediction over 25 administrative regions across 38 months, totalling 774,125 hourly observations. The system attains 96.13% accuracy, 75.1% precision, and 77.9% recall, demonstrating that high-precision early warning is feasible in strongly imbalanced settings. The framework is applicable to a wide range of safety-critical rare event prediction tasks.
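The calibrated scoring under imbalance relies on focal loss; a standard binary focal-loss implementation (Lin et al.) is sketched below, with alpha and gamma values chosen for illustration rather than taken from the paper:

```python
# Sketch of binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t),
# which down-weights easy negatives under severe class imbalance.
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Batch-averaged binary focal loss; alpha/gamma are illustrative."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                       # p_t = p if y=1 else 1-p
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(8)
targets = torch.tensor([0, 0, 0, 0, 0, 0, 0, 1.0])   # rare positive event
print(binary_focal_loss(logits, targets))
```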
Authors - Lavinia Chiara Tagliabue, Silvia Meschini, Viviana Vaccaro, Hira Ovais, Silvana Dalmazzone, Gianluca Torta, Ferruccio Damiani, Stefano Rinaldi Abstract - Named Entity Recognition (NER) is an essential task for sequence labelling and information extraction that plays a fundamental role in subsequent Natural Language Processing (NLP) applications, such as information retrieval, question answering, knowledge graph development, and machine translation. Although significant advancements have been made in NER for high-resource languages, achieving effective entity recognition in Indian languages continues to be an unresolved research challenge because of linguistic diversity, complex morphology, typological differences, flexible word order, script differences, and prevalent code-mixing. The scarcity of annotated datasets and the lack of standardized evaluation metrics further limit supervised and transfer learning methods in these low-resource environments. This document introduces a multilingual NER framework rooted in sentence embeddings derived from Large Language Models (LLMs) and inference guided by prompts. The suggested method employs contextual, language-independent embeddings obtained from pretrained multilingual LLMs to encode semantic representations of Indian and foreign languages within a common embedding space. Rather than using traditional token-level classification, entity recognition and classification are achieved via structured prompting, allowing for zero-shot and few-shot generalization without the need for task-specific fine-tuning. The system guarantees that entity identification and retrieval take place in the same language as the input text, maintaining linguistic accuracy and reducing error propagation caused by translation. To tackle domain variability and informal writing, prompt constraints/guardrails and simple rule-based normalization are utilized to manage orthographic differences, script inconsistencies, and code-mixed phrases often found in user-generated content and social media. Experimental assessment across various Indian languages shows reliable enhancements in precision, recall, and F1-score compared to traditional neural and transformer-based benchmarks, especially in low-resource conditions. The findings suggest that embeddings powered by LLMs along with prompt-based reasoning provide a scalable and data-efficient option for multilingual NER. This work advances the development of resilient, inclusive, and language-adaptive systems for extracting information in linguistically varied settings.
Authors - Murat Aydın Abstract - Combining Particle Swarm Optimization (PSO) with gradient-based local search enhances efficiency in solving complex optimization problems. Existing hybrids often use fixed switching rules, causing premature convergence or wasted computation. We present an adaptive PSO–gradient descent method where stagnation detection triggers local refinement only when needed. Adam is employed for local search without extra parameters. Tests on seven benchmark functions show the approach achieves strong or competitive results on challenging cases while ensuring robust convergence on simpler ones.
Authors - Hiep. L. Thi Abstract - Flexible Job Shop Scheduling Problems (FJSP) involve large discrete decision spaces and strict feasibility constraints, making them challenging for deep reinforcement learning methods. In this work, we study how state representation and feature extraction architecture influence the performance of action-masked Proximal Policy Optimization (PPO) in flexible scheduling. The scheduling task is formulated as a sequential assignment of operations to machines with a fixed discrete action space, where infeasible actions are removed using a feasibility mask. The environment state is represented using three heterogeneous feature blocks describing resource availability, operation readiness, and time-related attributes of assignment alternatives. We compare a baseline single-branch encoder with a multi-branch feature extraction architecture that processes these blocks separately before aggregation. Experiments were conducted on the Brandimarte MK benchmark suite (MK01–MK10). Under identical training conditions, the multi-branch representation achieved lower makespan on 9 out of 10 instances, with relative improvements ranging from 2.4% to 27.8% compared to the single-branch baseline. The largest reductions were observed on MK06 (−27.8%) and MK10 (−25.2%), while performance remained comparable on MK08. Training results indicate improved stability and more consistent convergence for structured representations. These results demonstrate that structured state design and feature extraction architecture are critical factors in action-masked reinforcement learning for flexible job shop scheduling.
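The feasibility mask described above is typically applied by suppressing infeasible logits before the policy samples an action; a minimal PyTorch sketch follows (the mask and action count are invented for illustration):

```python
# Minimal sketch of action masking for discrete-action PPO: logits of
# infeasible machine-operation assignments are set to -inf before sampling.
import torch

logits = torch.randn(10)                       # scores for 10 assignment actions
feasible = torch.tensor([1, 0, 1, 1, 0, 0, 1, 0, 0, 1], dtype=torch.bool)

masked_logits = logits.masked_fill(~feasible, float("-inf"))
dist = torch.distributions.Categorical(logits=masked_logits)

action = dist.sample()                         # only feasible actions have mass
print(action.item(), dist.probs)               # infeasible entries are exactly 0
```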
Authors - Karn Na Sritha, Khang Tran Chi Nguyen, Dao Khanh Duy, Khanista Namee Abstract - Multimodal affective computing systems (MACS) aim to improve affect prediction performance by fusing the complementary cues in visual and audio channels. While late fusion approaches are modular and can be flexibly deployed, they often rely on static modality weights, which presupposes fixed reliability among modalities. In practical situations, the visual stream can be corrupted by occlusion, illumination variation, and motion artifacts, while the audio stream can be interfered with by noise, reverberation, or channel mismatch. Moreover, domain shifts between different datasets further contribute to the problem of inconsistent calibration across modalities, which results in inaccurate fused predictions. In this paper, a reliability-aware late fusion model is proposed to enhance robustness for multimodal emotion recognition. Based on independently trained FER and SER branches, we conduct a theoretical variance-covariance stability analysis of linear late fusion under modality imbalance. We further investigate entropy-driven reliability estimation and calibration-aware weighting schemes. Experimental results from the original test reports are incorporated into the theoretical framework, providing evidence that a modality's dominance is related more to its entropy stability and calibration characteristics than to raw unimodal accuracy. Our results also indicate that reliability-aware weighting increases robustness under simulated degradation and missing modalities, without the need to retrain the unimodal models.
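One simple instance of the entropy-driven reliability estimation mentioned above (illustrative only; the paper's exact weighting scheme may differ) is to shrink each modality's fusion weight as its predictive entropy grows:

```python
# Illustrative entropy-driven reliability weighting for late fusion: each
# modality's weight decreases with its predictive entropy (uncertainty).
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def entropy_weighted_fusion(posteriors):
    """posteriors: list of per-modality class-probability vectors."""
    max_h = np.log(len(posteriors[0]))                 # entropy of the uniform dist.
    weights = np.array([max_h - entropy(p) for p in posteriors])
    weights = weights / (weights.sum() + 1e-12)
    fused = sum(w * p for w, p in zip(weights, posteriors))
    return fused, weights

fer = np.array([0.70, 0.20, 0.10])     # confident visual (FER) branch
ser = np.array([0.40, 0.35, 0.25])     # noisier audio (SER) branch
fused, w = entropy_weighted_fusion([fer, ser])
print(np.round(fused, 3), np.round(w, 3))
```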
Authors - Deepak sharma, Pankajkumar Anawade, Anurag Luharia, Gaurav Mishra Abstract - The rapid digital transformation of modern society has significantly increased the complexity of network infrastructures and the sophistication of cyber threats. Traditional rule-based and signature-based security systems are increasingly ineffective against advanced persistent threats, zero-day vulnerabilities, and AI-driven cyberattacks. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as transformative technologies that enhance network security through intelligent threat detection, automated response, and predictive analytics. However, the integration of AI and ML also introduces new vulnerabilities, including adversarial attacks, model poisoning, privacy concerns, and algorithmic bias. This paper critically examines the evolution of network security through AI and ML, analyzing both the technological advancements and the emerging risks associated with their deployment. The study argues that while AI-driven security systems represent a significant improvement over traditional mechanisms, careful governance, transparency, and robust model protection are essential to mitigate new threats introduced by intelligent systems.
Authors - Isha Bhagat, Rishita Chourey, Anjali Kurhade, Vedika Desai, Meenal Kamalakar, Vishal Goswami, Nayan Wagh Abstract - In the shadow of overlooked safety violations, factories have lost thousands, in terms of capital as well as lives, which is especially harrowing because many of these losses were caused by easily preventable work accidents or easily noticeable defective machinery. Our paper examines how artificial intelligence-based methodologies can help mitigate these risks, based on past and present research. We also recommend a potential prototype system, informed by the literature we reviewed, for real-time worker safety checks and automated industrial machine quality inspection. We have reviewed four major topics pertaining to our system: [1] Personal Protective Equipment (PPE) compliance detection through CCTV monitoring as opposed to manual monitoring, [2] industrial machine quality inspection for automatic defect identification, [3] evaluation of previously used object detection models and their performance for industry applications, and [4] system-level considerations for practical large-scale deployment of such systems. We have compared methods, deployment strategies, and results from existing studies to identify key criteria such as scalable architectures and low-latency processing. We also highlight challenges such as insufficient annotated data for rare machinery defects, maintaining accuracy in harsh industrial conditions that might otherwise hinder detection of safety violations, and ethical issues around worker monitoring.
Authors - P Subhash, P. Abhi Varshini, V. Udai Sree, P. Praneeth Reddy, Sai Mahitha Abstract - The recognition of transaction fraud in credit cards remains a major problem, mainly because of the extreme imbalance between genuine and fraudulent transactions. Traditional evaluations focus primarily on accuracy, which is often inadequate and misleading because fraud occurrence is only about 1% of all the data. Many recent studies in this field have focused on deep learning and machine learning structures, while few works emphasize relatively simple structures that can cope with class imbalance and variance without complicated frameworks. A publicly accessible dataset containing 284,807 transactions has been used here for a comparative study. For classification, three learning algorithms - Logistic Regression, Random Forest, and XGBoost - have been used. Precision-Recall AUC (PR-AUC), Matthews Correlation Coefficient (MCC), precision, and recall have been used to assess model performance rather than accuracy alone. Random Forest shows a steady outcome with a strong balance between false positive control and detection capability. The analysis also reveals that naive class-weighting strategies can significantly increase recall while producing impractically high false positive rates. Feature importance analysis further enhances interpretability and provides insight into influential transaction components.
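The imbalance-aware evaluation described above can be reproduced in outline as follows; the data here is synthetic (roughly 1% positives), standing in for the public 284,807-transaction dataset:

```python
# Sketch of imbalance-aware evaluation: PR-AUC (average precision) and MCC
# instead of raw accuracy, on a synthetic ~1%-fraud dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

print("PR-AUC:", round(average_precision_score(y_te, proba), 3))
print("MCC:  ", round(matthews_corrcoef(y_te, clf.predict(X_te)), 3))
```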
Authors - Tintu Pious, Adon Hale J Payyapilly, Akshit Charan, Amal Suresh, Ashwin Babu Mampilly Abstract - The shift toward decentralized energy grids has established Vehicle-to-Vehicle (V2V) power transfer as a cornerstone of modern EV infrastructure. Central to this exchange is the Dual Active Bridge (DAB) converter, a bidirectional DC-DC topology prized for its high power density and galvanic isolation. The DAB utilizes two symmetrical H-bridges linked by a high-frequency transformer; one bridge acts as an inverter while the other performs synchronous rectification, depending on the power flow direction. Managing energy between independent batteries is challenging due to fluctuating voltage levels that create "moving targets" for control systems. Traditional PID loops often struggle with the instability caused by sudden voltage shifts in dynamic V2V scenarios. This project implements a Fuzzy Logic Controller (FLC) based on a voltage mapping principle. By comparing real-time voltage profiles of donor and receiver batteries, the FLC automatically determines the current direction and optimal phase shift angle without requiring complex mathematical modelling. Beyond emergency charging, this technology enables EVs to function as a mobile, distributed energy storage system within Smart Grids. It optimizes microgrid management in commercial hubs by sharing power autonomously, preventing transformer overload during peak demand. This approach ensures that decentralized energy sharing is both reliable and commercially viable.
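As a toy sketch of the voltage-mapping principle described above (membership functions, linguistic terms, and rule outputs are all invented for illustration and are not the authors' rule base), a hand-rolled fuzzy controller might map the donor-receiver voltage difference to a phase-shift command like this:

```python
# Toy sketch of the voltage-mapping idea: a hand-rolled fuzzy controller that
# maps donor-receiver voltage difference to a DAB phase-shift command.
# All memberships and rule outputs are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_phase_shift(dv):
    """dv = V_donor - V_receiver (volts); returns a phase shift in degrees."""
    # Rule firing strengths for three linguistic terms of dv.
    rules = [
        (tri(dv, -60, -30, 0), -40.0),   # "negative": reverse power flow
        (tri(dv, -15, 0, 15), 0.0),      # "near zero": idle
        (tri(dv, 0, 30, 60), 40.0),      # "positive": forward power flow
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0     # weighted-average defuzzification

for dv in (-25, -5, 0, 10, 35):
    print(dv, round(fuzzy_phase_shift(dv), 1))
```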
Authors - Vladislav Vasilev, Georgi Iliev Abstract - In this paper we derive a new estimate of the channel bit rate. The estimate is a special transformation of the main EVT theorem that is particularly designed for use in automated telecommunication systems, meaning it is robust to noise, computationally cheap, needs very few data points, and requires no manual validation. Due to the EVT methodology we can evaluate whether the bit rate can keep dropping indefinitely or whether it has a guaranteed minimum value. The method is relatively fast because it uses Newton's interpolation instead of hypothesis testing or regression.
Authors - Deepak sharma, Pankajkumar Anawade, Anurag Luharia, Gaurav Mishra, Akshit Yadav Abstract - The exponential growth of cybercrime, cloud-native infrastructures, Internet of Things (IoT) ecosystems, encrypted communications, and AI-enabled adversarial techniques has fundamentally challenged traditional digital forensic methodologies. Conventional forensic frameworks developed for static systems cannot scale to high-velocity, heterogeneous data environments. This study proposes and empirically evaluates a lifecycle-oriented AI-enhanced digital forensic architecture integrating machine learning (ML), deep learning (DL), graph analytics, and explainable AI (XAI). Across benchmark datasets in intrusion detection, malware classification, multimedia authentication, and textual intelligence extraction, AI-enhanced systems significantly improved detection accuracy (up to 98.3%) and reduced analyst workload (40–60%). However, adversarial robustness testing and explainability evaluation reveal governance and admissibility challenges. The findings demonstrate that while AI enhances scalability and zero-day detection, its responsible adoption requires reproducibility controls, interpretability safeguards, and alignment with legal standards such as Daubert.
Authors - Netochukwu Onyiaji, Lukas Cironis, Leonid Bogachev, Liqun Liu, Janos Gyarmati-Szabo, Roy A. Ruddle Abstract - This study examines the adoption of AI-enabled hotel chatbots by investigating the role of technology readiness and consumer perceptions in shaping guests’ attitudes and behavioral intentions. Drawing upon the Technology Acceptance Model (TAM) and the Technology Readiness Index (TRI 2.0), the research integrates technological and psychological determinants of AI service adoption in hospitality settings. Data were collected from 270 hotel guests who had previously interacted with chatbots in four-star hotels in Jakarta and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results indicate that technology readiness, perceived convenience, and perceived information quality significantly influence guests’ attitudes toward AI hotel chatbots. However, attitude and perceived convenience do not directly translate into adoption intention, revealing an attitude–intention gap. The model explains 61% of the variance in attitude and 38% in behavioral intention. These findings extend technology adoption literature by highlighting the role of psychological readiness and service perceptions in shaping guest adoption of AI-enabled hospitality technologies.
Authors - Aung Nyein Chan Paing, Sudhir Kumar Sharma Abstract - This paper presents a semantic video search system that supports natural language querying over video content using vision–language models and vector similarity search. The proposed system processes videos offline by extracting representative frames through similarity-based filtering, generating textual descriptions using a pre-trained BLIP (Bootstrapping Language–Image Pre-training) image captioning model, and encoding the captions into dense vector embeddings. These embeddings are indexed in a vector database to enable efficient retrieval of relevant video segments based on textual queries. The system architecture comprises a Python-based backend with GPU acceleration for video processing and a web-based interface for query interaction. Experimental observations indicate that similarity-based frame filtering reduces redundant frames by approximately 50–70% while preserving semantic information. Qualitative evaluation demonstrates that the system effectively retrieves semantically relevant video timestamps in response to natural language queries. The proposed framework serves as a modular prototype for content-based video retrieval and semantic video analysis applications.
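A hedged sketch of the offline indexing step described above, assuming the public Salesforce/blip-image-captioning-base checkpoint and an all-MiniLM-L6-v2 sentence encoder (the abstract specifies BLIP but not the embedding model, so the latter is an assumption, as is the frame filename):

```python
# Hedged sketch of the offline indexing step: caption one extracted frame with
# a pre-trained BLIP model, then embed the caption for vector search.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from sentence_transformers import SentenceTransformer

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

frame = Image.open("frame_00042.jpg")                # hypothetical extracted frame
inputs = processor(images=frame, return_tensors="pt")
caption = processor.decode(captioner.generate(**inputs)[0],
                           skip_special_tokens=True)

vector = embedder.encode(caption)                    # dense embedding for the index
print(caption, vector.shape)
```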
Authors - Qing Li Abstract - Intrusion Detection Systems (IDS) are critical for cybersecurity, yet conventional approaches based on machine learning often suffer from limited explainability, high computational cost, and scalability issues. We introduce Recommendation-Driven IDS (RD-IDS), a novel framework that models security events and detection rules as a hypergraph, reformulating intrusion detection as a structured recommendation problem. Detection is achieved through the computation of minimal transversals, identifying minimal and actionable sets of security measures. RD-IDS is formally defined with hypergraph representations, recommendation semantics, and UML-based architecture, ensuring traceability and modularity. Algorithmically, we leverage minimal transversal enumeration, including the Fredman–Khachiyan dualization method, and analyze temporal and spatial complexity, demonstrating that structural reductions and active set optimizations mitigate overhead. RD-IDS offers deterministic, explainable, and scalable detection by construction, providing a principled alternative to machine learning-centric IDS. This work establishes the formal and algorithmic foundations of RD-IDS, laying the groundwork for practical implementation and experimental validation in a companion study.
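For intuition, minimal transversals (minimal hitting sets) of a tiny hypothetical hypergraph can be enumerated naively as below; the paper itself leverages the Fredman–Khachiyan dualization method, which this brute-force sketch deliberately does not implement:

```python
# Naive sketch of minimal transversal (minimal hitting set) enumeration for a
# small hypergraph; the measure/event names are invented for illustration.
from itertools import combinations

def minimal_transversals(vertices, hyperedges):
    hits = lambda s: all(s & e for e in hyperedges)   # s intersects every edge
    found = []
    for r in range(1, len(vertices) + 1):             # smallest sets first
        for combo in combinations(sorted(vertices), r):
            s = set(combo)
            if hits(s) and not any(t <= s for t in found):
                found.append(s)                       # minimal by construction
    return found

# Hypothetical events-to-rules hypergraph: each edge is the set of security
# measures that covers one security event.
edges = [{"fw", "ids"}, {"ids", "mfa"}, {"fw", "mfa"}]
print(minimal_transversals({"fw", "ids", "mfa"}, edges))
```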
Authors - Ahmed Alansary, Molham Mohamed, Ali Hamdi Abstract - A quantum secret sharing (QSS) scheme is a cryptographic protocol for sharing a secret among parties in a secure way, such that only the set of all authorized parties can reconstruct the secret using the quantum information. In this manuscript, a multi-secret sharing scheme (namely, qMSS) is proposed and analyzed, utilizing a quantum error-correcting code (CSS code) for generating and reconstructing shares. qMSS generates n quantum shares of an m(≤ k)-bit classical secret using an [[n,k,d]]q CSS code and distributes the shares among n participants. This work generalizes the sharing of a one-bit classical secret utilizing CSS codes, proposed by Sarvepalli and Klappenecker. The set of all authorized parties is identified by minimal codewords associated with the classical code underlying the CSS code. The proposed qMSS is a perfect multi-secret sharing scheme, since the set of all unauthorized parties is unable to obtain any information about the secret.
Authors - Pratham Vasa, Amishi Desai, Chahel Gupta, Avani Bhuva, Mohini Reddy Abstract - Content Delivery Networks (CDNs) play an essential role in enhancing content delivery speed by caching frequently requested data in edge servers distributed across geographical regions. Traditional CDNs utilize rule-based policies and machine learning approaches for optimizing the cache. Machine learning is performed centrally, and cache optimization is performed using the traffic logs collected by the central server. Although central learning approaches are beneficial, they pose certain limitations, including data privacy concerns and high communication cost, since the central learning approach aggregates raw data. This paper proposes a secure federated learning architecture for cache hit prediction in CDNs. The proposed architecture is evaluated using a synthetic dataset containing 130,548 records, whose features include temporal and network features. The proposed architecture is compared with the traditional central learning approach, and the results reveal that the secure federated learning model achieves an accuracy of 70.15%, which is comparable to the central learning approach. The proposed architecture is found to reduce data privacy exposure by 30%.
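The abstract does not detail the aggregation rule; assuming plain federated averaging (FedAvg), the core loop looks roughly like this sketch, where only model weights (never raw traffic logs) leave each region:

```python
# Minimal FedAvg sketch: each region trains locally on its own cache logs and
# only model weights are averaged at the coordinator (data stays local).
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Logistic-regression client update on private data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
d, w_global = 8, np.zeros(8)
clients = [(rng.normal(size=(200, d)), rng.integers(0, 2, 200).astype(float))
           for _ in range(4)]                       # per-edge-server datasets

for rnd in range(10):                               # communication rounds
    updates = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)             # federated averaging

print(np.round(w_global, 3))
```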
Authors - Syed Shanika Zaida, Kamineni Leela Tapaswi, Kilari Dhana Malikarjuna Rao, Adarapu Sandeep, Amar Jukuntla Abstract - Removable USB storage devices are widely used in day-to-day computing, but they also introduce risks such as unauthorized data transfer and misuse of external media. Understanding how these devices are used on a system is important during forensic investigations, especially when analyzing potential data leakage incidents. On Windows systems, traces of USB activity are not stored in a single location. Instead, they are distributed across registry entries, system logs, and file system records. Examining these sources individually often makes it difficult to form a clear picture of events. This paper introduces a forensic framework that brings together USB-related artifacts from multiple system components and analyzes them in a unified manner. The method gathers data from sources such as registry entries, Plug-and-Play logs, and file system structures, and then aligns them based on their timestamps. A Python-based implementation is used to automate this process and to relate device connection events with file operations. Experiments conducted on a Windows setup show that the framework can identify device usage and reconstruct the sequence of related activities with clarity. By combining evidence into a single timeline, the approach helps simplify analysis and supports consistent interpretation of results.
Authors - Sanchi Mahajan, Nandini Jain, Evangelin G, Jansi K R, Shivam Shivam Abstract - Efficient task scheduling in heterogeneous multi-cloud infrastructures remains an open issue due to scalability limitations, data privacy, and latency sensitivity. The conventional centralized scheduling approach requires data aggregation, which is associated with critical privacy challenges and communication cost. The proposed work designs a privacy-preserving federated multi-cloud task scheduling framework for smart mobility applications to overcome the limitations of conventional approaches. The framework employs a decentralized scheduler for separate cloud regions and a novel task abstraction approach that transforms real-time traffic data into task-scheduling forms. By employing federated learning-based aggregation, it eliminates the requirement to communicate raw traffic data and addresses scalability, routing, and multi-cloud coordination while ensuring data locality. The framework is evaluated through experiments against Random, Rule-Based, and Local-ML approaches using a Smart Mobility dataset. As the results show, considerable reductions in communication overhead and privacy leakage are achieved while preserving competitive execution latency and SLA compliance. The strategy scales well with an increase in cloud regions, as the communication scalability results indicate. Its ability to support federated, scalable, and privacy-aware job scheduling for smart traffic systems without central data sharing is what makes this work interesting.
Authors - Thota Neha, Napa. Sai Gopi, R. Aarthi Abstract - The increasing realism of deepfake media has raised significant concerns regarding the authenticity of digital content. Most existing detection methods rely on audio–visual fusion, which often introduces additional complexity and may degrade performance when one modality is unavailable or unreliable. This work presents a dual-stream deep learning framework that processes audio and video independently, avoiding explicit fusion. The audio stream employs a CNN–BiLSTM model on log-Mel spectrograms to capture temporal and spectral artifacts, while the video stream uses EfficientNet-B0 with BiLSTM to model spatial inconsistencies and temporal variations in facial sequences. Experiments conducted on multiple benchmark datasets, including ASVspoof 2019, WaveFake, LJSpeech, FaceForensics++, and Celeb-DF (v2), demonstrate that the proposed approach achieves competitive detection performance. In addition, the framework maintains robustness under missing modality conditions and offers improved interpretability compared to fusion-based methods. These results indicate that independent modality-specific learning provides a practical and effective alternative for deepfake detection in real-world scenarios.
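The audio stream's log-Mel front end can be sketched as follows (window, hop, and mel-band counts are illustrative, not the paper's settings; the input file is hypothetical):

```python
# Sketch of the audio front end: log-Mel spectrogram features that the
# CNN-BiLSTM audio stream would consume.
import librosa
import numpy as np

audio, sr = librosa.load("clip.wav", sr=16_000)     # hypothetical input clip
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = librosa.power_to_db(mel, ref=np.max)      # shape: (n_mels, frames)

print(log_mel.shape)   # e.g. (80, T); fed to the CNN as a 2-D "image"
```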
Authors - Ankit Podder, Piyush Ranjan Das, Soham Acharya, Ayushmaan Singh, Soumitra Sasmal, Partho Mallick Abstract - Static perimeter-based security architectures are now ineffective in the current threat scenario. The ability of attackers to obtain legitimate credentials and the presence of zero-day exploits often cause real-time breaches of the network perimeter. An area of concern is the real-time monitoring of these systems. In the current scenario, security monitoring is performed in a segregated manner, where network analysts analyze time-stamped network logs and identity analysts analyze time-stamped login attempts, without cross-referencing in real time between these two domains. The proposed solution is a fusion platform capable of ingesting raw network transport data and real-time human element monitoring data. This is achieved through the integration of two different threat detection mechanisms using a FastAPI backend. The first threat detection system is the Network Threat Detector (NTD), implemented in Python and using the Scapy library to parse deep packet data in real time for flow analysis. The second threat detection system is a JavaScript tracker designed for monitoring digital behavioral indicators and calculating real-time metrics such as mouse velocities, accelerations, kinematic jerk, and typing speeds. Real-time monitoring is achieved through a machine learning framework with three different modules for inferring user intent using the Random Forest algorithm, detecting anomalous statistical patterns using the Isolation Forest algorithm, and detecting malicious plaintext syntax using Logistic Regression. The system has been tested in a lab scenario and has been able to classify user session states into four different states - Engaged, Confused, Frustrated, and Suspicious - with accuracy exceeding 95%. These digital behavioral indicators are fed into the Network Threat Detector (NTD), allowing the computation of a real-time risk score.
Authors - Lavu Uha Saranya, T.V.S.S. Reddy, I.V.M.K. Sarma, Dipesh Kumar Kushwaha, T.N.V.D. Sai Krishna Abstract - Digital forensic investigations have typically focused on the identification of private browsing at the application layer using artifacts from memory and disk, despite the fact that modern browsers rely extensively on the operating system for fundamental capabilities such as rendering, input processing, and networking. This paper extends the forensic scope by demonstrating that data related to private sessions remains in shared subsystems of the OS in volatile memory. In particular, this paper examines the three primary components of the Linux desktop environment: the display compositor (GNOME Shell), the input pipeline (IBus daemon), and the network resolver (systemd-resolved). Utilizing physical memory acquisitions via LiME on an Ubuntu 25.04 system, this paper monitored the migration of high-entropy inputs across these subsystems. The results of this research indicate that critical session data, including window metadata associated with Wayland sessions, plaintext keystroke data received through D-Bus, and fallback queries made via DNS-over-HTTPS, remains in OS-managed memory for extended periods of time after the conclusion of the private browsing session. The authors provide a reproducible framework for OS-level memory analysis and demonstrate that browser-based privacy controls are structurally insufficient to fully sanitize volatile memory.
Authors - Venkata Saikumar Thalupuru, Shubham Kumar, Santhoshini Pranathi Singaraju, Vishal Gupta Abstract - As the use of online banking and digital payments has grown rapidly, institutions have been left at risk of becoming victims of credit card fraud, which has become a major challenge for traditional banks and other financial institutions. The huge class discrepancy in transaction datasets is one of the greatest challenges in fraud analytics, wherein rare fraudulent activity makes up only a tiny fraction of total transactions. Traditional machine learning models are often quite accurate overall but poor at detecting these occasional frauds. To overcome this limitation, this study proposes a cost-aware hybrid framework comprising an Attention-based Long Short-Term Memory (Attention-LSTM) network and ensemble-based machine learning. The method preprocesses the data, balances the classes using SMOTE, selects features based on mutual information, and leverages a soft-voting ensemble of Logistic Regression, Random Forest, and XGBoost models. Cost-aware learning is coupled with decision-threshold tuning to minimize false negative predictions. Additionally, SHAP-based explainability is added on top for enhanced transparency and interpretability of the model. The experimental results show 99.3% accuracy, 0.905 precision, 0.892 recall, 0.898 F1-score, and 0.98 ROC-AUC, indicating that the new framework is effective in detecting genuine financial fraud.
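A minimal sketch of the ensemble portion of the pipeline described above (synthetic data; hyperparameters illustrative): SMOTE is applied to the training split only, followed by a soft-voting LR/RF/XGBoost ensemble:

```python
# Hedged sketch of the ensemble side of the pipeline: SMOTE oversampling of
# the training split plus a soft-voting LR/RF/XGBoost ensemble.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=10_000, weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance train only

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("xgb", XGBClassifier(eval_metric="logloss"))],
    voting="soft",
).fit(X_bal, y_bal)

print("F1:", round(f1_score(y_te, ensemble.predict(X_te)), 3))
```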
Authors - Ismail Suleiman, Dinesh Reddy Vemula, Abhaya Kumar Pradhan Abstract - This paper presents the evaluation and demonstration phases of a Design Science Research Methodology (DSRM) study that produced the Organisational Security Culture Framework (OSCF) for Namibian Public Enterprises. An empirical needs assessment established a three-tier security culture maturity deficit: a 40% policy awareness gap; a widespread misconception among non-IT staff that cybersecurity is solely an IT responsibility; and a training gap in which 25% of staff had received no formal security training in the preceding year. The OSCF comprises five interrelated components: Risk Assessment, Security Policy and Enforcement, Security Compliance, Training and Awareness, and Ethical Conduct. Demonstration was executed across four staged phases: baseline assessment, component testing, pilot integration, and full-scale deployment. Evaluation employed a dual approach: expert panel review against eight criteria and Key Performance Indicator (KPI) measurement across five strategic objectives. Results confirm that the OSCF closed the 40% policy awareness gap, achieving 95% staff awareness post-implementation, and significantly reduced phishing susceptibility. Seven evidence-based refinements evolved the OSCF from a static policy model into a continuous security culture maturity loop. The framework's modular, tiered architecture supports long-term sustainability of behavioural change and scalable deployment across organisations of varying cybersecurity maturity, including federated multi-institutional environments.
Authors - Konstantina Rigou, George Dimitrakopoulos Abstract - The rapid adoption of Artificial Intelligence (AI) in high-impact domains (healthcare, finance, justice) creates an urgent need for systems that are legally compliant, explainable, ethical and transparent. Decision Support Systems (DSS) aim to assist managerial and professional decision-making, yet few works translate legal and ethical principles into concrete technical design constraints for explainable AI (XAI). This paper proposes a Legal Explainability Framework (LEF) that maps legal obligations (General Data Protection Regulation, European Union Artificial Intelligence Act) and ethical principles to measurable XAI requirements and implementation steps, and demonstrates the approach with a prototype using an open legal dataset derived from judgments of the European Court of Human Rights (ECtHR). The results show that legally compliant XAI is not merely a normative aspiration, but a technically feasible and practically implementable design paradigm.
Authors - P.Pandiaraja, N.Shiva Kumar, B.Vishnu Vardhan, C.Sevarathi, Charles Prabu V, S.Jagan Abstract - Retrieval-Augmented Generation (RAG) chatbots represent a significant advancement in intelligent conversational systems, grounded in the principles of natural communication, accuracy, and reliability. Traditional chatbots are constrained by pre-trained knowledge or rule-based responses, limiting their effectiveness in dynamic and complex real-world scenarios. RAG-based systems integrate information retrieval mechanisms with sophisticated language generation models to identify relevant knowledge in real time and produce contextually appropriate responses. The proposed system employs sentence-transformers (all-MiniLM-L6-v2) for dense vector embeddings and FAISS as the vector database backend, enabling fast and semantically accurate document retrieval. Experimental results demonstrate a mean retrieval accuracy of 87.4%, an average response latency of 1.3 s, and a user satisfaction score of 4.2 out of 5, confirming the system's readiness for real-world deployment.
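The retrieval core named in the abstract (all-MiniLM-L6-v2 embeddings indexed with FAISS) can be sketched as follows, with a toy three-document store standing in for the real knowledge base:

```python
# Sketch of the retrieval core: all-MiniLM-L6-v2 embeddings indexed in FAISS
# for nearest-neighbor lookup over a toy document store.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["Refunds are processed within 5 business days.",
        "Password resets require two-factor confirmation.",
        "Support is available 24/7 via chat."]

emb = encoder.encode(docs, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(emb.shape[1])     # inner product = cosine (normalized)
index.add(emb)

query = encoder.encode(["how long does a refund take?"],
                       normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)        # top-2 nearest documents
print(docs[ids[0][0]], scores[0][0])
```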
Authors - Manjula K, Vijayarekha K, Venkatraman B Abstract - The fabrication of components across various industries is accomplished through welding. Although welding has been practiced for more than a hundred years, defects may still occur during the welding process. Thus, industrial standards require welded joints to be inspected and evaluated to ensure their quality and reliability. Conventional ultrasonic testing (UT) has long been widely used in industry for detecting and evaluating defects in weld specimens. Over the last few decades, advances in sensor technology and signal analysis techniques have significantly advanced ultrasonic testing methods. Advanced methods, such as Time of Flight Diffraction (TOFD), are more likely to detect linear defects. However, one of the major challenges in applying TOFD to the inspection of austenitic stainless steel (ASS) weldments is noise in the signals. Various signal processing approaches have been developed to suppress such noise, each with its own advantages and limitations. In this work, the focus is placed on the application of multi-level discrete wavelet transform (DWT) decompositions with 'n'-order wavelet filters for de-noising ultrasonic TOFD A-scan signals. The results show that this approach achieves greater improvement in signal-to-noise ratio (SNR) while requiring less computational time.
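A minimal sketch of the de-noising scheme described above, applied to a synthetic A-scan-like signal; the db4 wavelet, five decomposition levels, and universal soft threshold are illustrative choices, not necessarily the paper's:

```python
# Sketch of multi-level DWT de-noising of a 1-D A-scan: decompose,
# soft-threshold the detail coefficients, reconstruct, report SNR gain.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.4) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)  # echo
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)          # multi-level decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate (MAD)
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised[:clean.size] - clean)**2))
print(round(snr, 2), "dB")
```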
Authors - Likhitha Ragha Ramya Nakka, Anuradha Andra, Appalaswami Ravada, Vinay Kumar Pamula Abstract - This study uses Roland Barthes' semiotic approach to analyze how meaning is represented in HMNS' Untitled Humans ad on Instagram Reels. Understanding how storytelling campaigns create and communicate meaning has become crucial for successful digital marketing as social media plays a bigger role in brand communication strategies. This study examines a selection of Instagram Reels content from the official Instagram @hmns account using a qualitative-descriptive methodology, emphasizing how text, sound, and visual components interact to provide multiple interpretations. The study methodically examines how everyday occurrences, human relationships, and nature scenery are turned into symbolic representations of authenticity, freedom, and personal identity using Roland Barthes' three-level semiotic framework: denotation, connotation, and myth. Direct observation and content documentation of Reels recordings are used for data gathering, and triangulation is used for analysis to guarantee validity and thoroughness. Results show that by creating an existential story that prioritizes closeness, introspection, and human connection, the campaign goes beyond traditional product advertising. Authentic, unconstructed life imagery is presented at the denotative level; visual and musical elements evoke emotion and personal memory at the connotative level; and perfume, rather than being a commercial product, becomes a symbol of emotional intimacy and identity exploration at the mythic level.
Authors - Deepak Mane, Siddhi Dhamal, Shivam Devkar, Divit Maheshwari, Riddhi Kaulage, Diya Nair, Deepak R. More Abstract - The evaluation of handwritten answer sheets has posed challenges for many years due to variability in handwriting, linguistic barriers, and personal bias. Manual evaluation is time-consuming and inconsistent, which highlights the need for automated subjective answer evaluation. The proposed automated handwritten answer evaluation system uses TrOCR-based handwritten answer detection, NLTK tokenization, WordNet lemmatization, and a semantic similarity check between the teacher's and student's answers based on meaning. This multi-model system overcomes the traditional keyword matching technique and improves contextual accuracy. It also replaces traditional manual checking, resulting in fast evaluation. The system promotes fair, fast, and accurate processing. Moreover, the suggested framework removes human fatigue, encourages fair grading, and offers a solution that can be used for large-scale academic tests. The results show that this automated method not only works like a human evaluator but also makes the evaluation process more fair and open.
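A hedged sketch of the grading core: TrOCR (the public microsoft/trocr-base-handwritten checkpoint is assumed) transcribes a scanned answer, and a sentence-embedding similarity score stands in for the paper's NLTK/WordNet-based semantic comparison; the input image and answers are hypothetical:

```python
# Hedged sketch: TrOCR transcription of a handwritten answer, followed by a
# semantic similarity score against the teacher's reference answer.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from sentence_transformers import SentenceTransformer, util

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
ocr = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed similarity model

image = Image.open("student_answer.png").convert("RGB")   # hypothetical scan
pixel_values = processor(images=image, return_tensors="pt").pixel_values
student = processor.batch_decode(ocr.generate(pixel_values),
                                 skip_special_tokens=True)[0]

teacher = "Photosynthesis converts light energy into chemical energy."
score = util.cos_sim(embedder.encode(student, convert_to_tensor=True),
                     embedder.encode(teacher, convert_to_tensor=True))
print(student, float(score))
```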
Authors - Deepak Mane, Deepak R. More, Arya Kale, Ravina Jagtap, Soumya Dubewar, Diya Nair Abstract - Timely detection of crop diseases is essential to ensuring high agricultural productivity; thus, early and accurate detection has always been a priority for farmers. We propose a deep learning based framework that classifies the condition of basil leaves into three categories - wilting, mildew infection, and healthy - through an EfficientNet-B0 convolutional neural network fine-tuned using transfer learning. We leverage a curated dataset of 1,442 plant images available on the Roboflow platform, splitting the dataset into 70% training, 20% validation, and 10% testing. Transfer learning was used: EfficientNet-B0 was initialized with weights learned from large-scale ImageNet pretraining. Training was done in two stages: first the model was trained with the backbone frozen and only the newly added classification head being trained, followed by unfreezing the last 100 layers and fine-tuning to the domain. Leaf orientation and illumination variability were handled by a group of data augmentation methods including random horizontal flipping, rotational transforms, zoom perturbations, and contrast adjustments. The proposed system achieved a remarkable result with high generalization: 96.6% training accuracy and 97.8% test accuracy. Detailed analysis of the confusion matrix and the ROC-AUC curves corroborates faithful multi-class discrimination. A Streamlit-based web interface was also developed to facilitate live inference, so farmers and agronomists are now able to make immediate disease predictions with confidence estimates. The results showed that a well-optimized EfficientNet-B0 model can be a feasible and scalable solution for automated monitoring of crop diseases in the context of smart agriculture.
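The two-stage recipe described above (frozen backbone, then unfreezing the last 100 layers) can be sketched in Keras as follows; the framework choice and hyperparameters are assumptions, and the fit calls are left commented out since the dataset objects are not defined here:

```python
# Sketch of the two-stage transfer-learning recipe with EfficientNetB0
# (layer counts and hyperparameters are illustrative assumptions).
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                                # stage 1: frozen backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),   # wilting / mildew / healthy
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the last 100 layers and fine-tune at a lower learning rate.
base.trainable = True
for layer in base.layers[:-100]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```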
Authors - Vinodkumar Bhutnal, Prajwal Vijay Sonawane, Om Vinod Chaudhari, Avinash Golande, Mohit Ashok Tajane, Sujal Kishor Papdeja Abstract - Nighttime security is among the most pressing issues in modern cities, industries, and public venues: the conventional approach of in-person patrolling works well only until fatigue and coverage become challenges, and human error introduces delays and gaps. Urbanization, increased crime rates, and the inadequacy of traditional patrolling to provide a sufficient security posture have led to the proposal of an Intelligent Night Patrolling System that uses edge-cloud frameworks, IoT-enabled CCTV camera technology, and artificial intelligence video analytics to significantly reduce the presence gap. This system provides continuous, real-time proactive surveillance of locations and is equipped with advanced deep learning models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to detect suspicious activity, anomalies, intrusions, and violent activity. This research introduces a Night Patrolling System designed to assist security personnel during night surveillance. The proposed system achieves an estimated accuracy of over 90% with reduced latency, demonstrating its effectiveness for real-time surveillance applications.
Authors - Deepak T. Mane, Deepak R. More, Gopal D. Upadhye, Rucha C. Samant, Hemlata U. Karne, Suraksha Suryawanshi, Prem Borse Abstract - Efficient vehicle type classification is vital for intelligent transportation systems, traffic monitoring, and urban mobility planning. This paper presents a Real-time Multimodal Vehicle Type Classification System that leverages both visual and acoustic data to identify and categorize vehicles such as cars, buses, trucks, and motorcycles from live video streams. The proposed system integrates CNN-based and Transformer-based models for feature extraction across modalities, enhancing detection robustness under diverse lighting, weather, and traffic conditions. A lightweight preprocessing pipeline performs synchronized frame extraction, audio segmentation, and feature fusion while ensuring minimal latency in real-time environments. The proposed multimodal architecture combines late fusion of visual and audio features to enhance the reliability of classification when either modality is suffering from low visibility or occlusion. Experimental evaluations demonstrate that the proposed framework achieves a classification accuracy of 96.2% at 28 fps, outperforming unimodal baselines with real-time efficiency. This system is deployable for intelligent traffic surveillance, automated tolling, and urban safety analytics.
Authors - Shwetha Ramadas, Krutthika Hirebasur Krishnappa, Sudhir Trivedi Abstract - Methane (CH4) emission from rice paddies is a significant source of greenhouse gas emissions from agriculture. Currently, most models for methane prediction from rice paddies depend on collecting field data and sending it to a server. In this paradigm, several privacy concerns arise, model scalability is restricted, and a large number of data points are exposed to attackers. This paper addresses these privacy concerns by providing an edge-based solution for modeling methane emissions from rice paddies that leverages data from edge sensors at their respective locations, while keeping individual sensor data private. The method employs different machine learning (ML) algorithms, including Linear Regression, Random Forest, XGBoost, and a Feedforward Neural Network (FNN), implemented using TensorFlow Federated (TFF) in both centralized and federated learning (FL) frameworks. The FL-based FNN achieved an R2 score of 0.91, which was superior to both centralized classical and centralized FL models, especially for highly non-IID client-side data distributions in sensor datasets. In summary, this paper extends the current literature on modeling methane emissions from rice paddies and provides a comprehensive evaluation of our proposed FL system architecture, an in-depth discussion of the communication resources required for FL implementation, and ablation studies on the effects of clients' data heterogeneity. The proposed FL approach is thus efficient and scalable, enabling safe, privacy-preserving modeling of methane emissions from rice paddies to effectively implement Climate Smart Agriculture (CSA) and mitigate global warming while supporting sustainable rice cultivation.
Authors - P. Pandiaraja, P. Krishna Kishore, E. Ganesh, C. Selvarathi, Charles Prabu V, S. Jagan Abstract - Large Language Models have facilitated the development of sophisticated smart platforms that are actively leveraged in the provision of financial services to various classes of customers. This advancement has enabled people to obtain individual financial advice. This paper presents a framework for building a financial chatbot that incorporates Retrieval Augmented Generation (RAG) technology and several SQL agents to improve reliability. The proposed approach addresses five fundamental challenges in financial artificial intelligence: eradicating hallucinations, obtaining up-to-date information, utilising user facts to tailor individual suggestions, safeguarding user privacy, and providing clear explanations. RAG is used to retrieve verified financial knowledge, while SQL agents query databases to produce accurate outputs. The solution provides advisory responses that are relevant to users and protects sensitive information through a zero-trust security architecture. The system architecture incorporates multiple validation checkpoints and is dynamically configured to meet individual user requirements. Experimental results demonstrate a 96.2% accuracy rate in handling financial queries with a 3.8% error rate and a mean response time of 1.5 seconds, outperforming comparable solutions. The proposed architecture establishes a reliable baseline for financial professionals seeking dependable advisory services.