Authors - Yohan Ranasinghe, Janice Abeykoon, Samantha Kumara Senavirathna Abstract - Efficient blood supply chain management is a critical global impera tive in healthcare, yet it is consistently hampered by significant post-expiry blood wastage. This issue, prevalent across diverse healthcare systems, represents a considerable loss of a vital and non-substitutable resource, primarily stemming from challenges in accurate demand forecasting and dynamic inventory coordi nation. To address this pervasive problem, this research proposes and validates a novel data-driven framework. The approach leverages a multivariate deep learn ing forecasting model, specifically a Multivariate Long Short-Term Memory (LSTM) network, integrated into a comprehensive platform designed for proac tive inventory management. The model's development and empirical validation utilize historical blood collection and transfusion data (January 2020 – December 2024) from a cluster center of the National Blood Transfusion Service (NBTS) in Sri Lanka, serving as a representative case study to demonstrate real-world applicability. The framework incorporates multivariate factors such as historical transfusion patterns, seasonal variations, and interdependencies between blood groups to generate more accurate demand predictions. The integrated system, de signed to support real-time inventory monitoring, automated near-expiry track ing, and digital blood request and redistribution mechanisms, aims to align blood supply with anticipated demand. The findings of this research demonstrate that this integrated deep learning and inventory optimization framework significantly improves blood stock utilization, minimizes wastage, and enhances the overall efficiency of blood supply systems. It offers a scalable and ethically governed solution, contributing broadly to efforts in sustainable healthcare delivery world wide.
Authors - Rashmi Y Matt, Shreya Srinivasan, Venkata Sravani Revuri, Vismaya Murali, Chandravva Hebbi, Natarajan Abstract - Preparing for technical interviews has become very challenging for computer science students due to highly competitive hiring environments and the lack of company-specific practice resources. Existing resources and Generative platforms provide generic questions that do not reflect the specific patterns, technical focus areas, or expectations of different requirements.To address this gap, we present a system that combines a structured knowledge-graph-based retrieval module with a fine-tuned LLamA-2-7B model to generate company-specific technical interview questions. The data set contains 28,854 curated questions from 470 companies, which were cleaned and used for finetuning. The proposed framework also integrates an evaluation pipeline using both LLM-as-a-Judge and manual scoring to check validity, clarity, and technical correctness.The fine-tuned LLamA-2-7B model integrated with the knowledge graph retrieval achieved the best performance, which significantly outperformed other generative models in producing contextually appropriate and technically relevant questions. This approach aims to provide students with more targeted preparation resources aligned with real-world hiring expectations.
Authors - Halima Tuj Saydia, Partha Chakraborty Abstract - The mental health issues, such as stress and suicidal threats, have become a major public health concern for students and young adults. Early identification of such conditions is important for timely interventions and prevention. The study aims to develop a two-stage hierarchical framework to predict stress and suicide risk early. It is based on the questionnaire survey dataset of 1436 responses. The hierarchical method utilizes psychological and lifestyle characteristics gathered through surveys, thereby eliminating the need for physiological sensors. The first stage develops machine learning (ML) models, namely XGBoost, Random Forest (RF), and Support Vector Machine (SVM), to detect stress. These models have achieved an accuracy of 93%, 88%, and 83%, respectively. If the individual is detected as stressed, it moves to the second stage for suicide risk detection. Deep learning (DL) models, mainly Artificial Neural Network (ANN), Deep Neural Network (DNN), and Recurrent Neural Network (RNN), are developed in the second stage. They have achieved accuracy of 94%, 90%, and 89%, respectively. The study presents a scalable, data-driven framework that supports early mental health screening in resource-limited communities.
Authors - Satrasala Hari priya, Sabhya Kulkarni, Sindhu Baddela, Spoorthi Krishna Devadiga, Suja CM Abstract - This paper evaluates the quantum entanglement techniques for the detection of Parkinson’s disease using multimodal clinical data from the PPMI database. Four encoding techniques are evaluated: Amplitude Encoding, Dense Angle, IQP-based Pauli, and Hierarchical. The results of the analysis indicate that accuracy and the efficiency of the circuit are greatly impacted by the entanglement technique. Amplitude Encoding is the most efficient for NISQ computers (92.00% accuracy, 6-depth circuits), while Dense Angle provides the highest accuracy (92.59%). Hierarchical entanglement is the least efficient (80.86%), showing that too much depth causes optimization difficulties. These results provide practical recommendations for the design of quantum circuits for medical diagnosis.
Authors - Chandan Kumar, Supriya Narad Abstract - In the contemporary digital landscape, the proliferation of cyber threats has become a pervasive and escalating concern, posing imminent dangers to individuals, businesses, and entire nations. Cyber intelligence emerges as a critical component in the ongoing battle against these threats, involving the systematic gathering, analysis, and dissemination of information pertaining to cyber threats, actors, and vulnerabilities. This research paper aims to provide an insightful examination of the existing landscape of cyber intelligence, delineating its fundamental sub-domains and highlighting areas ripe for future research. The paper begins by delving into the current state of cyber intelligence, emphasizing the dynamic nature of the digital threat landscape. It elucidates the multifaceted challenges posed by cyber threats, underscoring the need for a proactive and adaptive approach to intelligence gathering and analysis. This section also explores contemporary technologies and methodologies employed in cyber intelligence, ranging from advanced analytics and machine learning to threat intelligence platforms.
Authors - Shubham Kadam, Chhitij Raj, Pankajkumar Anawade, Deepak Sharma Abstract - Higher education in India is poised at a junction and change seems to be driven by the issues of quality, access and sustainable development. In this framework, the HR sustain ability is essential for recruiting, hiring and retaining competent employees. This paper discusses ICT enabled practices of Indian deemed universities in the direction of promoting HR sustaina bility. Drawing on Review of literature and theme analysis, it explores e-based practices such as e-recruitment, digital training, online performance management, wellness technologies digital knowledge collaborations platforms. The study reveals that adoption of ICTs promotes effective ness, transparency and inclusivity of HR functions through the maintenance continuous staff de velopment. Nonetheless, other contributors such as leadership support, digital literacy and policy environment were found to significantly influence implementation outcomes. Digital divides, lack of training, data privacy and cost are some of the other concerns highlighted by the review. An overview of future themes in which AI, personalized HR services, and eco- sustainable ICT platforms will play a significant role into developing Future-proofed University.
Authors - Mr. Shubham Kishor Kadam, chhitij Raj Abstract - Increasing demands of universities to become sustainable in their practice and the necessity to compete in the global arena have compelled higher education to the implementation of green communication infrastructure and smart ICT solutions in every facet of the university practice. As an ingredient of this change, there is the HR sustainability: that we will go digital faculty and staff, and at the same time retain them in friendly and efficient and inclusive systems that are environmentally friendly. The emergence of the green communication systems, intelli gent ICT infrastructures, and green HR practices is helping the higher education sector to fund their future in this paper. The article is narrowed down to new practices, such as the hiring without paper, the use of mobile based performance management and virtual training, that is generated under the secondary research and conceptual framework. It also talks about the benefits, chal lenges and opportunities of such system in higher learning institutions. The findings suggest that the effective adoption of the sustainable ICT will help improve the performance of the organiza tions, reducing the impact on the environment to a minimum and being part of the creation of the digitally resilient human resources.
Authors - Lakshmi BV, Anupriya S, Ningappa B, Diganth SD, RoopaRavish, Prasad B Honnavalli Abstract - Modern car infotainment head unit has become a highly connected cyber-physical system, incorporating Wi-Fi, Bluetooth, USB ports, and the Controller Area Network (CAN) bus. While such capabilities enhance the user experience, they also raise the susceptibility of the vehicle to attacks, and hence there is a need to assess the security of the vehicle. This paper performs a comprehensive penetration test on an infotainment system, examining wireless, wired, and in-car communication channels. For the Wi-Fi component, we performed a series of attacks such as Distributed Denial-of-Service (DDoS), deauthentication, MAC and IP spoofing attacks, creating fake access points, and WPA-based attacks to determine the robustness of the system against network-level threats. Bluetooth attacks included device snarfing, replay attacks, manual packet injection attacks, and unauthorized access to data. USB attacks were employed to analyze the dangers posed by connected devices, including the extraction of GPS information, log files, SMS messages, and access to the microphone and camera. For the CAN bus, we performed replay attacks, flooding attacks, manual frame injection attacks, and manipulation of sensor information such as humidity and temperature readings. The outcome of each of these attacks indicates that the infotainment system can serve as a means through which attackers gain access to the vehicle's network, and hence the need for enhanced authentication, improved security for the interfaces, and real-time monitoring for security breaches. This paper provides valuable information for enhancing the security of modern car infotainment systems and contributes to the efforts being made in the field of automotive cybersecurity.
Authors - Tejaswini Borkar, Kajal Salampuriya Abstract - This paper focuses on the product of state of the art artificial intelligence (AI) language models (that is, ChatGPT, Perplexity, and Grok) to generate and test algorithmic trading strategies in financial markets. With such AI tools in the field, the study examines the success of the tools in cases of generating trading signals, synthesizing market sentiment, and helping manage risks both through quantitative backtesting and through qualitative analysis. The conclusion is that though the procedures performed using AI-assisted tactics may be comparable to the findings of the use of conventional algorithmic processes and will outline beneficial information, the findings should undergo tangible verification and cautious human interventions to establish dependability and applicability. Our findings are indicators of the potential of the large language models as an addition to assist traders and researchers and indicate that caution is still necessary to integrate with the long-established quantitative methods and risk management functions.
Authors - Niraja Jain, Rajeev Kumar, Golnoosh Manteghi Abstract - Medical negligence litigation in India poses significant challenges to the justice delivery system due to the complexity of clinical evidence, fragmented legal documentation, and limited availability of structured decision-support mechanisms for legal practitioners. These challenges often result in delays, inconsistent legal reasoning, and increased cognitive burden on judges and lawyers handling medico-legal disputes. This paper presents the design and preliminary validation of a Judicial Decision Support System (JDSS) tailored specifically for medical negligence litigation in the Indian legal context. The proposed JDSS leverages advanced Natural Language Processing (NLP) techniques and supervised machine learning models to assist early-stage legal triage through automated case summarization, statutory section prediction, and precedent recommendation. Transformer-based language models are fine-tuned on publicly available Indian legal judgments and augmented with a domain-specific legal–medical ontology to bridge semantic gaps between clinical narratives and legal reasoning. Explainability is embedded at both the model and user-interface levels through attention visualization and feature attribution mechanisms, addressing transparency requirements critical for high-stakes judicial applications. The system has undergone formative evaluation through an exploratory stakeholder survey involving participants from legal, academic, and higher-education ecosystems in India. This evaluation focuses on perceived usefulness, trust, explainability expectations, and institutional readiness for AI-assisted judicial tools, rather than predictive performance. Findings from the survey informed key design choices, particularly the emphasis on explainable AI and modular deployment. While large-scale retrospective evaluation on real-world court data remains part of future work, the current study establishes a methodologically grounded and ethically aligned foundation for AI-assisted judicial decision support in resource-constrained legal environments, with scope for integration into India’s evolving digital judiciary infrastructure.
Authors - Sanket Shah, Jenice Bhavsar, Bhumi Shah, Jishan Shaikh, Khevana Raval, Ekta Vyas Abstract - Dyslexia is a neurodevelopmental condition that impairs reading fluency and phonological processing across languages. Early identification in school settings remains difficult because the Dyslexia Assessment for Languages of India (DALI) assessment tool requires expert administration which makes it difficult to implement in practice. The latest developments in artificial intelligence allow researchers to evaluate reading patterns through inexpensive devices which people commonly use. The research presents a system framework that uses multiple methods to combine webcam-based eye-tracking with voice analysis and machine learning methods for early dyslexia detection. The system examines tabular gaze and speech features through gradient-boosted models while using convolutional neural networks to encode spatial gaze patterns which include a meta-learning layer for multimodal fusion. The proposed framework enables practical implementation through its web-based interface which connects to secure backend services, thus providing schools with a privacy-protected and scalable method to conduct dyslexia assessments and provide personalized learning assistance in their resource-limited classrooms.
Authors - Geethashree A, Surabhi M R, Varshitha H N, Vipul S, Vivek M R Abstract - The RISC-V Vector Extension (RVV) enables scalable data-parallel processing through a flexible vector length architecture, offers a standardized and scalable approach to vector computing. Derived from an analysis of existing RVV architectures, this paper presents a focused architectural study and implementation of a basic RVV-based vector extension. Unlike complex, high-performance designs, the proposed architecture prioritizes simplicity and clarity, implementing only essential vector arithmetic and memory instructions. The vector extension is integrated with a single-cycle scalar RISC-V core, and instruction decoding is implemented and verified at RTL level. Functional simulation confirms correctness of RVV instruction decoding. This work bridges the gap between theoretical RVV studies and practical step-by-step hardware implementation.
Authors - Lalitha R, Husna Sarirah Husin, Suriana Ismail, Nikitha S, Kavya Darshini S, Pooja M Abstract - The data from Tamil Nadu government MSME programs is a treasure trove, but the information is fragmented and scattered in different kinds of documents. Consequently, it becomes a task for both the public and the analysts to process the data and get important insights. The paper introduces LKD-RAG, an explainable hybrid retrieval-augmented generation (RAG) system that relies on LLMs and KGs to make natural language queries possible on the data of these schemes collected from different sources. In the initial phase, the LLM started autonomously to discover entities, relations, and attributes, which eventually led to the creation of structured triples that signify factual statements (subject-predicate-object). The knowledge represented by these triples was loaded into Neo4j, thereby producing a MSME Scheme KG that is specific to the domain. Also, a document embedding layer was set up with SentenceTransformer ("all-MiniLM-L6-v2") that made it possible to do semantic retrieval of supporting textual evidence. When a query is made, Gemini decodes the person’s inquiry, finds relevant KG subgraphs and text embeddings, and constructs a response that is grounded on the evidence. The subgraph that corresponds to the answer is shown to the user, so the user can check what knowledge the model is relying on for its reasoning. Thus, the process facilitates transparency and the use of explainable AI (XAI) in policy analytics. The results of the experiments indicate that the hybrid RAG model not only has the ability to generate factually accurate responses but also to provide interpretation through different Tamil Nadu MSME programs.
Authors - Krashn Kumar Tripathi, Sachin B. Jadhav Abstract - In digital world, cyber-attacks are becoming more sophisticated and popular. The conventional intrusion detection models are not adequate in challenging threat escapes. Importantly, the major reason for increasing demand in the networks, unauthorized access is increasing their interests in these areas. Various network environments and organizations are tackling numerous of attacks on their network at frequent times. Traditionally, various manual methods are used for intrusion detection such as packet and flow analysis, traffic log reviewers and monitoring the security. Nevertheless, the manual techniques for such type of the detections takes too much time and also the result obtained is not up to the mark, so due to this it is difficult to predict all types of attacks and intrusions for network security. To overcome these issues, several conventional researches have concentrated on intrusion detection models to offer effective security to the networks. Conversely, it results with accuracy and speed lacks. For enhancing the intrusion detection, research make use of a Deep Learning (DL) Unravelled Spatial Features in Multilayer Perceptron with Gradient Jacobian Matrix. Gaussian Activation is used to enhance the Intrusion detection system for an effective classification. In the proposed research work we are using the RT-IoT dataset and the final efficiency has been analyzed by using various parameters like overall correctness, actually correct, correctly identified by the model,and the balance between the both values of recall and precision (Harmonic Mean). Furthermore, the current work and the proposed model is developed to contribute to avoid the different cyber threats by timely identifying such type of intrusion in the networks.
Authors - Maulana Amirul Adha, Maulana Paramaditya Ananta, Bayu Suhendry, Ria Rahma Nida, Eka Dewi Utari, Nur Athirah Sumardi Abstract - The challenge of generating accurate and contextually complete mod-els and prompts in Model-Driven Engineering (MDE) using Large Language Models (LLMs) is based on the current limitations in understanding the complex structured data. The significance of this issue lies at the heart of modern software development where MDE has taken the lead to advance development in the field moving towards with the aim of automating manual processes. To increase this automation, the application of LLMs holds the potential to reduce the manual effort and reduce human error involved in the process. To address this, we pro-pose a context-based prompt generation framework that integrates the techniques of Retrieval-Augmented Generation (RAG) with LLMs such as GPT-4 and CodeLlama to produce prompts that are contextually accurate and sound. Along with these LLMs, tools like FAISS, LangChain, and PlantUML are also em-ployed to produce detailed and structurally accurate UML models and prompt to enhance MDE understandability. In summary, the proposed framework aims to improve the accuracy and completeness of model generation by providing a con-textually correct prompt with a high level of accuracy and enhances the interpret-ability and ability of trust in AI-generated artifacts, creating the way for more efficient, automated, and user-friendly MDE processes.
Authors - Pierre Buys, Tevin Moodley Abstract - This paper presents a real-time chessboard state detection system that leverages computer vision and deep learning to automate a digital representation of a physical chess game. Traditional digitization systems either require manual input or specialized equipment. However, the proposed system addresses this problem by capturing a chess game in real time through the use of a smartphone camera. Detected piece positions are mapped to standard board coordinates and translated into Forsyth-Edwards Notation (FEN), enabling seamless integration with existing chess engines for analysis and move suggestions. The system works by firstly localizing the chessboard via Canny edge detection as well as a Hough transform. Thereafter, multi-class object detection is addressed by developing a two-stage R-CNN model alongside a single-stage YOLO model, allowing for a comparative evaluation of their respective methodologies and performance. The described system achieves a localization precision of 98.77% per board coordinate, whilst the two-stage R-CNN and single-stage YOLO models achieve a piece detection accuracy of 83.62% and 99.47%, respectively.
Authors - Hardik Modi, Mayur Makwana, Sagarkumar Patel, Dharmendra Chauhan, Siddhi Patel, Dhara Soni, Malvi Patel Abstract - Early and accurate detection of brain tumors is a critical requirement in modern clinical diagnostics, as it directly affects treatment planning, disease prognosis, and patient survival rates. The rapid increase in the availability and complexity of medical imaging data has intensified the need for reliable computer-aided diagnosis (CAD) systems to assist radiologists in consistent and precise tumor identification. Among various CAD techniques, medical image segmentation plays a pivotal role in differentiating abnormal tumor tissue from healthy brain structures in diagnostic images. This paper presents an automated brain tumor detection framework based on medical image analysis, implemented using a MATLAB-based graphical user interface. The proposed system processes Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans through a structured processing pipeline that includes image acquisition, noise reduction, contrast enhancement, feature-based segmentation, and tumor region visualization. The segmentation methodology is designed to accurately localize tumor boundaries while minimizing false-negative detections, which is a crucial requirement for clinical decision-making. The developed interface enables interactive visualization of segmented regions, allowing efficient analysis without the need for extensive computational expertise. The proposed framework offers a user-friendly and computationally efficient platform that reduces reliance on manual interpretation and improves diagnostic repeatability across clinical environments. The novelty of this work lies in the seamless integration of automated tumor detection, structured segmentation techniques, and real-time visual interpretation within a unified MATLAB-based environment, providing a practical and accessible CAD solution without dependence on complex hardware or deep learning infrastructures. Experimental observations indicate that the system enhances analysis efficiency and supports medical professionals in making faster, more reliable, and time-effective diagnostic decisions.
Authors - Najera R. Umpar Abstract - Artificial Intelligence (AI), as a technology, has the potential to change the manner in which organizations are run in the world. However, small and medium-sized enterprises (SMEs) in the Philippines have unique limitations in the use of AI in running the business. The study aims to explore the perceptions of SME managers in the Philippines on the use of AI, with particular reference to the limitations and facilitators in the use of the technology in the business environment. In this study, the researcher interviewed five SME managers from different sectors, including retail, manufacturing, and service sectors. The researcher used thematic analysis to identify the commonalities in the decisions made by the SME managers on the use of AI in the business environment. The study revealed the perceptions of the SME managers on the use of AI in the business environment in the Philippines, with the limitations and facilitators in the use of the technology in the business environment. The study provides practical insights that can guide strategies aimed at strengthening AI readiness and responsible adoption among SMEs in the Philippines.
Authors - Nilay Shah, Darsh Pandya, Nisarg Patel, Rudra Shah, Umang Shah, Dhaval Patel, Priteshkumar Prajapati Abstract - Early and accurate detection of brain tumors is a critical requirement in modern clinical diagnostics, as it directly affects treatment planning, disease prognosis, and patient survival rates. The rapid increase in the availability and complexity of medical imaging data has intensified the need for reliable computer-aided diagnosis (CAD) systems to assist radiologists in consistent and precise tumor identification. Among various CAD techniques, medical image segmentation plays a pivotal role in differentiating abnormal tumor tissue from healthy brain structures in diagnostic images. This paper presents an automated brain tumor detection framework based on medical image analysis, implemented using a MATLAB-based graphical user interface. The proposed system processes Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans through a structured processing pipeline that includes image acquisition, noise reduction, contrast enhancement, feature-based segmentation, and tumor region visualization. The segmentation methodology is designed to accurately localize tumor boundaries while minimizing false-negative detections, which is a crucial requirement for clinical decision-making. The developed interface enables interactive visualization of segmented regions, allowing efficient analysis without the need for extensive computational expertise. The proposed framework offers a user-friendly and computationally efficient platform that reduces reliance on manual interpretation and improves diagnostic repeatability across clinical environments. The novelty of this work lies in the seamless integration of automated tumor detection, structured segmentation techniques, and real-time visual interpretation within a unified MATLAB-based environment, providing a practical and accessible CAD solution without dependence on complex hardware or deep learning infrastructures. Experimental observations indicate that the system enhances analysis efficiency and supports medical professionals in making faster, more reliable, and time-effective diagnostic decisions.
Authors - D.K. Chaturvedi, Tipu Sultan Abstract - A real-time operating system (RTOS) should be able to recover from interruptions. Since RTOS systems are used in safety-critical environments, this function is essential for ensuring system availability and reliability. However, while many of the current anomaly detection techniques can detect faults, they do not provide any means for recovery. Therefore, in this paper, I propose a self-repairing RTOS framework that utilizes reinforcement learning (RL) to automatically select the best course of action to take when an anomalous event arises. I propose a Q-Learning agent that learns to recover from six types of common faults, including: sensor degradation, stuck sensor, priority inversion, memory leaks, sporadic overloads, and task starvation. The framework is built on FreeRTOS, and the agent utilizes an 8-dimensional state space and the six different types of recovery options available for each fault. The overall success rate of the system was 99.2 % after 5,000 training episodes, with average success rates of 98.0 % and 99.9 % when handling individual faults. The RL agent completely prevented system crashes and returned the system to normal operation within an average of 0.06 ms after an interruption occurred. The training results provide strong evidence that the model learned to operate effectively and consistently, with its success rate improving from 97.0 % during early training stages to 100 % after training was completed. Therefore, this study demonstrates a practical, production-ready method to implement autonomous fault recoveries in RTOSs in automotive applications. To our knowledge, this is the first successful implementation of RL for autonomous, self-repairing behaviors in this area.
Authors - Rashmi Vipat, Priyank Doshi Abstract - Agriculture plays a vital role in ensuring food security, yet traditional crop selection and yield estimation practices often fail to account for complex interactions among soil, climatic, and environmental factors. Recent advances in machine learning (ML) have shown significant potential in addressing these challenges by enabling data-driven decision support for farmers. This paper presents a comprehensive review of machine learning–based crop recommendation and yield prediction techniques, focusing on their effectiveness in improving agricultural productivity and sustainability. The study analyzes various supervised and ensemble learning models applied to soil quality parameters such as nitrogen, phosphorus, potassium, pH, moisture, and climatic variables. Emphasis is placed on multimodal data integration, highlighting how the fusion of soil, weather, and remote sensing data enhances prediction accuracy. The review also discusses current limitations, including data scarcity, model generalization, and real-time deployment challenges, particularly in resource-con-strained farming environments. Finally, the paper identifies key research gaps and future directions toward developing robust, scalable, and intelligent agricultural decision-support systems.
Authors - Amit Kalita, Himashree Kalita, Manjit Kalita, Abhijit Chakraborty, Kalpita Dey, Prajukta Deb Abstract - The significance of M- Health platforms to promote health equity has reached critical levels as digitalization in the healthcare sector continues to grow post pandemic. M-Health platform utilization in developing countries like Bangladesh has unique challenges: inconsistent adoption of the digital healthcare system, thus leading to a suboptimal delivery of healthcare services to customers. Using blended models i.e., Expectation-Confirmation Model (ECM), UTAUT2, and the DeLone & McLean IS Success Model, with Training on Virtual Consultation Skills as the moderating variable, the study intends to examine the adoption intention of healthcare providers to continuously use M-Health Platforms for a myriad of services like virtual consultation, remote patient monitoring, electronic prescriptions, and e-health record keeping. This study used Partial Least Squares Structural Equation Modeling (PLS-SEM) to evaluate 898 responses. Social influence, relative advantage, regulatory clarity, digital literacy, trust in technology, and system quality, which collectively improve doctors’ satisfaction with virtual consultation platforms, were identified as important to the purpose of the study. The results offer concrete steps that healthcare providers, platform creators, and policymakers can take to build and improve a solid and dependable M-Health platform that encourages sustained partnership with physicians by alleviating resistance that physicians may have about M-Health platforms in comparable developing countries.
Authors - Yaram Srinivasa Reddy, Bairoju Sreelatha, Shankar Lingam. M Abstract - Knowledge from a resource-rich source domain is leveraged in traditional transfer learning to enhance classification in a relatively data-scarce target domain. However, the resulting target models often suffer from overfitting and limited generalization, which restricts their utility in noisy and resource-constrained environments such as remote sensing. To mitigate these limitations, this work introduces a nuclear norm–regularized teacher–student framework for hyperspectral scene classification. In particular, the student model is regularized with the nuclear norm to encourage low-rank parameter representations, improving robustness to ambient noise. Further, we introduce a relative reconstruction loss (RRL) metric to measure the robustness of the student model to environment noise. Trained on several benchmark datasets, the proposed student model attains up to 87.0% classification accuracy on the independent test sets of UC Merced and EuroSAT, while remaining substantially lighter than the teacher network. Further, relative reconstruction values are computed for different amounts of noise added in the input space; RRL saturate to values less than 1.0 for all the datasets, substantiating that the regularized student model is indeed robust. The competitive performance of the regularized student model compared to the teacher network, its lightweight design, together with RRL values less than one, suggest that the proposed student model can effectively be deployed in noisy and resource-constrained environments such as edge and fog devices.
Authors - Nagesh Sharma, Priyanka Yadav, Kavita Singh Abstract - An accurate determination of childhood malnutrition is necessary for preventive measures. This paper proposes a modified scoring scheme comprising two new elements: the Integrated Anthropometric Score (IAS) and the Hybrid Integrated Score (HIS). IAS uses six primary anthropometric measurements, such as BMI, MUAC, WHZ, WAZ, HAZ, and skinfold thickness, along with selected interaction terms that capture the non-linear connections between growth parameters. The weights are determined by regularized logistic regression, allowing the score to be transparent while still adapting to the statistical structure of the data set. To further stabilize the predictions, the HIS combines BAI, IAS, and a machine learning probability component to make the predictions robust in both synthetic and real-world samples. The models were developed using a synthetic dataset of 9,456 children and tested with five-fold cross-validation and a separate real-world dataset of 38 children. Interaction selection and regularization were performed to control noise sensitivity and avoid overfitting. The findings indicate that the IAS model outperforms BAI with its higher cross-validated accuracy (0.93) and strong performance on real data (0.95). The HIS stays consistent in accuracy across areas and indicates better generalization. The results suggest that by combining multidimensional anthropometric characteristics, interaction-aware modeling, and hybrid learning, a new, more adaptable, and clinically interpretable tool for predicting nutritional risk has been developed, surpassing traditional composite indices.
Authors - Mehak Mukesh Agrawal, Saumya Kumari, Gaganam H V S M Soma Sai, Ankit A. Bhurane Abstract - Most existing artificial intelligence (AI) based assistants are cloud-dependent and require constant internet connectivity. User data is sent to external servers for processing. While this data is often encrypted, it is prone to risks such as cloud security threats. Additionally, users need to be cautious not to share sensitive information. To overcome the aforementioned privacy and internet availability concerns, this paper proposes a completely offline, on-device, cross-device, and open-source system to ensure complete data privacy. The proposed system was tested with several datasets, including AI2 Reasoning Challenge, SQuAD 1.1, CoNLL 2003, GSM8K and StrategyQA to evaluate the closed-form question answering (QA), contextual understanding, named entity recognition, mathematical reasoning and truthfulness, respectively, and with five on-device large language models (LLMs), including Gemma3 1B, SmolLM 1.7B, Qwen2 1.5B, TinyLlama-1.1B, and Phi-2. The system achieved the highest score for closed-form accuracy of 1.0. Its performance on reasoning ranged from 0.01 to 0.23. Truthfulness scores ranged from 0.24 to 0.59. High F1 scores for named entity recognition ranged from 0.74 to 0.79, and contextual understanding scores ranged from 0.02 to 0.17 across the different LLMs. The average response time of the system on mobile and desktop devices was evaluated and observed to vary according to system capability and model size. The system allows users to choose between multiple wake words specific to the Indian context. The proposed system functions on limited RAM and in constrained resource environments.
Authors - Mahzuzah Afrin, Rajasree Das Chaiti, Gazi Tahsina Sharmin Jahin, M. M. Musharaf Hussain, Mohammad Shamsul Arefin Abstract - Reliable identification of pneumonia from chest radiographs plays a central role in supporting clinical decision-making and patient management. Although deep learning models have shown favourable results for automated diagnosis, most existing studies rely on fully supervised training and mainly evaluate performance using accuracy or ROC-AUC metrics. Such evaluations may fail to capture clinical decision reliability, particularly in imbalanced medical datasets. In this work, we examine the effectiveness of self-supervised learning (SSL) for chest X-ray pneumonia classification through a controlled empirical study. A contrastive pretraining strategy is used to learn image representations from unlabeled chest X-rays, followed by supervised linear evaluation. The SSL-pretrained model is compared with a fully supervised model trained from scratch under identical experimental conditions. Our experiments reveal that the supervised baseline attains a slightly higher ROC-AUC; however, this improvement comes at the cost of increased false positive predictions, leading to lower overall accuracy. In contrast, the SSL-pretrained model exhibits a distinct prediction pattern. It achieves higher accuracy and notably improved precision and F1-score, indicating more balanced and reliable predictions. Precision– recall analysis further demonstrates the advantage of SSL in reducing false positive decisions. In addition, Grad-CAM visualizations suggest that the SSL-pretrained model focuses on clinically relevant lung regions. From a clinical decision-making perspective, these results suggest that self-supervised learning offers tangible advantages for chest X-ray analysis when prediction reliability is prioritized. This distinction is especially relevant in clinical settings.
Authors - Prerna Agarwal, Pranav Shrivastava, Samya Ali, Sachit Dadwal, Shubh Om Yadav, Saquib Hussain, Kareena Tuli Abstract - Most existing artificial intelligence (AI) based assistants are cloud-dependent and require constant internet connectivity. User data is sent to external servers for processing. While this data is often encrypted, it is prone to risks such as cloud security threats. Additionally, users need to be cautious not to share sensitive information. To overcome the aforementioned privacy and internet availability concerns, this paper proposes a completely offline, on-device, cross-device, and opensource system to ensure complete data privacy. The proposed system was tested with several datasets, including AI2 Reasoning Challenge, SQuAD 1.1, CoNLL 2003, GSM8K and StrategyQA to evaluate the closed-form question answering (QA), contextual understanding, named entity recognition, mathematical reasoning and truthfulness, respectively, and with five on-device large language models (LLMs), including Gemma3 1B, SmolLM 1.7B, Qwen2 1.5B, TinyLlama-1.1B, and Phi-2. The system achieved the highest score for closed-form accuracy of 1.0. Its performance on reasoning ranged from 0.01 to 0.23. Truthfulness scores ranged from 0.24 to 0.59. High F1 scores for named entity recognition ranged from 0.74 to 0.79, and contextual understanding scores ranged from 0.02 to 0.17 across the different LLMs. The average response time of the system on mobile and desktop devices was evaluated and observed to vary according to system capability and model size. The system allows users to choose between multiple wake words specific to the Indian context. The proposed system functions on limited RAM and in constrained resource environments.
Authors - Roshani Tawale, Jayshri Todase, Manisha Bharati Abstract - Enforcement of helmet regulations and accurate vehicle identification remain essential components of intelligent traffic management systems. Conventional supervision approaches depend heavily on manual inspection, which is labor-intensive and unsuitable for continuous large-scale monitoring. This study presents an automated framework for helmet violation detection and number plate lo-calization using the YOLOv8 deep learning architecture [3]. The proposed system supports static image analysis, recorded video processing, and live-stream detection within a unified pipeline. Performance is assessed using precision, re-call, and mean Average Precision (mAP@50). Experimental findings demonstrate consistent detection reliability and validate the framework’s applicability for real-time traffic surveillance systems.
Authors - Gunjan Pareek, Rajiv Singh, Swati Nigam Abstract - This research examines the transfer learning deep learning models in multimodal human activity recognition based on wearable sensor data. Raw IMU signals are converted to Gramian Angular Field (GAF) images to improve the feature representation and tested on WISDM and PAMAP2 datasets of 18 activity classes. Five CNN models, namely VGG16, MobileNetV2, ResNet50, DenseNet121, and EfficientNetB0, are trained and evaluated in the same conditions and measured by classification accuracy, statistical significance, and computation efficiency. GAF representations are always better than raw signals. DenseNet121 and ResNet50 have 99% accuracy, VGG16 and MobileNetV2 perform competitively and EfficientNetB0 performs worse. Most of the differences in performance are statistically significant (p < 0.05).
Authors - Devang Rupesh Dalvi, Gaurav Suresh Malik, Abhishek Jairaj Kunder Abstract - Prompt engineering has emerged as an essential paradigm in leveraging desired behaviors from large language models (LLMs) without altering their parameters. Although the majority of the current literature has revolved around the introduction of novel prompt engineering strategies, there has been comparatively less emphasis on the contribution of the evaluation and optimization of prompts in concrete systems. In this paper, we offer a specialized review of prompt engineering from an evaluation/optimization centric viewpoint with a larger nod to conceptual developments and illumination rather than detailing the comparisons of approaches. Furthermore, we attempt to establish the concrete importance of prompt engineering via a real-life application, which resulted in improved performances in tasks through the process of prompt refinement and informal evaluations without the need to change the architecture and weights of the models. The paper will also introduce the deficiencies in prompt engineering in the realms of re-producibility, robustness, and the unavailability of standardized approaches in the aspect of concrete evaluations.
Authors - Reepu Abstract - This paper presents a hybrid diagnostic approach for an engine air-path benchmark characterised by environmental variability, limited labelled faults, and the need for reliable online decisions. The proposed method combines physics-guided residual features with datadriven temporal representation learning. Residuals derived from grey-box relations capture physically meaningful deviations, while a lightweight encoder extracts temporal patterns across operating regimes. To enhance robustness under changing ambient conditions, the model is explicitly conditioned on measured environmental variables and trained to favour stable representations across sessions. An open-set decision policy with calibrated rejection is incorporated to reduce misclassification when encountering unseen fault magnitudes or insufficient evidence. The method is evaluated under the official benchmark protocol using online processing constraints and standard metrics, including false alarm rate, detection rate, isolation rate, detection delay, and computational cost. Results show improved reliability compared to competitive baselines, with lower false alarms, higher detection and isolation performance, and stable behaviour across sessions. The approach remains computationally efficient and suitable for real-time deployment in practical diagnostic pipelines.
Authors - Zala Bhargavi Harshadbhai, Priyank D. Doshi Abstract - Brain tumor classification using MRI is very important for early diagnosis. While convolutional neural networks (CNNs) showed strong performance in medical image analysis, but transformer-based architectures have recently gained popularity because of their ability to model long-range spatial dependencies through self-attention mechanisms. Our work lines up two such models - Vision Transformer and Swin Transformer to see how each handles tumor spot-ting in brain MRIs from the BRISC2025 collection. Same training setup applied to both keep things balanced and evaluated on the official test split for ensuring fairness. The official test set showed that both ViT (99.17 ± 0.26%) and Swin (99.27 ± 0.13%) have nearly identical predictive performance. Despite similar outcomes, their inner workings differ sharply behind the scenes. Swin Trans-former have approximately 40% and inference cost by nearly 50% compared to ViT while maintaining similar accuracy. The study provides insights into the performance and efficiency of trade-offs between global and hierarchical trans-former architectures in medical imaging applications.
Authors - Eduardo J. Lopez, Angelin Y. Alarcon, Marco Riofrio-Morales, Jose E. Naranjo Abstract - Higher education institutions often face challenges with fragmented student services and the reliance on manual workflows. Although Large Language Models (LLMs) present opportunities for service integration, their application in administrative contexts introduces specific risks, notably “transactional hallucinations” and the potential for unauthorized system actions. To explore potential mitigations for these challenges, this paper presents SUEMas as a proposed alternative: a configuration-driven, multi-agent ecosystem designed to help regulate LLM interactions within university domains. The proposed framework implements a Dynamic Tool Registry aimed at enforcing phase-aware tool exposure, alongside a Closed-World Action Gating mechanism intended to restrict sensitive operations to verified session candidates. Initial evaluations of this proposal indicate that SUEMas can support consistent policy enforcement, achieving high recall in RAG-based tasks under test conditions. Furthermore, the system maintained strong multi-turn coherence while keeping latency low, suggesting that structured security governance might practically coexist with conversational flexibility.
Authors - Surya Anugrah, Dwi Handarini, Eka Septariana Puspa, Windy Permata Suyono, Sabo Hermawan, Irima Rahmadani, Nazwa Febriyani Abstract - This paper presents the design, modelling, fabrication flow and analysis of multi-functional photonic crystal (PhC) nano-cavity sensors integrated with cantilever beams and diaphragms on a Silicon-On- Insulator (SOI) platform. The device architecture leverages defect-based two-dimensional PhC nano-cavities to obtain high quality (Q) factors and small mode volumes, while mechanically compliant structures transduce force and pressure into measurable optical resonance shifts. Biochemical and chemical detection is achieved via refractive-index based transduction and temperature sensing via thermo-optic effects. A machinelearning (ML)-assisted calibration and sensitivity enhancement framework is proposed to improve resolution and compensate for fabrication tolerances. Fi-nite-difference time-domain (FDTD) optical simulations and finite-element method (FEM) mechanical simulations validate device performance. Noise analysis, limit-of-detection (LOD) calculations, and comparison against state-of-the-art devices are provided. The architecture is CMOS-compatible and suitable for lab-on-chip photonic sensing applications.
Authors - Raina Thakkar Abstract - This work investigates the Evolutionary Matrix Factorization (EMF) model proposed in Evolving Matrix-Factorization-Based Collaborative Filtering Using Genetic Programming. The EMF model employs genetic programming to optimize the matrix product function used in traditional Matrix Factorization recommender systems. The primary objective of this project is to develop a GP-based matrix factorization model that outperforms EMF in prediction accuracy. To facilitate comparison, we first reproduce the EMF model’s results using standardized metrics. Subsequently, we design and implement a custom data structure for GP, along with the full pipeline for reproducible model execution. Finally, we analyze the performance of our proposed model and compare it against EMF, demonstrating its improvements in prediction precision.
Authors - Srikumar Nayak Abstract - Anti–money laundering (AML) monitoring is difficult because suspicious behavior is rarely a single abnormal transaction; it is usually a short sequence of linked transfers across many entities. Standard tabular models miss these links and often produce alerts that are hard to justify during review. To address this, we propose GraphAML-X, a practical pipeline that turns raw transaction logs into a knowledge graph and produces case-level evidence for analysts. The main issue we target is fragmented identity (the same actor appearing under noisy identifiers) and weak case explanations (high scores without clear paths or rule triggers). GraphAML-X first performs entity resolution to merge duplicate accounts and identifiers using rules plus a learned match score, so the graph represents real actors. It then learns temporal graph embeddings from the timeordered transaction network to capture multi-hop laundering patterns such as rapid circulation and hub–spoke behavior. Finally, it combines graph risk with rule-hybrid case reasoning: regulatory red-flag rules propose candidate alerts, and the graph model ranks them while emitting audit-ready evidence (top subgraph paths, key neighbors, and triggered rules) and alert-volume control via a calibrated threshold. Using the Micro-AmlSim dataset, GraphAML-X achieves an AUC-ROC of 0.982 and an AUC-PR of 0.741, improving the strongest baseline GNN by +0.034 AUC-PR. At a fixed alert rate of 1% of transactions, it attains 0.686 recall while reducing false alerts by 18.9% compared to rule-only screening. These results show that GraphAML-X can improve detection while producing reviewable and policy-aligned AML cases.
Authors - Nguyen Ngoc Dung, Doan Van Thang Abstract - Memory encryption is a key security requirement for modern computing systems, addressing vulnerabilities between CPUs and main memory. Traditional storage encryption is insufficient for protecting volatile data in RAM, which remains exposed to bus sniffing, cold boot attacks, and side-channel exploits. This paper therefore systematically reviews memory encryption techniques focused on hardware-based solutions like Intel Total Memory Encryption (TME), Multi-Key TME, and AMD Secure Memory Encryption, which provide robust protection while minimising performance overhead. The paper also explores integrity protection via Merkle trees and side-channel countermeasures against Differential Power Analysis and Simple Power Analysis attacks. Additionally, granular memory encryption methods for multi-tenant environments are discussed, highlighting their role in isolating sensitive data across security domains. By examining security guarantees and performance trade-offs, we emphasise the necessity of efficient memory encryption to safeguard against evolving threats targeting the CPU-memory interface, providing hardware engineers a foundation for ensuring data confidentiality and integrity.
Authors - Chaitrasree S, Srinidhi G A Abstract - The Research will shows how app-based omnichannel ICT-enabled marketing shapes customer engagement and service loyalty in the culinary hospitality industry within an urban emerging-market context. Drawing on an ICT-centered and service-systems perspective, the research conceptualizes mobile applications as integration hubs that coordinate multiple service modes—delivery, dine-in, takeaway, and drive-thru—into a unified customer experience. The study approach was using a quantitative design with a cross-sectional survey of 150 chain-restaurant mobile app users in Jakarta. Structural Equation Modeling (PLS-SEM) were used to analyze the data. The results shows that app-based omnichannel ICT-enabled marketing has a positive and significant effect on customer engagement and service loyalty. Customer engagement also demonstrates a positive effect on service loyalty and mediates the relationship between omnichannel ICT-enabled marketing and loyalty, partially. These findings suggest that perceived ICT integration quality, reflected through consistency, seamlessness, and coordination across service modes, plays a pivotal role in translating technology-enabled service design into relational outcomes. This study contributes to the ICT literature specially in hospitality by extending omnichannel research beyond a marketing-centric perspective and highlighting the strategic role of integrated mobile app infrastructures in high-frequency culinary service environments. Based on a managerial standpoint, the results emphasize the importance of treating mobile applications as core service platforms that support engagement-driven loyalty in chain-restaurant operations.
Authors - Pei-Yi Hao Abstract - Digital transformation is reshaping education systems worldwide, with significant implications for rural and underserved regions. In India, initiatives aligned with the National Education Policy (2020) have promoted online learning platforms, digital classrooms, and technology-enabled teacher training to enhance access, equity, and quality in education. However, rural schools continue to face structural challenges such as limited infrastructure, digital divides, and inadequate teacher preparedness, which influence the effectiveness of digital integration.This conceptual paper examines the transformation of rural education in India from traditional teacher-centred classrooms to digitally enabled learning ecosystems. Grounded in Constructivist Learning Theory, the Technology Acceptance Model (TAM), Diffusion of Innovation Theory, and the TPACK framework, the study proposes an integrated conceptual model linking digital infrastructure, pedagogical innovation, and teacher competence to improved access, engagement, and learning outcomes. The paper argues that digital transformation represents a systemic pedagogical and institutional reform rather than a mere technological shift. Its success depends on inclusive infrastructure development, sustained teacher capacity building, and context-sensitive implementation in rural settings.
Authors - Aryan Dholi and Malathi P Abstract - Smart contract vulnerabilities have continuously been a major source of threat to blockchain security, with billions of dollars being accounted for losses every year. This review paper delves into over 15 different detection methods utilizing static analysis, dynamic monitoring, machine learning, and hybrid approaches. Sustainability metrics such as the Green Detection Score and the Energy Efficiency Index are first proposed by us to gauge the environmental cost in relation to the accuracy. From our review of 28 papers, we conducted research studies to points out a significant discovery: transformer models reach 0.91 F1-score but use 1,475× more energy than static analyzers. Hybrid approaches present a viable compromise with 0.89 F1-score and 62% energy savings. We thus offer deployment advice, sustainable architecture templates, and a 2030 roadmap for green blockchain security.
Authors - Sohesh Gandhe, Aditya Shirwalkar, Prathmesh Jomde, Shreyash Dhavale, Anil M. Bhadgale Abstract - Automatically generating Unified Modeling Language (UML) diagrams from unstructured software requirements remains one of the persistent challenges in modern software engineering. This paper introduces an intelligent project management framework that transforms client-provided requirement documents into accurate UML diagrams with minimal human intervention. Our system leverages Optical Character Recognition (OCR) to extract text from various document formats, employs a fine-tuned model for intelligent prompt synthesis, and utilizes a fine-tuned CodeLLaMA 7B model trained on prompt-to-MermaidJS code mappings. The generated diagrams—including sequence diagrams, flow charts, and Gantt charts—are rendered in real time through an integrated Mermaid Live Editor, providing immediate visual feedback within the project management interface. The experimental evaluation demonstrates substantial improvements in automation efficiency, reduced manual modeling effort, and improved consistency in UML generation. Our approach bridges the gap between natural language requirements and formal system design artifacts, offering a practical solution for automated software documentation and project planning at scale.
Authors - Trupti Shripad Tagare, K.L.Sudha, Nagendra Kumble, Sanketh T S, Belliappa M Abstract - The current developments in the design of aircraft have remarkably improved their overall performance. The parameter Rate of Climb (RoC) plays a very vital role in planning the trajectory of the flight, optimum fuel utilization and flight safety and is of significance for both technicians and pilots. The factors affecting RoC are weight of the aircraft, its design, and the atmospheric state. In this study, the estimation of real time RoC using predictive AI and deep learning is presented. The model is trained on real time flight data collected from Radome Technologies, Bengaluru. The parameters like drag, thrust, weight, climb angle and airspeed are provided as inputs to the model after preprocessing. The results show that the system achieves an enhanced predication accuracy with R2 of 0.9396, Root Mean Squared Error (RMSE) of 861.69 feet per minute and Mean Absolute Error (MAE) of 659 feet per minute. The efficiency and capability of several aircrafts can be measured and analysed using the rate of climb. The work greatly finds its important role in ground-based flight planning tools and in onboard decision-support systems. The fuel requirements for the aircraft can be reduced significantly by setting an optimum ROC. This will result in reduced costs and sustainable solutions. This work contributes to overall performance and safety, as the aircraft will maintain the optimal ascent using AI driven climb profile optimization.
Authors - Glenn Erick Zambrano Estupinan, Maria Genoveva Moreira Santos Abstract - Virtual Reality (VR) has gradually become an increasingly relevant technological tool in higher education, not only because of its innovative nature, but also due to its ability to create immersive experiences capable of capturing students’ attention and generating meaningful emotional responses. In this con-text, the aim of this study was to analyze the immediate emotional impact produced by a virtual reality experience on university students, using data mining techniques to identify patterns within the collected responses. The research followed a quantitative approach, with a descriptive–correlational and cross-sectional design, and included the participation of 305 students from the Faculty of Computer Sciences at the Technical University of Manabí. Each participant engaged in an immersive experience lasting approximately five minutes using the Meta Quest 2 device. After the activity, a Likert-type questionnaire, with a scale ranging from 1 to 5, was applied in order to evaluate variables such as perceived immersion, realism of the environment, level of attention, emotional interaction, empathy, and enthusiasm before and after the experience. The collected data were subsequently analyzed through exploratory and correlational analysis, as well as through several data mining techniques, including Principal Component Analysis (PCA), k-means clustering, and Apriori association rules. Overall, the results suggest that the virtual reality experience generated predominantly positive emotional responses among the students.
Authors - N. Revathy, V. Latha Sivasankari, Nikileshwar V, Surendhiran G, Abijith M, Sheik Mohamed S Abstract - Enterprise networks face escalating cyber threats as cloud, IoT, and remote work adoption expand attack surfaces. Traditional signature-based detection and manual response suffer average breach detection intervals of 287 days, failing to scale against rising alert volumes [9]. CyberSentinel addresses this through an autonomous pipeline processing Windows Security Event Logs: Isolation Forest anomaly detection on engineered behavioral features, large language model (LLM) threat explanations via local Ollama inference, and automated remediation including account deactivation, process termination, and firewall adjustment. A Flask web dashboard provides real-time threat visualization. Evaluation across 72 hours on a controlled Windows 10 Enterprise testbed with 28 injected anomalies confirms an F1-score of 0.78, 84.2% remediation success, and mean end-to-end latency of 24.7 seconds. The modular Python architecture enables fully autonomous operation on standard Windows hosts without dedicated SOC infrastructure.
Authors - Palgulla Rangaswami Reddy, Palla Maheswara Rao, Gogineni Hari Prasad, Guthikonda Akhila, T.V. Sai Krishna Abstract - The implementation and design of a covert communication channel that embeds hidden information within TCP/IP packet headers rather than within the actual payload of the packets is presented as a project. This is different than traditional embedding methods (steganography), which typically embed data into multime dia files, in that steganography in this case utilizes header fields that are not cur rently in use or can be modified so that TCP/IP packets can transmit hidden data. The fields that are used to transmit hidden data are the IP Identification Field, TCP Sequence Number, TCP Acknowledgment Number, and TCP Window Size. The sender module encodes and generates packets, and the receiver retrieves packets, extracts encoded bits, and reassembles data from the encoded bits found in the packets. The integrity of the data is verified using a checksum (SHA-256) and packet loss is reported. The lack of a payload will further enhance the stealth various data transmission methods may enjoy as it will circumvent conventional intrusion detection techniques (which primarily examine the payload data within packets). This project will demonstrate the ability to use this or similar covert communication channels to implement covert communication systems. In addi tion, covert communication channels can be used for different types of files and demonstrate the security and educational value of covert channel research in net work security.
Authors - Busrat Jahan, Kevin Osei-Onomah, Mansi Bhavsar, Hermela Dessie, Apu Chandra Bhowmik Abstract - In the global health sector, Diabetes is a major concern which needs accurate and effective models for early prediction. This work is quantitative re-search work. The dataset was collected from CDC Diabetes Health Indicators, and we used Light Gradient Boosting Machine (LightGBM) model for predicting diabetes. Since this research work is binary classification-based work, in our data preprocessing stage, we used Synthetic Minority Oversampling Technique (SMOTE) for controlling class imbalance and for feature selection we used Chi-square test to improve the model performance. The proposed LightGBM model showed its ability to recognize complex correlation between diabetes-related health indicators with the training accuracy of 92% and a ROC-AUC score of 0.97 on the test dataset. Overall, the findings highlight that predictive accuracy is significantly improved after applying both imbalance data controlling and most correlated feature selection techniques.
Authors - Ruby Bisht, Amit Kumar Uniyal Abstract - Digital transformation is reshaping education systems worldwide, with significant implications for rural and underserved regions. In India, initiatives aligned with the National Education Policy (2020) have promoted online learning platforms, digital classrooms, and technology-enabled teacher training to enhance access, equity, and quality in education. However, rural schools continue to face structural challenges such as limited infrastructure, digital divides, and inadequate teacher preparedness, which influence the effectiveness of digital integration.This conceptual paper examines the transformation of rural education in India from traditional teacher-centred classrooms to digitally enabled learning ecosystems. Grounded in Constructivist Learning Theory, the Technology Acceptance Model (TAM), Diffusion of Innovation Theory, and the TPACK framework, the study proposes an integrated conceptual model linking digital infrastructure, pedagogical innovation, and teacher competence to improved access, engagement, and learning outcomes. The paper argues that digital transformation represents a systemic pedagogical and institutional reform rather than a mere technological shift. Its success depends on inclusive infrastructure development, sustained teacher capacity building, and context-sensitive implementation in rural settings.
Authors - Ruby Bisht, Amit Kumar Uniyal Abstract - The rapid growth of Information and Communication Technologies (ICT) has profoundly altered educational systems by redefining teaching practices, institutional processes, and professional expectations. Within the broader context of sustainable development and smart education, ICT has emerged as an important facilitator of efficiency, accessibility, and innovation. This paper presents a conceptual analysis of how ICT can contribute to sustainable development through its influence on teachers’ work–life balance and job satisfaction in ICT-enabled learning environments. While ICT adoption has the potential to enhance instructional flexibility, autonomy, and efficiency, excessive digital connectivity, intensified workload, and blurred work–life boundaries may adversely affect teachers’ well-being. The paper identifies work life balance as a key mediating factor linking ICT use to job satisfaction and long term professional sustainability. Furthermore, the study situates teachers’ well being within the broader framework of sustainable development, emphasizing its relevance to Sustainable Development Goals such as SDG 3 (Good Health and Well-Being), SDG 4 (Quality Education), and SDG 8 (Decent Work and Economic Growth). The analysis underscores the need for human-centred, policy-driven, and ethically oriented ICT integration strategies that prioritize teacher well-being alongside technological advancement. The paper contributes to the discourse on sustainable and intelligent education systems by highlighting that the long-term effectiveness of ICT-driven educational transformation depends on balanced digital practices that support teachers’ work–life balance and job satisfaction.
Authors - Vasumathi R, Kalpana Y Abstract - Graduate communication competency gaps represent a critical barrier to the workforce readiness in the Indian higher education, yet existing assessment infrastructure measures a credential completion rather than the skill trajectories over time. This paper presents a LSTM-CDSF (Long Short-Term Memory Communication Demand and Skill Forecasting), a temporal deep learning based framework that predicts the future communication skill demand from the sequential monthly assessment records and also quantifies per skill gaps against the industry benchmarks. The framework operates on a synthetic dataset of 240 students observed over a period of 18 months calibrated to published NASSCOM and India Skills Report statistics. LSTM-CDSF achieves a Mean Absolute Error of 1.468, RMSE of 1.837, MAPE of 2.61%, and R² of 0.9249 on a held-out test set of 480 sequences, demonstrating consistent performance improvements over the Linear Regression, ARIMA, and a naïve baseline across all the evaluated metrics. Gap analysis reveals that the Digital Communication (gap: 25.4 points) and the Intercultural Communication (gap: 23.5 points) requires the most urgent curriculum interventions.
Authors - M. Kamaraju, B. Rajasekhar, V.N.V.R. Karthik, V.N.L. Mahima, Y.H.V. Satya Narayana, R. Pujitha Abstract - This manuscript presents a dedicated Application-Specific Integrated Circuit (ASIC) architecture purpose-designed for computing eigenvalues of two-dimensional square matrices in resource-constrained embedded systems. The fundamental challenge motivating this work stems from the computational intensity of eigenvalue decomposition in digital signal processing, robotics control systems, and embedded analytics, where conventional software implementations incur unacceptable latency and power overhead. The proposed solution lever-ages the closed-form algebraic solution inherent to 2×2 matrices, eliminating iterative numerical methods and their associated performance penalties. Our design employs a direct characteristic-equation approach mapped onto dedicated arithmetic circuits including parallel multipliers, adders, and a specialized square-root computation unit implementing the non-restoring digit-re-currence algorithm. The Verilog RTL synthesized using Cadence Genus in a 180 nm CMOS standard cell library yields a compact silicon footprint of 1,703 square micrometers utilizing 196 standard cells, with measured power dissipation of 0.5738 milliwatts at 100-megahertz operation. Timing closure is achieved with positive slack under worst-case process-voltage-temperature conditions. The high dynamic-to-static power ratio of 98.66 percent to 1.34 percent indicates activity-dominated power behavior, confirming successful implementation of low-leakage design principles. These metrics demonstrate that the proposed architecture constitutes an effective hardware acceleration solution for eigenvalue computation in battery-powered and always-on applications where conventional approaches prove infeasible.
Authors - Arpita Choudhury, Pinki Roy, Sivaji Bandyopadhyay Abstract - Modern agriculture faces several challenges such as uncertain crop selection, inefficient fertilizer usage, and changing soil conditions. To address these issues, this research proposes an integrated AI/MLbased system that combines crop recommendation, fertilizer recommendation, and time-series prediction. The system utilizes IoT sensor data, including soil nutrients (N, P, K) and environmental parameters such as temperature and humidity, to support data-driven decision-making. Random Forest models are used for crop and fertilizer recommendation, while an LSTM-based model is applied for predicting future soil conditions using time-series data. Basic preprocessing techniques are used to ensure data quality, and results are presented through a simple and user-friendly dashboard. Experimental results demonstrate strong performance, with 96% accuracy for crop recommendation and reliable prediction trends for time-series forecasting. Designed for offline use with minimal computational requirements, the system is suitable for deployment in rural and resource-constrained environments, highlighting the practical role of AI/ML in modern precision agriculture.
Authors - Sharayu Mirasdar, Mangesh Bedekar Abstract - Ayurveda, India's ancient system of medicine, is full of inter-connected knowledge about diseases, their symptoms, herb and formulation (compounds). However, texts such as Charaka Samhita are mostly unstructured and cannot be readily analysed computationally. This work presents AyurKOSH which is a machine-readable, high-quality Ayurvedic dataset that is designed as a Knowledge Graph (KG) in order to support Artificial Intelligence driven research. The dataset is represented as subject–predicate–object triplets, which enables semantic interoperability, graph traversal, and multi-hop inferencing across entities. The dataset is designed by following schema-driven ontology which standardizes relationships between various nodes such as diseases, symptoms, pharmacological attributes, and compound formulations. DB Schema ensures consistency and computational tractability. AyurKOSH has the structured data of diseases and related symptoms, drug preparations, herbs and the detailed pharmacological properties are Rasa, Guna, Virya, Vipaka, Karma. The graph structure shows real-world biomedical network characteristics such as high sparsity and low average degree, which makes it suitable for embedding-based learning, graph neural networks, and explainable AI frameworks. Moreover, there is botanical metadata and herb-substitution relationships added for the prediction of synergy and repurposing of drugs. The dataset facilitates applications in biomedical NLP, and automated reasoning systems and clinical decision assistance, and pedagogy in integrative medicine. AyurKOSH became available for academic and non-commercial research under CC BY-NC-SA 4.0 license.
Authors - Liz Huancapaza Hilasaca, Maria Cristina Ferreira de Oliveira, Rosane Minghim Abstract - The abstract of the study emphasizes the thorough discussion of cussword usage in Hollywood films over a period of thirty five years, from 1990 to 2025, particularly in genres such as Action, Comedies, and Romances. On the basis of a carefully selected dataset of cusswords from Kaggle along with a considerable subtitle file dataset (.srt), the results have been obtained to determine whether profanity has been used over the years with an appropriate level of intensity in the respective genres of films.
Authors - Lanja Azeez Abdalqadir, Aram Mahmood Ahmed, Rozha Kamal Ahmed, Dirk Draheim Abstract - This study explores advanced metaheuristic optimization algorithms to improve smart home energy management under constrained electricity supply, aiming to reduce costs and enhance energy efficiency. It addresses challenges such as dynamic pricing and unstable supply, particularly common in developing regions. Five algorithms—Particle Swarm Optimization (PSO), Bat Algorithm (BAT), Fitness Dependent Optimization (FDO), Marine Predators Algorithm (MPA), and Single Candidate Optimization (SCO)—are evaluated, along with enhanced versions of MPA, FDO, and SCO incorporating Lévy flight and Oppo-sition-Based Learning (OBL). OBL improves exploration and exploitation in FDO and MPA, while Lévy flight enhances SCO’s ability to escape local optima. A novel cyclic rebounding technique is introduced to manage appliance sched-ules exceeding 24-hour limits. Tested across three scheduling scenarios, results show that MPA-OBL consistently achieves the lowest energy costs. Overall, the proposed enhancements significantly improve energy optimization in supply-constrained environments.
Authors - Purva Trivedi, Arun Parakh, Shurbhit Surage Abstract - Awareness regarding consumer sentiments will benefit a business en tity and/or a company in making their marketing strategies more effective and engaging in the current digital marketing context. In traditional marketing sce narios, since there is a lack of actual emotional aspect in expressing views in real time contexts, it has always been challenging for a business to perform a signifi cant adjustment in their marketing campaigns and achieve a greater success rate. The proposed idea focuses on AI and ML-based approaches for sentiment analy sis in digital marketing. The framework is made up of seven core steps: data collection, preprocessing and data cleaning, sentiment analysis models, feature extraction and model train ing, sentiment classification and analysis, insights and decision-making, and ap plication in digital marketing. From social media to e-commerce reviews to online discussions, consumer sentiment data comes from many digital sources. The text for analysis is standardized, and noise is cleaned in data prepara tion. Then, apart from other artificial intelligence-based sentiment classification models, sentiments are classified as positive, negative, or neutral using lexicon based, machine learning, and deep learning approaches. The learned knowledge enables businesses to react dynamically to consumer sentiment, target advertise ments, and adjust marketing strategies. Businesses will be able to conduct more profitable promotions, communicate with customers better, and monitor real-time sentiment through this AI-driven sentiment analysis platform. The paper emphasizes the benefit of incorporating artificial intelligence in decision-making within digital marketing, even in ad dressing issues like ambiguous sentiment expression management and multi-lan guage data. This paper provides a strategic way towards maximum customer in teraction and brand loyalty and also emphasizes the need for sentiment analysis that is sustained by available data in modern digital marketing.
Authors - Soumyadeep Basak, Shubham Sahu, Sankur Kundu, Ankita Ray Chowdhury Abstract - Hyperspectral image (HSI) classification requires effective modeling of high-dimensional spectral signatures and fine-grained spa tial structures while maintaining computational efficiency for real-world deployment. Although recent Transformer- and state-space-based ap proaches enhance long-range dependency modeling, they often introduce substantial architectural complexity and computational overhead. To ad dress these challenges, we propose MF-HSINet, a lightweight dual branch framework that enables adaptive spectral–spatial fusion via se lective state-space modeling. The spectral branch captures inter-band de pendencies, the spatial branch extracts local structural patterns, and the proposed Mamba-Enhanced Attention Fusion (MAF) module integrates these complementary representations through selective state updates, cross-attention, and adaptive gating to achieve pixel-wise feature balanc ing. This design preserves discriminative local details while strengthen ing global contextual modeling with reduced parameter complexity. Ex tensive experiments on nine benchmark hyperspectral datasets demon strate that MF-HSINet achieves competitive and consistent performance in terms of Overall Accuracy, Average Accuracy, and Kappa coefficient, while offering improved efficiency and inference speed, making it suitable for practical and resource-constrained HSI applications.
Authors - N. Revathy, Tamilmani M, Naveena P, Mariya Nisha S, Mega varshini V, Karthik B Abstract - Virtual Learning Environments (VLEs) are commonly evaluated through expert-driven frameworks that lack reproducibility and objective prioritization of defining features. This study proposes a data-driven framework integrating a Systematic Literature Review (SLR) and the iKeyCriteria method to identify and logically classify core VLE characteristics. A corpus of peer-re-viewed studies was analyzed and divided into VLE-focused (P) and contrastive non-VLE (Q) contexts. Criteria extraction and validation were conducted using tfidf (Term Frequency-Inverse Document Frequency) weighting and Boolean logical matrices to determine necessary and sufficient conditions. Results indicate that structured delivery of learning materials (91.5% in P vs. 12.7% in Q) and shared collaborative workspaces (82.1% vs. 18.2%) function as sufficient but not necessary discriminators of VLEs. In contrast, self-assessment and summative assessment appear frequently across both contexts and are therefore non-distinctive. The proposed framework provides a reproducible and bias-reduced mechanism for distinguishing defining VLE features, bridging systematic review methodologies with logical condition analysis. These findings support evidence-based prioritization in the design and evaluation of digital learning systems and contribute to advancing objective classification approaches in educational technology research.
Authors - Tegawende Brigitte KIENTEGA, Sadouanouan MALO Abstract - Navigation of mobile robots using GPS is widely available but use of GPS is sometimes either costly, not suitable for security reason, not available in indoor environments, or underground operational fields. This work provides a greedy method of path planning for a mobile robot from a starting point to the given destination point in a GPS-denied field where a set of access points (AP) are deployed randomly. Using these APs, the robot is able to calculate its current position at any moment as well as it chooses the next position to move further towards the destination. An efficient algorithm is designed to guide the robot to reach to its destination successfully taking into account that all the holes are convex, if exists within the field of interest. An analysis of the deployment strategy of the APs is provided in order to guarantee the successful path planning by the robot without backtracking any sub path.
Authors - Ambati Abhinavya, Jarupula Sunitha, Raparthy Navya, Rama Valupadasu Abstract - Internet of Things (IoT) devices are growing in domains because of their reliability and efficiency in monitoring, real-time detection and automated support. However, these IoT systems have also introduced security challenges. These devices are vulnerable to cyber threats, where attackers exploit weak points in the system to steal sensitive information. One of the attacks is the Distributed Denial of Service (DDoS) attack, which disrupts services by overwhelming systems and making them inaccessible to legitimate users. IoT devices are resource-constrained, so reducing feature dimensionality is essential to lower computational overhead and complexity. IoT devices generate data for detecting cyber-attacks, but sharing such data across organizations raises privacy concerns. To address these challenges, the proposed approach is designed in two phases. In the first phase, a hybrid feature selection technique using mutual information, permutation feature importance, and Greedy wrapper-based feature selection with cross-validation is applied to extract relevant features. In the second phase, Federated Learning (FL) is applied to train the model without sharing raw data among clients. Within the FL framework, Random Forest (RF) algorithm is utilized for training due to its robustness and classification capability. The proposed model is evaluated under two data distribution scenarios: mildly non-IID and strongly non-IID conditions. Experimental results demonstrate that the model achieved an accuracy of 99.69% in a mildly non-IID scenario and 98.36% under strongly non-IID conditions, highlighting the effectiveness and reliability of the proposed framework for secure IoT-based DDoS attack detection.
Authors - Kalidasu Lochani Krishna Priya, Nupur Ajit Kale, Apeksha Pandurang Mujumale, Anagha Vijaysinha Rajput Abstract - The large online data consist of duplication and plagiarized contents. Due to Artificial Intelligence, data generation has become very easy. But, it may also lack an ethical data generation process. Hence, there is a need of validating plagiarism free data for authentic usage. In this research work, authors focus on word-level plagiarism detection methods in Natural Language Processing. The proposed method uses a comparative analysis of cosine similarity, Euclidean distance and Manhattan distance methods for word-level plagiarism detection for different n-gram sizes. The inculcation of n-gram size improved the accuracy compared to unigram based methods. The experimental results of the cosine similarity method outperform Euclidean and Manhattan distance methods by achieving an average accuracy range of 88 % to 92 % and 75 % to 80 % for direct plagiarism and lightly paraphrased text respectively. The future work is to identify reused images and visual contents.
Authors - Chinmayee Padhy, Himansu Mohan Padhy, Pranati Mishra, Nabin Kumar Nag Abstract - Establishing an institution's excellence requires measuring their innovation and research accomplishments. Tracking, verifying, and evaluating innovation and research output in an efficient manner is currently constrained by a lack of efficient reporting systems and disorganized methods of obtaining the necessary data. The creation of InnovateHub, a web-based, secure, scalable, and cloud-based platform that provides a centralized system for analysing, managing, and visualizing research and innovation throughout the world's education sector. The InnovateHub provides a central location where a single point of access can be used to collect and process all types of innovation and research information via an effective system; an interactive dashboard and analytical visualisation allows users easy access to relevant information. InnovateHub provides a role and permissions-based access control mechanism to preserve the data privacy and accountability of Administrators, Faculty, and Students. InnovateHub also supports Multi Factor Authentication (MFA) using JSON Web Tokens (JWT) for multiple layers of security and verification of user identity as well as One Time Passcode (OTP) confirmed through email, and uses cryptographic hashing to provide a form of security for storing documents and provides a biometric face-based verification system (i.e., facial recognition) to authenticate a user during critical submission phases. Automated certificate generation and contribution recognition mechanisms at InnovateHub provide additional visibility into, and motivation for, users' contributions to the platform. Utilizing the MERN Stack and AWS for Hosting of MERN Stack: Utilizing the MERN Stack (MongoDB, Express, React, Node.js) & AWS to Host a MERN Stack Application Innovative Hosting Solutions by AWS Include Amazon EC2 Instances to Host Both the Application Back End as Well as Application Front End Services and Amazon S3 for Secure and Scalable Storage of Research Document & Certificate Generation. Experimental Deployment Indicates Reliable Operation, High Availability and Secure Handling of Data During Real Time Utilization within the Loss Prevention Environment. Innovate Hub Provides Real Time Analytics, Secure Verification & Cloud Scaleability for Institutional Research Governance and the Development of a Data Driven Platform of Continuous Innovation and Growth through the Development of a Data Driven Innovation Platform.
Authors - Pranav Rao, Pranav S Acharya, Rishika Nayana Naarayan, Shreya M Hegde, Pavan A C Abstract - The rapid expansion of cloud computing, Internet of Things (IoT), 5G networks, and distributed enterprise infrastructures has significantly in creased the complexity and attack surface of modern networks. Traditional net work security mechanisms—primarily based on static rules and signature-based detection—are increasingly ineffective against advanced persistent threats (APTs), zero-day exploits, polymorphic malware, and encrypted attack chan nels. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as transformative technologies capable of enabling adaptive, predictive, and au tonomous cybersecurity systems. This paper presents a comprehensive technical framework for AI-driven network security. We propose a hybrid architecture in tegrating supervised classification, unsupervised anomaly detection, and deep learning-based behavioral modeling. Mathematical formulations for intrusion detection, anomaly detection, and adversarial robustness are provided. The framework is evaluated using benchmark intrusion detection datasets, and per formance is analyzed using standard metrics including accuracy, precision, re call, F1-score, and ROC-AUC. Results demonstrate that AI-driven models sig nificantly outperform traditional signature-based approaches in detecting zero day and evasive attacks. The paper concludes by discussing adversarial machine learning risks and future directions toward autonomous and self-healing net work security ecosystems.
Authors - Rosa Cristina Pesantez, Estevan Gomez-Torres, Cesar Adrian Guayasamin Abstract - The vast implementation of cloud computing has uplifted the modern IT practices by improving scalability, flexibility, and budget efficiency. In contrast, there has been an increase in energy consumption, which results in carbon emissions. This happens because of overusage, overconsumption, overprovisioning, unused capacity, and inefficient data center management. These days, data centers act as the sole contributor to global greenhouse gas (GHG) emissions; therefore, sustainable cloud operations are essential in addressing this challenge. GreenOps, or green operations, defines the cloud deployment and operational practices that take place but also considers the environmental impact; it depicts energy-efficient infrastructure design, optimized resource usage, virtualization, and the integration of renewable energy resources. This survey presents a summary of green cloud computing, including the current trends, challenges, energy-aware scheduling algorithms, and optimization techniques for obtaining energy-efficient cloud deployment.
Authors - Govind Sambare, Sarika Deokate, Saurabh Dhakite, Sahil Ambokar, Gargi Barve Abstract - Static perimeter-based security architectures are now inef fective in the current threat scenario. The ability of attackers to obtain legitimate credentials and the presence of zero-day exploits often cause real-time breaches of the network perimeter. An area of concern is the real-time monitoring of these systems. In the current scenario, security monitoring is performed in a segregated manner, where network analysts analyze time-stamped network logs and identity analysts analyze time stamped login attempts, without cross-referencing in real time between these two domains. The proposed solution is a fusion platform capable of ingestion of raw network transport data and real-time human element monitoring data. This is achieved through the integration of two dif ferent threat detection mechanisms using a FastAPI backend. The first threat detection system will be the Network Threat Detector (NTD), im plemented in Python and using the Scapy library to parse deep packet data in real time for flow analysis. The second threat detection system will be a JavaScript tracker designed for monitoring digital behavioral indicators and calculating real-time metrics such as mouse velocities, ac celerations, kinematic jerk, and typing speeds. Real-time monitoring will be achieved through a machine learning framework with three different modules for inferring user intent using the Random Forest algorithm, detecting anomalous statistical patterns using the Isolation Forest algo rithm, and detecting malicious plaintext syntax using Logistic Regres sion. The system has been tested in a lab scenario and has been able to classify user session states into four different states: Engaged, Con fused, Frustrated and Suspicious with accuracy exceeding 95%. These digital behavioral indicators will be fed into the Network Transport Data (NTD), allowing the computation of a real-time risk score.
Authors - Duc Thinh Nguyen, Diem Huyen Nguyen Ngoc, Khoa Tran Thi-Minh Abstract - In the present-day context, presentations and computer-based interac tion play a crucial role in various domains, particularly in education and business. Traditionally, users have to rely on physical devices such as mouses, keyboards, or laser. Although these devices meet the basic requirements, they still reveal many limitations regarding mobility, continuity, and dependence on battery life. To address these limitations, hand gesture-based presentation control systems have emerged as a promising solution due to their intuitive, natural, and engaging interaction style. This paper proposes a touchless system that enables users to control common desktop operations as well as presentations in a natural manner using hand gestures captured via a standard webcam. The proposed system lev erages OpenCV for real-time video acquisition and preprocessing, while Medi aPipe framework is employed for hand tracking and landmark extraction. From the experiments, our system can process in real-time with the accuracy of approx imately 92%. As a result, users can seamlessly control slides, use virtual mouse operations, annotate presentation content, and engage with the audience in a more interactive and natural way without physical contact.
Authors - Deepali Lokare, Pankaj Chandre, Prashant Dhotre Abstract - The rapid expansion of digital services has significantly increased the collection and processing of personal data through online platforms such as e-commerce systems, social media applications, and digital payment services. To regulate the use of personal information, governments worldwide have introduced data protection regulations such as the General Data Protection Regulation (GDPR), the Digital Personal Data Protection Act (DPDPA), and the California Consumer Privacy Act (CCPA). Organizations publish privacy policies to inform users about their data practices; however, these policies are often lengthy, complex, and difficult for users to understand. Consequently, users frequently accept privacy policies without fully reviewing how their personal data is collected, processed, and shared. Recent research has explored automated approaches for privacy policy analysis using artificial intelligence techniques, including machine learning, natural language processing, and large language models. Retrieval-Augmented Generation (RAG) has further enhanced compliance evaluation by linking policy statements with relevant regulatory clauses. Despite these advancements, challenges remain, such as the lack of standardised datasets, limited explainability of AI decisions, dependence on prompt design, and insufficient validation with regulatory experts. This paper discusses future research directions in AI-driven privacy policy compliance analysis and highlights emerging opportunities for improving regulatory compliance assessment, user privacy protection, and transparent privacy governance in digital ecosystems.
Authors - Samiksha M, Sharanya G S, Shrina Anahosur, Surabhi K C, Surabhi Narayan Abstract - Multi-angle image synthesis is highly important when it comes to the generation of 3D scenes. But the current methods are either ex pensive in terms of computational costs or lack photorealism in their outputs. We propose a novel sketch and text based multiview image generation approach that solves the above-mentioned problems by mak ing use of multimodal diffusion models efficiently. Our pipeline utilises DreamShaper v8 for converting the input sketch and text into a pho torealistic 2D image and then passes this 2D image into a fine-tuned Zero123plus model for the final generation of consistent multiview im ages, showing a 43.69% improvement in the overall perceptual quality compared to baseline sketch-to-multiview models. Moreover, our pipeline shows flexibility in scalability by generating anywhere from 6 to 64 consis tent multiview images according to the requirements of the downstream tasks. We demonstrate the success of our pipeline through extensive ex periments conducted using voxel-based grid approaches and Neural Ra diance Fields (NeRF). Our pipeline greatly reduces computational costs, all while maintaining photorealism in the outputs, confirming the poten tial of sketch and text based multimodal conditioning as an intuitive and efficient paradigm for controlled 3D content generation.
Authors - Balasubramanian M, Arasu Prabhu V S, Nalini Subramanian Abstract - Privilege Escalation is a major issue for securing Linux sys tems. When a user gains unauthorized root access he has the ability to access all system resources and manipulate them at will. In the past, Linux has used Static Access Control Policies and User Space Monitoring Tools to secure system access. However, these methods provide little in sight into how the kernel is modifying users credentials when permissions are changed. In this paper we propose a Kernel-Level solution to detect and prevent unauthorized privilege escalations. This detection/ preven tion occurs in real time via a Credential Transition Monitoring Mecha nism within the kernel layer, which prevents the elevation of privileges by illegal means. To create the functionality necessary for the above, a Linux Kernel Module (LKM) was created which utilizes kprobes to in tercept calls to the commit creds() function, which is used to update a processes credentials in the kernel. To evaluate if the privilege escalation being requested is legitimate or malicious, the LKM contains a Policy Based Evaluation Mechanism which evaluates each request to modify a process’s credentials. We tested our proposed solution using a con trolled test environment composed of a Virtual Machine (VM) running the Ubuntu Operating System. We ran two types of tests, first were Le gitimate Administrative Operations utilizing the ”sudo” utility, second were Simulated Privilege Escalation Attacks based upon SetUID Vul nerabilities. Our results show that the system effectively detected and blocked malicious privilege escalations, while providing minimal over head to normal system operation.
Authors - Noel Milliones, Vicente Pitogo, Mark Phil Pacot Abstract - The sensitive information in the healthcare industry along with the increasing phe nomenon of the use of intelligent health-related devices makes it a very difficult task to ensure the privacy of patients as well as carry out precise analysis. The centralized methodology in cur-rent machine learning models requires the exchange of raw information of patients from different healthcare institutions and health related devices to the centralized computer system through the network. However, due to the privacy issues and network traffic issues in this methodology, the proposal proposes the development of a privacy-preserving health analytics platform. Here in this proposed methodology, every healthcare center as well as health-related device has its own local machine learning model without transferring even a single piece of information outside. However, the models also employ disease-specific models including CNN heart diseases models of 95 percent accuracy, Gradient Boosting Classifier Diabetes models of 93 percent accuracy models, along with SVM models of liver diseases along with 96 percent GridSearch models. Each edge device carries out the data preprocessing for the local environment, as well as the processes of model training and the transmission of secure updates, in such a way that the sensitive patient data has never left the environment. The platform presented proves the idea that edge computing and collaborative learning can lead to scalable and secure healthcare analytics with high predictive performance.
Authors - Etambuyu Akufuna, Mayumbo Nyirenda, Ruth Wahila, Marjorie kabinga Makukula Abstract - As the primary cause of death worldwide, cardiovascular disease (CVD) necessitates accurate early detection methods. We provide a machine learning approach for predicting heart illness using clinical health data that is enabled by the Internet of Things. An SVM classifier that was trained using 14 Cleveland Heart The disease dataset separates patients at high risk from those in good health. Preprocessing, feature standardisation, and GridSearch Cross-Validation hyperparameter optimisation are all included in the workflow. The model outperforms a number of benchmark techniques in the literature with an accuracy of 93.33% and an AUC of 0.97. A scalable and comprehensible basis for IoT-based clinical decision assistance is confirmed by comparative outcomes.
Assistant Professor, Assistant Head Research- Department of Information Technology, Vishwakarma Institute of Technology Pune (Affiliated to Savitribai Phule Pune University, Maharashtra, India
Authors - Thapanapong Sararat, Ratanachote Thienmongkol, Ruethai Nimnoi, Wongpanya S. Nuankaew, Pratya Nuankaew Abstract - Ensuring equitable access to library information systems is crucial in the digital era, particularly for visually impaired users who rely on assistive technologies. WebOPACs are key gateways to resources, but many remain difficult to use despite referencing accessibility standards. This study proposes a Disability-Centered Framework to improve accessibility and Universal Design in Thailand’s WebOPACs. Developed through design-based research, it integrates international accessibility literature, Universal Design principles, WCAG 2.1, and evaluation insights. The framework emphasizes three components: disability-focused design principles, classification of visually impaired users and needs, and task-specific accessibility requirements across perception, navigation, interaction, and assistive-technology compatibility. It also incorporates Thai linguistic, cultural, and technological conditions to bridge global standards and local implementation. Findings indicate that meaningful accessibility requires iterative testing and ongoing refinement rather than a one-time compliance check. This framework guides libraries, developers, and policymakers in enhancing WebOPAC accessibility and supporting inclusive access for visually impaired users in Thailand.
Authors - Srishti Mathur, Hrishita Patra, Suhani Verma, Dhruva R Prasad, Shylaja S.S Abstract - The conventional way of preparing an advertisement is an elaborate process incorporating human subjectivity and human resources heavily dependent on creativity. Making advertisements by human effort can be regarded as an inefficient utilization of capital for small to medium-scale businesses due to increased cost of production. Even in current advancements in the development of generative techniques including LLM-based strategies for Advertisement Generation with Prompts, creating apt prompts for the depiction of products requires human expertise, making them less accessible. In order to overcome the challenges presented by the current models, we introduce a fast, affordable, and scalable platform for the automation of advertisement generation for products leveraging the capabilities of pre-trained diffusion models. The proposed system requires no training or fine-tuning since everything is performed at the inference level. The AI-aware system for designing assists in the identification of color schemes and attributes from the images of the products, whereas the descriptions and categories of the items help identify the theme and pattern recommendations for advertisements. These recommendations are channeled through a pre-trained Stable diffusion model guided by the LLaMA language model.
Authors - Sneha Visveswaran, Tanmay Praveen, Vidula Gurudutta, Yamini Sridhar, Chaithra T S5 Abstract - Arecanut crop management has traditionally depended on manual inspection for disease identification and harvest readiness assessment, a method that is both time-consuming and susceptible to human error. This study introduces an automated, image-based system designed to address two primary tasks: disease classification and ripeness assessment. The proposed pipeline initiates with data preparation, including resizing, normalization, and augmentation of arecanut images to enhance model robustness. A convolutional neural network architecture, incorporating additional feature extraction and optimization layers, is utilized to detect disease symptoms. A comparable deep-learning model is trained to classify ripeness stages based on visual characteristics. Model performance is evaluated using accuracy, precision, recall, and F1-score metrics to ensure reliability. The system is implemented via a user-friendly web interface, which allows real-time image uploads and immediate predictions, thereby facilitating practical application for farmers and agricultural stakeholders. This integrated solution provides a scalable and cost-effective approach to improving crop monitoring and supporting data-driven decision-making in arecanut cultivation.
Authors - Kate Lorreine M. Colot, Anjeneth G. Molina, Freely M. Wasawas, Ferlyn P. Calanda, Shem L. Gonzales, Richard B. Colasito Abstract - Despite the availability of digital voting systems, prior studies continue to identify gaps such as weak or voter authentication, security vulnerabilities and insufficient fraud prevention mechanisms. This paper presents BotoSafe, a secure and user-centered electronic voting (e-voting) platform developed for student government elections within educational institutions. The system implements multifactor authentication (MFA) using one-time password (OTP) verification and facial recognition with an anti-spoofing mechanism. To ensure the confidentiality and integrity of the voting process we employ the Advanced Encryption Standard in Galois/Counter Mode (AES-GCM). A developmental research design with a quantitative approach was used for the system development and evaluation. A mock election involving 84 students from Western Mindanao State University–Pagadian Campus was conducted, followed by a post-assessment survey. Results from the System Usability Scale (SUS) yielded a score of 72.08, indicating acceptable usability. User responses further showed that the system is easy to use, safe, and trustworthy for student elections. These findings indicate that BotoSafe is a viable e-voting solution for student government elections and may be further enhanced in future studies.
Authors - Eliza Borkute, Michael Savariapitchai, Vijeyandra Shahu, Deepak Sharma Chetan Parlikar Abstract - The current study aims to examine the significance of trust, perceived security, and awareness as factors that influence the adoption rate of UPI among private sector employees within the region of Chandrapur. The structured ques tionnaire has been designed to measure the following: a) trust factor regarding data protection and the correctness of the operations; b) perceived security level of UPI; c) awareness and knowledge about UPI functions; d) demographic characteristics related to education level, annual earning capacity, and age; and e) actual level of UPI adoption involving the use rate, continuous use of UPI, recommendations, and its integration with financial activities. Nonparametric statistical methods were used, including Spearman's rank correlation by investi gating the relationships of trust, security perception, awareness, and adoption. Kruskal-Wallis tests were conducted for finding group differences between ed ucation level and usage frequency. The results have accounted for strong, posi tive, and statistically significant associations between consumer trust, perceived security, awareness, and UPI adoption indicators. Education level revealed a partial moderating effect. Educated respondents tend to show higher trust and usage frequency in selected trust dimensions. However, this is not the case in all the aspects of this dimension. Additionally, the frequent users of UPI exhibit greater trust compared to the occasional users.
Authors - Tanay Balakrishna, Vishal Kumar Rahul, Yugabharathi E, Samanvi P, Vinay Joshi Abstract - The rapid spread of online news has made it more difficult to distinguish factually based reporting from misleading content. Many factchecking systems fail to detect false articles that appear professional and realistic, which leads to widespread disinformation. Most models rely on surface characteristics and neglect semantic coherence and factual consistency. An Improved Hybrid Fact-Checking System that combines language understanding, adversarial training, rule-based plausibility checks, and claim level web verification. These components run together in an ensemble model using BERT, BiLSTM, and an XGBoost meta-classifier to merge multiple evidence sources. Experiments on benchmark and curated datasets show an accuracy of 96.84% and a recall of 98%, outperforming existing deep learning methods. The results show that blending linguistic analysis with external verification leads to a robust and interpretable approach for automated fact-checking
Authors - Shraddha Mankar, Tanishq Thuse, Prasanna Khebade, Ritik Kumar Singh, Shravani Shirpurkar Abstract - Coronal Mass Ejections (CMEs) occurring in halo configuration are considered one of the most serious threats coming from space weather that can cause disruptions to most of the Earth’s geomagnetic facilities. The present study is about a hybrid machine learning system that detects the halo CMEs and predicts their Earth impact in real-time using the particle data coming from the in-situ India’s Aditya-L1 mission placed at L1 Lagrange point. We apply physics-informed feature extraction from SWIS-ASPEX payload measurements, obtaining alpha-to-proton density ratios, bulk velocity gradients, thermal parameters, and velocity anisotropy indices as CME markings. A Long Short-Term Memory (LSTM) neural network tuned through the Spider Cuckoo Optimization Algorithm processes 24-hour sequential windows of these features to distinguish between CME and non-CME events. The system also includes the modeling of Parker spiral propagation for Earth arrival time estimation and it is made available through a React-based dashboard with explainable AI components. The performance of the system reveals that it achieves a 98% detection rate along with a mean absolute error of 0.001 in the prediction of the normalized impact index. A comparison with historic halo CME catalogs indicates that our method has reduced false alarms by 85% when compared with threshold-based techniques while keeping the recall rate at 90%. The operational version of the system grants a 45-60 minute notification for the arrival of the CME, thus enabling the sensitive infrastructure to take preventive measures.
Authors - Sabo Hermawan, Ryna Parlyna, Surya Anugrah, Inkreswari Retno Hardini, Bayu Suhendry, Ria Rahma Nida, Windy Permata Suyono, Nur Lisa Rahmaningtyas, Eka Septariana Puspa, Cornellius Seno Adriano, Alifah Nur Rahmawati Abstract - Smart parking systems have developed as a critical solution to urban challenges such as traffic congestion, disorganized space utilization, and delays in manual parking searches. This study presents a smart parking framework that employs a Raspberry Pi 4GB, a camera module, and a servo motor for automated parking management. The system integrates a Haar Cascade classifier and YOLOv11 for accurate vehicle detection, while utilizing IR and ultrasonic sensors for obstacle identification. Real-time slot availability is displayed through an LCD interface. To ensure uninterrupted functionality, the system is powered by a solar panel with a rechargeable battery, enabling autonomous operation during power outages. Experimental results validate the reliability of vehicle recognition under varying illumination conditions, efficient gate control, and improved accuracy compared to conventional sensor-based approaches. This design offers a scalable, cost-effective, and energy-sustainable framework for urban parking solutions. Future work includes integration with cloud-based IoT platforms for centralized monitoring, optimization of YOLOv11 through lightweight variants for edge deployment, and extension to multi-level parking facilities with dynamic slot availability updates.
Authors - Sabo Hermawan, Ryna Parlyna, Surya Anugrah, Inkreswari Retno Hardini, Bayu Suhendry, Ria Rahma Nida, Eka Dewi Utari, Nur Lisa Rahmaningtyas, Cornellius Seno Adriano, Alifah Nur Rahmawati Abstract - This research investigates the performance of transformer-based models, BERT, ALBERT, and RoBERTa, fine-tuned for sentiment classification on the Women’s Clothing E-Commerce Reviews dataset. The overall task was executed under both 3-class and 5-class sentiment classification schemes. Each model was trained under the same conditions and evaluated comprehensively. In the 3-class task, RoBERTa achieved an F1-score of 91.7% and an AUC of 0.967, surpassing previous best-reported results. BERT also showed competitive performance with an F1-score of 90.2% and an AUC of 0.951. These results establish the superior generalisation ability and discriminative power of transformer models, particularly RoBERTa, in classifying sentiment from unstructured review text. ALBERT, while computationally efficient, showed reduced accuracy and AUC, indicating that extensive parameter sharing can hinder fine-grained sentiment resolution. The models exhibit broadly consistent behaviour in the 5-class setting, with RoBERTa maintaining a lead. A modest decline in F1 and AUC is evident, reflecting the greater difficulty introduced by finer class granularity. This research validates transformer architectures in a commercial Natural Language Processing scenario, demonstrating the superiority of transformer-based models over traditional baselines in both accuracy and robustness.
Authors - Jitesh Kriplani, Michael Savariapitchai, Vijeyandra Shahu, Deepak Sharma, Chetan Parlikar Abstract - The present investigation discusses the influence of social media in fluencers on the choices made by consumers and their buying behavior, espe cially in connection with important personality traits of the influencer, such as emotional engagement, authenticity, and reliability. The scientists conducted a well-organized survey questionnaire that collected primary information from 360 respondents in the Wardha District. Using Spearman's rank correlations re sults indicated strong, positive and statistically significant relationships between influencer behaviors and consumer purchase behaviors indicating that influenc ers have a significant impact on consuming behaviors of consumers. The results of a one-way ANOVA found that perceptions of influencer credibility (includ ing honesty and sponsorship disclosure), as well as perceptions of emotional engagement and authenticity, were significantly different depending on the fre quency of social media use by the participant. The demographic analysis also examined differences in consumer reactions depending on age, gender, and in come, finding no significant difference across age groups, but significant differ ences related to income and gender. The study concludes that consumer en gagement increases with more frequent social media use and influencer effec tiveness is significantly related to the authenticity, transparency, and credibility of the communication. Findings highlighted the need for focused influencer marketing content based on demographics providing empirical evidence of in fluencer marketing on consumer behavior.
Authors - Jason Elroy Martis, Ronith, Anvitha Rao, Vignesh Salian, Apoorva Shetty, Philomina Princiya Mascarenhas Abstract - The task of recovering high-level architectures from embedded software systems is error-prone and difficult, and state-of-the-art methods still rely on static analysis or heuristics and lack explainability. To address these challenges, an explainable and automated method for recovering high-level architectural diagrams directly from source code is suggested. Specifically, this method begins with the generation of function call graphs at the function level via static analysis and functions grouping into domain-agnostic component classes, generating a component graph. Components are then augmented with semantic attributes learned via CodeBERT embeddings, facilitating a light graph convolutional network (light GCN) model for learning-component interactions reflecting structure and semantics. Methods for explainability via gradients are incorporated for emphasizing prominent components and edges, helping in developer understanding, validation, and tuning of predicted architectures. The performance of this method on several embedded projects showed accuracy as high as 91.87%, precision of 96.48%, recall of 86.90%, and an F1-score of 91.44%. Use cases have shown successful extraction and interpretation of critical paths, bottlenecks, and unusual architectures and highlight explainable insights that enable efficient analysis and thus make it a highly significant progress in explainable AI for embedded software.
Authors - Nazia Sultana, Kumar P K Abstract - This research details the design and implementation of the AI-Driven Penalty Performance Analysis System, a desktop application aimed at bridging the technological divide in football analytics. The system focuses particularly on environmental and situational influences, such as crowd size, match context, and time of day, on penalty outcomes. The system employs a robust data pipeline and a comparative evaluation of multiple machine learning classifiers to predict the likelihood of penalty kick success. Using a dataset of professional penalties, we engineered novel features such as a ‘PressureIndex‘ to quantify situational fac tors. A suite of models, including Logistic Regression, K-Nearest Neighbours, Decision Tree, Random Forest, and Gradient Boosting, was trained and evalu ated. The optimal Gradient Boosting model achieved an accuracy of 79.1% and an AUC-ROC score of 0.87. A critical contribution is the integration of Explain able AI (XAI) using SHapley Additive exPlanations (SHAP), which transforms the system from a predictive ’black box’ into a transparent, diagnostic tool. This provides coaches and players with actionable, data-driven insights, validating the system’s potential to democratize advanced sports analytics.
Authors - Ankita Manohar Walawalkar, Chun-Wei Remen Lin, Suman Kumar, Ming-Yen Wang Abstract - The growing dependence on digital platforms for service discovery has revealed a substantial visibility gap for local businesses and independent service providers. Skilled professionals, in-cluding electricians, beauticians, bakers, tutors, mechanics, tailors, and photographers, frequently encounter challenges in reaching potential customers due to limited marketing expertise, financial barriers, and the lack of an integrated digital marketplace. This study introduces SkillBizz, a mo-bile platform intended to connect local service providers and businesses with nearby users through a community-driven, location-aware interface. The application features a scrollable home feed that prioritizes services and businesses based on geographical proximity, allowing users to refine their results using filters such as service category, budget range, distance, and popularity. Service providers can promote their offerings through multimedia posts that highlight services, offers, and announcements, while users engage through familiar social media features, including likes, comments, saves, and shares. By facilitating free and organic visibility without reliance on paid advertising, SkillBizz aims to support local entrepreneurship and foster trust-based service discovery. The proposed platform aims to create a digital marketplace that seeks to enhance com-munity engagement, improve service accessibility, and promote sustainable economic growth. In a short survey, students rated the app’s ease of navigation and overall usefulness highly, with an average satisfaction score of 4.5/5, indicating strong acceptance and positive user experience. Shop owners noted that the app provides an easy way to share product updates, promotions, and service news directly with local customers, with 80% expressing interest in continued usage due to time-saving benefits and improved customer reach.
Authors - Karuna A. Katakadhond, Manohar Madgi Abstract - Groundnut being a major oilseed crops, contributes to nearly 10% of the total value of produce from agricultural crops in India. Several researches indicate that disease infestations at different stages of crop growth can lead to 30-70% of yield reduction and significant economic losses. This challenge can be addressed by using Artificial Intelligence (AI) based smart monitoring and recommendation systems through early detection, identification, and prediction of crop diseases. The primary objective of the study is to develop an AI driven smart monitoring framework capable of detecting, identifying, and predicting biotic and abiotic factors responsible for major disease occurrences in groundnut plants. Additionally, the systems goal is to provide an effective and efficient recommendation system for sustainable agriculture from an integrated and practical perspective with its technical and economic performance to the farmers for managing the field level infestations. This includes prediction of diseases and timely recommendation of plant protection chemicals which may reduce the yield loss and enhance the productivity of the crop.
Authors - Usman Ali, Ghulam Mohayud Din, Sajid, Ayesha Ali, Munawar Hussain, Muhammad Mujeeb Akbar Abstract - The proliferation of misinformation on social media poses significant social, political and economic risks. This research proposes an AI-based fake news detection system that leverages deep learning (BERT and LSTM) and Explainable Artificial Intelligence (XAI) frameworks to classify online fake news as Fake or True. The proposed architecture processes textual data through Natural Language Processing (NLP) techniques for semantic and contextual analysis. To ensure Interpretability, SHAP and LIME is Integrated to visualize the rationale behind classification results. The system was trained using balanced datasets augmented through SMOTE, achieving over 95% accuracy. A web-based interface was developed to facilitate real-time text and URL verification, providing confidence scores and explanations. This approach minimizes human intervention, enhances transparency and explainable frameworks yields an accurate and trust-worthy tool for combating misinformation.
Authors - Suphawatchara Malanond, Pongsarun Boonyopakorn Abstract - In the food supply industry, differentiating between cultivated and weedy rice is crucial since the latter interferes with production and competes for essential resources. This research utilizes the YOLOv8 object detection model to automate the classification of rice grains to improve the separation process. The dataset was gathered during the harvesting phase and annotated utilizing a typical bounding-box methodology. Multiple configurations were evaluated with different model sizes (nano, small, medium) and training epochs. The optimal results attained a precision of 0.845, a recall of 0.779, and a mAP@50 of 0.822. These findings indicate that YOLOv8 enables near real-time identification at the grain level, diminishing dependence on manual verification. The study yielded a lightweight prototype developed to demonstrate and reflect the application of the trained model for rapid, image-based screening by non-technical users. The significance of the study lies in its support for more effective rice quality management and its contribution to strengthening food security and sustainable agriculture.
Authors - Wongpanya S. Nuankaew, Parichat Janjom, Khwanchiwa Khumdaeng, Rattiyaporn Laemchat, Thapanapong Sararat, Pratya Nuankaew Abstract - Communication has been a topic as ancient as man and at the same time so important that, over time, various forms have been cre- ated to facilitate it, among which stand out: mail, telephony, telegrams, and fax, to name a few. Nowadays many people use instant messaging applications to communicate with each other by feeling that their con- versations are protected. However, that feeling could not be further from reality and should not be taken lightly, since there are always groups focused on taking advantage of the vulnerability of this kind of applica- tions, resulting in users’ privacy being compromised. In this paper, we present the development of an instant messaging application that inte- grates a novel key establishment protocol based on a quantum-resistant algorithm. Our application employs cutting-edge lattice-based crypto- graphic techniques, ensuring robust security against quantum attacks while maintaining operational efficiency. Obtained results show the ap- plication’s viability by offering a practical solution to safeguard mobile communication in the impending quantum era.
Authors - Rashmi Shivanadhuni, Martha Sheshikala Abstract - The rapid expansion of QR-code payment systems has positioned QRIS as a key component of Indonesia’s national digital payment infrastructure. While prior studies have largely focused on initial adoption, limited empirical evidence explains the factors that sustain long-term usage of QR-code payments in mobile banking. This study investigates the determinants of sustained QRIS adoption by examining the roles of perceived usefulness, perceived ease of use, trust, and perceived security, with user satisfaction as a mediating variable. Using a quantitative approach, survey data were collected from QRIS users of mobile banking applications and analyzed using Structural Equation Modeling (SEM). The results indicate that perceived usefulness, trust, and perceived security significantly enhance user satisfaction, which in turn strongly predicts sustained adoption of QRIS in mobile banking. Perceived ease of use shows a weaker direct effect, suggesting that post-adoption behavior is driven more by value realization and trust than by usability alone. These findings contribute to ICT and fintech literature by highlighting user satisfaction as a critical post-adoption mechanism for sustaining engagement with national digital payment systems. Practically, the study offers insights for policymakers, banks, and system designers to strengthen the long-term viability of QR-based payment infrastructures through trust-building and value-enhancing strategies.
Authors - Suman Kumar, Yeneneh Tamirat Negash, Ankita Manohar Walawalkar, Ming-Yen Wang Abstract - The backbone of modern data infrastructure which demands strategies to ensure data availability and uptime is Cloud Storage. This paper provides a complete overview of redundancy models and storage techniques that are used to maintain data availability and uptime in cloud storage systems. It covers core redundancy methods like data replication, erasure coding, Raid and disk-level redundancy, multi-cloud redundancy and hybrid models. This paper also provides storage techniques that support data availability like distributed file systems and object storage platforms for scalability and flexible access. Additionally, the paper also presents a literature review of key research findings and compares models that demonstrates substantial improvements in reliability and storage efficiency. It also covers the challenges related to computational complexity and monitoring precision. By synthesizing theoretical and practical perspectives, this research guides the design of cloud storage solution which balance availability, cost and recovery objectives and also help stakeholders to meet stringent service level agreements in increasingly heterogeneous and large-scale cloud infrastructure.
Authors - Massoud Moslehpour, Suman Kumar, Hanif Rizaldy, Ankita Manohar Walawalkar, Thanaporn Phattanaviroj Abstract - Accurate identification of paddy crop growth stages plays a crucial role in effective agricultural planning, crop management, and yield estimation. Paddy cultivation is highly sensitive to environmental conditions, disease progression, and growth variability, making continuous and automated monitoring essential. This paper presents an AI-driven framework for automated paddy growth stage identification and yield readiness estimation using deep convolutional neural networks. The proposed system employs the EfficientNetV2-S architecture trained on heterogeneous paddy plant image datasets collected from multiple public sources. To address inconsistencies in labeling across datasets, a semantic stage-mapping mechanism is introduced to map dataset-specific visual classes into standardized paddy growth stages. Furthermore, a confidence-weighted yield readiness index is formulated to provide an interpretable estimate of crop maturity and harvest readiness based on predicted growth stages. The trained model is deployed using a Flask-based web application that supports real-time inference, result visualization, and storage of historical predictions. Experimental results demonstrate stable convergence, high classification accuracy, and reliable generalization across different growth stages. The proposed framework effectively bridges visual growth stage classification and yield estimation, offering a practical and scalable solution for precision agriculture and decision support systems.
Authors - Shruti Thakur, Shilpa Nikhil Bhosale, Priti Prakash Jorvekar, Sandeep Muktinath Chitalkar, Harshala Shingne, Rupali Vairagade Abstract - This study examines the effectiveness of ensemble learning models for detecting fraud in e-wallet transactions under extreme class imbalance and temporal dependence. Using the PaySim bench-mark dataset, a time-aware experimental framework is developed that incorporates forward-chaining evaluation, imbalance-aware resampling, hyperparameter optimisation, probability calibration, and cost-sensitive threshold tuning to reflect real-world deployment conditions. RF and XGBoost are systematically compared across multiple dataset scales and train–test splitting strategies. Empirical findings show that XGBoost consistently outperforms RF, achieving the highest F1-score, maintaining PR-AUC above 0.88, and demonstrating near-perfect ROC-AUC, indicating strong discriminative capability. Following isotonic calibration, XGBoost also produces the lowest Brier score, highlighting superior probability reliability for risk-based decisions. Performance gains plateau beyond a 75% training share, while XGBoost preserves stable performance as the test window expands, unlike RF. Overall, the results support prioritising gradient boosting models, adopting time-aware validation, and integrating calibrated risk scoring in operational e-wallet fraud detection systems.
Authors - Shoh-Jakhon Khamdаmov, Muazzam Akramova, Rano Abdullaevna Sadikova, Azamat Kasimov, Jasurbek Pozilovich Kurbonov, Alisher Bakberganovich Sherov, Dilshoda Akramova Abstract - Attention Deficit Hyperactivity Disorder (ADHD) is one of the most common neurodevelopmental disorders in children, characterized by inattention, hyperactivity, and impulsivity that impair academic and social functioning. Due to its heterogeneous presentation and symptom overlap with other cognitive disorders, early and accurate diagnosis remains challenging. This study proposes a multimodal machine learning framework integrating behavioral, neuroimaging, and physiological data to predict ADHD in children. Convolutional Neural Networks (CNNs) are used to extract features from brain MRI scans, Long Short-Term Memory (LSTM) networks model temporal patterns in physiological signals such as EEG and heart rate variability, and ensemble learning methods incorporate behavioral and clinical attributes. Both feature-level and decision-level fusion strategies are evaluated. Results on benchmark datasets show that the multimodal model consistently outperforms unimodal approaches in accuracy, sensitivity, and F1- score, demonstrating the potential of AI-driven multimodal systems for early, objective, and interpretable ADHD diagnosis.
Authors - Matjere Matsebe, Nobubele Angel Shozi Abstract - It is possible to increase the acceptability of small wind turbines for wind regions with low wind velocities for rural as well as urban sectors by placing them inside diffusers. The research on development of various diffusers is a major re-search area nowadays. Curved flanged diffusers can deliver better performance by adding a cylindrical throat section between converging and diverging sections. This research paper presents a systematic study on short curved flanged diffusers with converging-diverging sections and extended uniform throat between them. Twenty-five diffuser models are studied using Computational Fluid Dynamics using ANSYS Fluent. These models are finalized using the design of experiments for six variables at five levels. The throat diameter for all diffuser models is fixed. The investigation is performed by considering radial average velocity and percentage velocity variation along the radial planes. The global velocities are observed as 1.18 to 1.47 times that of the radial average velocities. The diffuser dimensions are optimized to maximize radial average velocity and to minimize the velocity variation along the radial planes. The diffuser with optimized dimensions is manufactured and tested experimentally in a wind tunnel. Good matching is seen between the predicted results and experimental results. The optimized diffuser has the ability to produce more than two times the power that of the turbine without a diffuser.
Authors - Murodov Gayrat Nekovich, Kholmuhamedov Bakhtiyor Farkhodovich, Avezov Sukhrob Sobirovich, Khudayberganov Nizomaddin Uktambay ogli, Yunusova Maftuna Shokirovna, Mansurova Shahinabonu Najmiddin qizi Abstract - The classification of ECG signals continues to be a major focus in intelligent healthcare systems, especially for the early identification of cardiac arrhythmias. In this work, we propose a hybrid probabilistic neural strategy that integrates Bayesian Networks with Artificial Neural Networks (ANNs) to enhance the reliability of ECG classification. The approach begins by extracting informative ECG features, such as crosscorrelation and phase-based characteristics. A Bayesian Network is then applied to model the probabilistic dependencies among these features and identify those most relevant to classification. At the same time, an ANN is trained on the refined feature set to learn complex non-linear patterns present in the signals. The two models are subsequently combined through a weighted voting mechanism to form an ensemble classifier. Experimental evaluation using an ECG dataset indicates that the proposed ensemble achieves higher accuracy and stability compared to its individual components. Notably, the method demonstrates strong capability in distinguishing multiple arrhythmia categories, which are typically difficult to classify. Overall, the results highlight the promise of hybrid probabilistic–neural models for improving automated ECG interpretation and supporting more accurate diagnosis of cardiac abnormalities.
Authors - K S Shubham, Uma Mudengudi, Ujwala Patil Abstract - Secure, compliant, and interoperable data sharing remains a core bottleneck for cross-organizational analytics and AI, particularly under evolving privacy regulations, contractual obligations, and adversarial threats. This paper introduces HARMONIA, a pluggable, risk-aware data sharing framework that integrates policy-as-code enforcement, continuous compliance monitoring, provenance-grade evidence, and revocation with machine unlearning. HARMONIA is inspired by the iterative Analyzer–Mechanic and Conductor–Observer operational pattern described in the HARMONIA strategic perspective, generalizing its quality-gate-and-repair loop to a policyand- risk-gated release lifecycle. We formalize an architecture that separates governance, control, and data planes; define a release-mode lattice that enables explainable fallbacks among raw export, masking, kanonymity, differential privacy, synthetic data, query-only access, and federated compute; and propose an evidence model aligned with W3C PROV. We provide a proof-of-concept (POC) blueprint implemented with commodity components (OPA, OAuth2/OIDC, PostgreSQL, and object storage) and specify interfaces that support end-to-end request-to-release-to-revocation workflows, including batch-scoped unlearning for model derivatives. The paper concludes with an evaluation methodology and a standards-aligned roadmap for deployment in sovereign data spaces.
Authors - Mehzabul Hoque Nahid, Fatema Tuz Zahra, Mubashshir Bin Mahbub, Saleh Ahmed Jalal Siam Abstract - Personalizing learning in higher education presents a significant challenge due to the difficulty of providing individual feedback to large student cohorts. This study proposes an intelligent tutoring system based on a multi-agent architecture utilizing Large Language Models (LLMs) to address scalability and adaptability issues. The proposed architecture integrates two complementary subsystems: a reactive module that answers student queries using Retrieval-Augmented Generation (RAG) to ensure accuracy based on course materials, and a proactive module that autonomously analyzes student profiles to generate personalized study plans without direct instructor intervention. The system was implemented using Lang- Graph for agent orchestration and MongoDB for state persistence. Experimental validation was conducted using a curated golden dataset from a university course. Results demonstrate a retrieval precision of 94.2% and a faithfulness score of 87.8%, significantly mitigating hallucinations common in monolithic models. Furthermore, the operational cost analysis indicates high financial viability for mass implementation. This dual approach offers a robust solution for automated, highquality educational support, effectively bridging the gap between standardized teaching and personalized learning needs.
Authors - Vemuri Bharath Kumar, Anjan Babu G Abstract - Healthcare data scarcity poses significant challenges for machine learning applications in clinical settings, particularly for conditions with limited patient populations. This paper presents a novel quantumenhanced data augmentation framework that addresses this challenge through a three-pillar architecture: Quantum Random Number Generation (QRNG) for true randomness, Statistical AI for intelligent parameter optimization, and Generative AI for clinical interpretability. Our implementation utilizes Bell state quantum circuits to generate genuinely random perturbations, ensuring higher entropy than classical pseudorandom methods. The framework incorporates medical domain knowledge through constraint-aware augmentation, maintaining clinical validity while generating synthetic patient records. Experimental evaluation on the Pima Indians Diabetes dataset (768 samples, 8 features) demonstrates that our quantum-enhanced approach achieves 100% medical constraint compliance while generating high-quality synthetic data. The system provides both command-line and web interfaces, with automatic fallback to classical methods when quantum resources are unavailable. Our contributions include: the first practical application of quantum computing to healthcare data augmentation, an AI-driven optimization system that automatically determines augmentation parameters, integration with large language models for non-technical summarization of validation reports, and a production-ready implementation with comprehensive validation mechanisms. The framework represents a significant advancement in synthetic medical data generation, offering a scalable solution for addressing data scarcity in healthcare AI applications.
Authors - Sandhya Awate, Vipin Kumar Gupta Abstract - Rural communities face significant challenges in accessing essential healthcare services due to language barriers, limited health literacy, and insufficient medical support. Difficulties in understanding medical information, communicating symptoms, and interpreting diagnostic reports further hinder effective healthcare delivery. Additionally, unreliable internet connectivity restricts the reach of conventional digital health platforms. To address these challenges, this paper presents a Multilingual AI Health Assistant designed to operate on low-cost edge devices, enabling offline functionality to ensure continuous access and data privacy in low-connectivity areas. The proposed system integrates Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), Optical Character Recognition (OCR), and speech recognition, allowing users to interact in their native languages via text or voice. It analyzes user-reported symptoms to predict probable health conditions, translates complex medical reports and prescriptions into simplified, localized explanations, and provides recommendations for nearby healthcare facilities. Unlike internetdependent telemedicine systems, this edge-based solution processes data directly on the device, safeguarding sensitive health information while maintaining reliability. By bridging linguistic and literacy gaps, the proposed assistant empowers rural populations with accessible and actionable healthcare insights, ultimately improving health outcomes in underserved regions.
Authors - P. Sivaperumal, R. Naresh, S. Prawin, B. E. Viruthatchanan Abstract - The food portion estimation is a critical component of automated dietary assessment systems, enabling better monitoring of nutritional intake and supporting healthcare, weight management, and public health applications. Traditional self-reporting methods are often inaccurate and time-consuming, motivating the need for computer vision–based approaches that can reliably estimate food portions from images captured in real-world conditions. This paper presents deep learning pipeline for food portion estimation that integrates image preprocessing, deep learning–based segmentation, and geometric volume computation. The data preprocessing with Mask R-CNN used for precise food seg-mentation, providing pixel-level masks and bounding boxes that isolate individual food items from complex backgrounds. The segmented mask is used to estimate the pixel area of the food region. Experimental evaluation demonstrates that the proposed method achieves high segmentation accuracy, with a segmentation IoU of 87.6%, precision of 90.3%, recall of 88.9%, and an F1-score of 89.6%. The pixel area estimation error is limited to 6.8%, resulting in an overall portion estimation accuracy of 89.1%, indicating reliable and consistent performance across different food images. The proposed framework highlights the effectiveness of combining deep instance segmentation with geometric volume estimation for accurate food portion assessment. Future work will focus on multi-view image integration and real-time deployment in mobile dietary monitoring systems to enhance robustness and scalability.
Authors - Chalani Dinitha, Saadh Jawwadh Abstract - Automated Image Enhancement from CCTV surveillance relies heavily on accurate image segmentation; however, real-world footage is often degraded by low illumination, motion blur, occlusion, and background clutter, causing conventional segmentation models to lose boundary precision and small object details. This paper proposes EdgeLite-CrimSegNet, a novel lightweight boundary-aware segmentation network designed specifically for crime scene analysis. Unlike existing fast segmentation models that prioritize global context, the proposed architecture adopts a boundary-first learning strategy, where crime-relevant contours are explicitly extracted and refined before region-level segmentation. A compact edge-aware encoder, boundary-guided feature refinement module, and progressive region filling strategy are introduced to improve segmentation accuracy while maintaining real-time performance. Experiments on CCTV frames derived from the UCF-Crime dataset demonstrate improved boundary preservation, higher IOU, and better segmentation of overlapping and small objects compared to conventional lightweight segmentation networks, confirming the suitability of EdgeLite-CrimSegNet for real-time surveillance applications.
Authors - Sunkyo Jeong, Yongbeom Park Abstract - The brisk developments of advanced deep learning techniques have led to diverse applications of it in different sectors, including healthcare sec tor. Breast cancer is one of the most common and deadly cancer amongst women and the success percentage of the treatment depends heavily on the stage at which the detection happens. This field opens gateway of deep learn ing application in detecting of breast cancer tumour type at an early stage. In this research paper, model and the application of a CNN based early breast cancer detection algorithm is proposed. In this approach, the Wisconsin Hos pital Breast Cancer Database is considered to train the model and test the accu racy of the model. This study shows promising results by concluding Convolu tional neural network-based model is 98.24 % accurate which this better than previous models. Moreover, this paper proves that such application of deep learning techniques holds huge promise for bettering healthcare sector.
Authors - Francklin Rivas, Thanh Tran, Jorge J Roman, Aysha Al Ketbi Abstract - The rapid proliferation of GenAI has transformed the phishing threat landscape into one characterized by realistic, tailored, and scalable attacks on text-based, web-based, and multimodal platforms. The success rate of social engineering attacks has increased significantly due to advances in large language models, deep-fake technology, and automated phishing-as-a-service offerings. Despite notable advances in current phishing detection technologies, many oper ate as black-box systems and struggle to detect AI-generated, context-specific, zero-day phishing attempts. The resulting lack of transparency, combined with poor realistic dataset quality and inadequate resilience against adaptive threats, has further amplified trust concerns. This survey presents a comprehensive over view of the detection strategies based on semantic, structural, and multi-quality feature representations, with a concise review of the models of GenAI-enabled phishing attacks. Various detection methodologies, including machine learning, deep learning, and fusion-based techniques, are reviewed, with an emphasis on explainable AI methods like SHAP, LIME, attention visualization, and Grad CAM, which provide more understandable interpretations of AI-driven deci sions. To facilitate transparent, reliable, and trustworthy phishing defenses that make use of GenAI, the survey concludes with discussions of response mecha nisms, privacy-preserving learning strategies, and governance issues, with open questions and potential directions for future research.
Authors - Malika Acharya, Ankit Jain Abstract - Since Lin and Zadeh proposed granular computing in 1996, an increasing number of researchers have begun to study information granularity, which simulates human cognition to handle complex problems. Granular computing advocates observing and analyzing the same problem at different levels of granularity. Coarser granularity leads to more efficient learning processes and stronger robustness to noise, whereas finer granularity is able to capture more detailed characteristics of objects. Selecting appropriate granularity according to different application scenarios can therefore solve practical problems more effectively. This paper proposes a novel support vector regression algorithm via granular computing approach, which constructs regression models using granular balls generated from the dataset as inputs rather than individual data points. First, we analyze the geometric relationship between classification tasks and regression tasks. Then, based on this geometric relationship, we employ twin support vector classification algorithm via granular computing approach to address regression problems.
Authors - Dang Trong Hop, Than Ngoc Thien Abstract - Medical image classification is of immense importance in the context of early-stage diagnosis of various neurological diseases, including Alzheimer’s disease and brain tumours. However, it remains infeasible for conventional deep learning architectures to efficiently encode frequency domain information and long-range spatial dependencies found in medical images. In this paper, a novel Hybrid Wavelet CNN Vision Trans-former, coupled with Explainable Artificial Intelligence, has been proposed for efficient and accurate medical image classification. In the proposed architecture, the application of discrete wavelet transform, convolutional neural networks, and Vision transformers for medical image classification has been presented. Additionally, explainability aspects have been addressed using the Grad-CAM technique. The proposed model was experimented with using two datasets: one for Alzheimer’s disease MRI and another for brain tumours. The experimental results reveal that the proposed deep learning architecture achieves an accuracy of 96.8%, precision of 0.96, and recall of 0.97, F1score of 0.97 for the brain tumours dataset, which beats conventional CNN, vision Transformer, and Wavelet CNN architectures. The integration of explainable AI further enhances model transparency and clinical reliability, making the proposed framework suitable for real-world medical diagnostic applications.
Authors - Neha Aggarwal, Rajiv Singh, Swati Nigam Abstract - One advantage of using Large Language Models (LLMs) is the automation of tasks and the analysis of information. Engineering drawings, on the other hand, are standardized representations of products; they document their dimensions and geometries. Users can utilize them for manufacturing parts, assembly guides, and engineering analysis, among other uses. This article aims to 1) evaluate whether an LLM is capable of interpreting engineering drawings, 2) identify how it interprets them, as it may use a standard on which the generation of these drawings or the interpretation of images is based, and 3) determine if users as students can employ LLMs as a guide to interpret drawings. The results showed that the user requesting an interpretation of an engineering drawing must be familiar with the field, as the LLM sometimes fails to extract the correct in-formation from a drawing; furthermore, any detail in the drawing can confuse the LLM. Once the LLM extracts the correct information from the drawing, it can use it to generate CNC code to machine a part, predict its behavior using a neural network, or perform engineering analysis, to name just a few examples.
Authors - Muhammad Elfata Rasyid Hammuda, Irmawan Rahyadi Abstract - Inventory management in warehouse environments frequently faces recurring limitations related to material searching, manual record updating, and control inconsistencies, which increase delays and disrupt operational continuity. This study develops an intelligent stock-tracking system based on weight sensing using load cells, signal conditioning through the HX711 module, and processing via an ESP32 microcontroller, with real-time data transmission using MQTT and visualization through a Unity-based mobile application with augmented reality (AR) support. The study included the diagnosis of the current process through process mapping and ABC analysis to prioritize critical consumables, the design of the system architecture, the implementation of the IoT prototype and its integration with the AR interface, and performance evaluation through time comparisons, before-and-after record analysis, and administration of the System Usability Scale (SUS) questionnaire. Findings indicate operational improvements in efficiency and record consistency, along with a favorable perceived usability among the evaluators.
Authors - Ikram Ahamed Mohamed, Hafiz Abdulla, Mohaideen Mohamed Mohabilasha, Fiyaz Ahmed, Pankaj Chandre, Rohini Bhosale Abstract - The Electric vehicles (EVs) are one way to help the environment by reducing carbon emissions and aiming for the net zero supply chain in logistics. This paper is a complete readiness assessment frame work for the green logistics practices on using electrics vehicles. The method categorizes preparation factors in five key categories, i.e., strategic and governance commitment, technological and infrastructure capability, financial and Investment Capacity, operational and human resource readiness, and environmental and policy alignment. It is pro-posed to use a multi-criteria decision-making framework to analyze the relation-ship between these variables and quantify the level of organizational readiness by using language evaluation scales converted to fuzzy numbers. The study con-tributes to the theoretical knowledge of creating a unified property of the various readiness criteria in a unitary evaluation framework and synthesises empirical methods with a measurable metric of the uptake of the electric vehicle in the logistics networks. Practically, the framework assists logistics managers, legislators, and sustainability planners in identifying issues, establishing priorities on investments, and accelerating the transition to the low-carbon transportation systems. The findings support the concept of fact-based decision-making that can lead to a green logistics revolution which can expand and remain sustainable.
Authors - Sabid Rahman, Sadah Anjum Shanto, Segufta Nasrin Tamanna, Zurin Alam Aongon, Md. Soadul Islam, Nasirul Islam Abstract - This research suggests a system for the real-time detection of road hazards, specifically potholes, cracks, and open manholes, using deep learning and image processing, and pinpointing the exact geographical location of the defects. These defects can cause road accidents, vehicle damage, traffic congestion, and other inconveniences. To solve these, a YOLOv8m model integrated with the CBAM module was developed for enhanced feature attention and trained on a custom dataset of 2,400 road images containing the three hazard classes. The model achieved a mAP@50 of 82.2%, and the individual class performance scores are 72.2% for potholes, 81.0% for cracks, and 93.3% for open manholes, and a recall of 76.4%, demonstrating reliable performance under varied conditions. An OCR module was integrated with the CBAM-YOLOv8 model to extract GPS coordinates from user-captured photos and videos, and an interactive mapping interface was designed to show and report the exact locations of detected hazards for timely action by authorities.
Authors - Mandar K Mokashi, Sonali P Bhoite, Vishal Nayakwadi, Atul P Kulkarni, Parikshit Mahalle, Pankaj Chandre Abstract - The purpose of this study is to examine the impact of DAT, AIR and ICM to-ward DAM in SMEs and at the same time determine the moderating effect of in-ternal control maturity. Drawing on the technology–organization–environment (TOE) framework and Resource-Based View (RBV), this study utilises a quantitative approach by employing Partial Least Squares Structural Equation Model-ling (PLS-SEM). Data was collected through structured questionnaires sent out to SMEs that have begun using digital audit tools. The relationships with DAM of DAT, AIR and ICM presented evidence on the individual impact on DAM indicating that technological readiness, organizational willingness to accept AI solutions successfully and mature internal controls are vital. Nevertheless, internal control maturity is not conducive to stronger.
Authors - Nguyen Thi Hoi, Dao Thi Huong Abstract - This аrticle exаmines the impаct of аccelerаted digitаlizаtion of the Uzbek econo-my on improving the effectiveness of pаrticipаtory budgeting. Reforms аimed аt creаting а "New Uzbekistаn" hаve elevаted pаrticipаtory budgeting to а key tool for citizen engаgement аnd increаsing the trаnspаrency of budget аllocаtion. However, the complexity аnd multifаceted nаture of this work аnd the further de-velopment of pаrticipаtory budgeting require the constаnt аdаptаtion of proce-dures, tools, аnd mаnаgement аpproаches to the emerging digitаl reаlities. The purpose of this study is to substаntiаte the need to trаnsform the pаrticipаtory budgeting mechаnism using аrtificiаl intelligence technologies аnd propose prаcticаl solutions to improve the efficiency, fаirness, аnd sustаinаbility of this process. Bаsed on аn аnаlysis of the regulаtory frаmework аnd current prаctices in implementing pаrticipаtory budgeting projects in the Republic of Uzbekistаn, key chаllenges limiting the potentiаl of pаrticipаtory budgeting hаve been identi-fied, including: low digitаl literаcy аmong some of the populаtion, limited func-tionаlity of digitаl plаtforms, insufficient аutomаtion of project evаluаtion аnd se-lection processes, weаk integrаtion with government informаtion systems, аnd а lаck of аnаlyticаl tools for forecаsting sociаl performаnce. The study proposes аreаs for improving the mechаnism, including expаnding the functionаlity of the Open Budget plаtform, implementing аrtificiаl intelligence, big dаtа, аnd digitаl plаtforms to increаse the openness аnd effectiveness of аnаlyticаl dаtа, аs well аs using elements of finаnciаl modeling to forecаst future stаte budget expenditures аnd develop multifаctor criteriа for аssessing the effec-tiveness of pаrticipаtory budgeting projects. The prаcticаl significаnce of the аrti-cle lies in the development of а comprehensive аpproаch to modernizing pаr-ticipаtory budgeting, which contributes to increаsing citizen trust in government institutions, optimizing the use of budgetаry resources, аnd аchieving the goаls of the Digitаl Uzbekistаn 2030 strаtegy. The results obtаined cаn be used by gov-ernment аgencies, locаl governments, аnd developers of digitаl solutions in public finаnce.
Authors - Humma Ghaffar, Usman Ali, Muhammad Arfan, Sajid, Muhmmad Mujeeb Akbar Abstract - The growing mental health challenges around the globe need access to scalable, available, and safety conscious digital interventions. The paper describes a mental health support platform, based on AI, which combines conversational intelligence, multi-therapeutic persona modeling, structured mood analytics, proactive crisis identification, multi-lingual interaction, and voice-based access in a secure full stack design. The system, which runs on the Google Gemini AI, provides context-sensitive therapeutic dialogue and performs four-dimensional mood analysis of anxiety, stress, depression, and wellbeing, allowing longitudinal assessment by providing interactive dashboards and automated reporting. A safety-first crisis override system offers validated emergency capacity in the high-risk situations. The platform also includes multilingual voice feedback to facilitate inclusion of the visually impaired users and non-English speaking communities in providing inclusive digital mental health care. The proposed system is capable of changing the prevalent perception that AI and its applications may never be responsible and scalable because it integrates therapeutic diversity, structured analytics, accessibility features, and proactive safety controls into a single framework.
Authors - Pranay Kavthankar, Rutuj Koli, Ronit Ghadi, Yug Mora, Abhijit Joshi Abstract - Speech-to-Speech Translation (S2ST) has evolved from cas caded pipelines into end-to-end neural architectures. However, preserv ing emotion, prosody, and speaker identity across languages remains challenging. This survey examines state-of-the-art emotion and identity preserving S2ST and neural TTS systems, covering discrete-representation models, end-to-end systems, and cascaded pipelines. We analyze architec tures including Translatotron, VQ-Translatotron, SeamlessM4T, VALL E, VALL-E X, VITS, YourTTS, StyleTTS2, and XTTSv2. The survey discusses speaker identity preservation (x-vectors, d-vectors, codec repre sentations), prosody modeling (pitch, duration, energy), emotion reten tion (categorical, dimensional, embeddings), datasets, evaluation met rics, and challenges including data scarcity, cross-lingual emotion trans fer, and computational costs. We propose future directions toward large scale expressive datasets, improved cross-lingual modeling, and respon sible AI practices.
Authors - Maykin Warasart, Pallop Piriyasurawong, Panita Wannapiroon, Prachyanun Nilsook Abstract - This paper introduces an AI-based investment assistant that helps users to understand the fundamental principles of the financial markets. This work is mainly focused on stock market data to provide accurate insights and helps in various decision-making purposes. The rising volatility in the financial markets, massive data set, and the complexity of financial instruments, makes decision-making in financial sectors more difficult to individual investors.In order to cope with this problem, our model integrates time series forecasts, large language model intelligence with real-time financial information with interactive visualizations and personalized insights. The suggested system will interpret user queries in natural language with the help of a Large Language Model (Gemini 2.5 Flash) and extracts the corresponding stock tickers and financial objects and transforms them into structured inputs to be used in predictive analysis. Past and current stock market data are retrieved with the help of yfinance API and fed into an LSTM-based time-series predictive model that predicts future price fluctuations.The results predicted are presented in interactive charts created with Plotly, which users can analyze trends easily and compare several stocks. The system can also give personalized recommendations, textual summaries of stock movements (moving up or down), multi-turn chatbot conversations, portfolio, wishlist and real time price moves besides forecasting. The proposed investment assistant improves the gap between complicated financial information and practical results by incorporating natural language comprehension, deep learningbased prediction, and intuitive visualization etc. The system promotes user knowledge and helps them in effective decision making .
Authors - Gabriel M. da Silva, Nicolas O. da Rocha, Heloise V. C. Brito, Joao V. N. M. da Silva, Sergio A. S. da Silva, Anderson R. de Souza, Carlos A. O. de Freitas, Vandermi Joao da Silva Abstract - Spiking Neural Networks (SNNs) have been investigated as a biologically inspired alternative for efficient information processing, particularly in energy-sensitive applications. This work presents a comparative evaluation of the energy efficiency of different SNN techniques, including Liquid State Machines (LSM), Recurrent Spiking Neural Networks (RSNN), Spiking Convolutional Neural Networks (SCNN), and learning based on Spike-Timing Dependent Plasticity (STDP). The experiments were conducted on conventional hardware plat-forms, namely an Android smartphone and a notebook, using simulated implementations of SNNs without dedicated neuromorphic acceleration. The analysis considered different network scales by varying the number of neurons and was based on neural activity metrics, particularly the total number of generated spikes, employed as a proxy for the indirect estimation of energy consumption during audio signal processing. The results demonstrate a consistent relationship between neural activity and estimated energy consumption, as well as an energy saturation behavior as network complexity increases. Differences among the an-alyzed techniques are more pronounced in small-scale configurations, whereas larger networks exhibit convergent patterns of neural activity and energy consumption. Although conducted in a digital simulation environment, this study highlights the limitations of conventional platforms for the efficient execution of SNNs and reinforces the potential of dedicated neuromorphic hardware for embedded and low-power applications.
Authors - Maykin Warasart, Veerasith Wongkarn, Phonesavanh Nammakone, Duangtavanh Thatsaphone Abstract - Manual correction of written examination scripts is still the default practice in many institutions, but it is slow, tiring for evaluators, and not always consistent, especially when large numbers of papers must be graded in a short time. In this work we look at how recent advances in optical character recognition (OCR), machine learning (ML), and natural language processing (NLP) can be used together to support automatic evaluation of both objective and descriptive answers. In this paper We study a two–stage system: first, a handwriting recognizer based on convolutional and recurrent neural networks (CRNN) is used to read handwritten responses from scanned answer sheets; next, the recognized text is scored using semantic and syntactic similarity measures driven by transformer-based language models. By training the recognizer on a mixture of public handwriting corpora and locally collected scripts, and by combining keyword features with sentence-level embeddings, the system is able to approximate faculty grading patterns with good accuracy. This study examines the way that real tests are administered, including variations in writing styles, background noise in scans, the arrangement of answers on paper, and terms related to specific subjects. We clearly address each of those factors in our approach. Teachers won’t vanish because of this setup; instead, it aims to ease their ongoing tasks while offering fairness and consistency across student results.
Authors - Hai D. Nguyen, Nguyen Ngoc Quan, Viet H. Le, Mai T. Nguyen, Nguyen Huy Trung, Le Duc Huy, Nhu Son Nguyen Abstract - Military forces launch offensive operations to defeat and destroy enemy. Battlefield surveillance enables provisioning of timely and correct battle space information to commanders, both prior and during the launch of offensive operations. Static battlefield surveillance devices have certain limitations which restrict their usage during offensive operations. In the current paper, we review the requirement of surveillance devices during various periods of offensive operations, the limitations of static surveillance devices and efficacy of Unmanned Aerial Vehicles (UAVs) as prime battlefield surveillance device for offensive operations. We then explore the possibility of connecting UAVs with existing cellular base stations and with vehicle mounted cellular base stations which can be moved into enemy territory with the progress of offensive operations. Furthermore, a UAV communication model for enhanced battlefield surveillance during offensive operations is presented after analyzing various antenna techniques utilized to achieve desired data rates for UAV operations.
Authors - Quan Nguyen, Chau Vo, Phung Nguyen Abstract - In order to create reliable connectivity where there is no direct line-of-sight (LOS) path between ground terminals, this study provides the design and performance evaluation of a dual-hop Unmanned Aerial Vehicle (UAV) assisted free space optical communication system. The proposed ground–UAV–UAV–ground architecture enables non-LOS communication by employing aerial relays to bypass physical obstructions and extend transmission coverage. Three modulation formats—Non-Return to Zero (NRZ), Return to Zero (RZ), and Carrier-Suppressed Return to Zero (CSRZ)—under various weather conditions and turbulence regimes are used to assess the system performance. While all modulation schemes perform closely for different attenuation level, differences in performance is prominent under turbulence, CSRZ demonstrates superior robustness, followed by NRZ and RZ.
Authors - Rajesh Kapoor, Vishal Goyal, Aasheesh Shukla Abstract - This paper presents a systematic review of visual sarcasm detection research with a focus on learning-based approaches. The review examines input representations, feature extraction methods, model architectures, datasets, and evaluation practices reported in the literature. Studies are analyzed with respect to the use of visual information, including images and image–text pairs, along with associated deep learning frameworks such as convolutional, transformer-based, and hybrid models. A structured search strategy, defined inclusion criteria, and an analytical framework are employed to ensure consistency and reproducibility of the review process. The findings are synthesized to identify prevailing research patterns, methodological limitations, and gaps related to visual feature representation, model design, and experimental consistency. By organizing and comparing existing approaches, this systematic review provides a consolidated reference and supports future research in visual sarcasm detection.
Authors - G. Sabera, Kanajam Murali Krishna, N. Sabitha, Tummala Purnima, A. Naresh, Shaik Janbhasha Abstract - Complementing the continuous deep integration of culture and tour-ism, the tourism market environment and visitor consumption demand are constantly evolving, with cultural theme attractions playing an increasingly prominent role in tourism industry development. Tourism resources constitute the basic foundation of scenic destination development, while scientific and effective tour-ism marketing provide a key factor in enhancing market competitiveness and achieving sustainable development. Relying on the cultural resources of the Song Dynasty and martial arts culture, The Song Dynasty of Kungfu City has formed a distinctive thematic identity against the background of cultural–tourism integration and has gained a particular level of market attention. However, its tourism marketing practices still face practical challenges such as brand strengthening, intensified market competition, and changing visitor expectations. This study takes The Song Dynasty of Kungfu City as the research object and analyzes the current status of its tourism marketing, exploring the developmental foundation and practical challenges faced by the scenic area under the contemporary tourism market environment. A qualitative research approach is adopted. Relevant data were collected through field observation and in-depth interviews to review the scenic area’s tourism marketing activities. Based on this, the SWOT analytical framework was applied to systematically examine the strengths, weaknesses, opportunities, and threats associated with the tourism marketing status of The Song Dynasty of Kungfu City.
Authors - Sambhram Pattanayak, Akankasha Kathuria, Shreesha Mairaru Abstract - Reliable prediction of rare critical events is a key enabler for modern risk management, civil protection, and decision support sys tems, yet it remains challenging due to extreme class imbalance and strict requirements on false alarm rates. We present an ensemble learn ing framework that combines a deep feed-forward neural network with a Random Forest classifier, complemented by temporal feature engineering and precision-oriented optimization. The approach addresses three ob jectives: extracting informative temporal and regional patterns from raw event logs, learning calibrated probabilistic scores under severe imbalance using focal loss, and tuning per-region decision thresholds to achieve high precision while preserving acceptable recall. As a case study we apply the framework to air alert prediction over 25 administrative regions across 38 months, totalling 774,125 hourly observations. The system attains 96.13% accuracy, 75.1% precision, and 77.9% recall, demonstrating that high-precision early warning is feasible in strongly imbalanced settings. The framework is applicable to a wide range of safety-critical rare event prediction tasks.
Authors - Neeraj Mathur, Jiby Mariya Jose Abstract - Material Control Systems (MCS) serve as a critical software layer that coordinates material flow by issuing transport commands, tracking material lo-cations, and interfacing with factory equipment and automated handling systems. Although the term may appear to focus primarily on inventory management, it is most commonly used in high-tech environments such as semiconductor manufacturing to describe the software layer that manages, directs, and optimizes the movement, storage, and routing of materials (e.g., wafers and carriers) within a production or logistics environment. This paper presents the development and implementation of a novel Physical AI–based Material Control System. Unlike traditional MCS architectures that rely on rigid rule-based dispatching, the proposed approach leverages a Physical AI plat-form to enable unified and adaptive control across heterogeneous hardware, including stockers, Autonomous Mobile Robots (AMRs), and Overhead Hoist Transport (OHT) systems. By integrating real-time sensor fusion and adaptive motion planning, the proposed system enhances process logistics in semiconductor backend facilities, where high-mix production requires highly dynamic coordination between storage and transport resources.
Authors - Maryam Ghazi Ali, Bindu V. R Abstract - The Internet of Things (IoT) has spread rapidly, significantly increasing several secu-rity vulnerabilities, as traditional detection systems are becoming insufficient to manage the vol-ume and diversity of traffic that characterizes modern networks. The review provides a compre-hensive analysis of recent advances in learning-based intrusion detection systems (IDS), focusing primarily on deep learning, traditional learning, machine learning, and hybrid frameworks. Through critically evaluating a diverse range of state-of-the-art studies, the review explores dif-ferent methodological solutions, data, and performance measurement in the field. The available empirical results show that, although deep learning models are better at identifying complex pat-terns in the data, traditional machine learning algorithms require less computational power. In addition, hybrid and ensemble models often outperform single-method options, but often with high computational cost. The review outlines a number of important challenges, including the issue of class imbalance and the fact that models are not very interpretable. It argues that light-weight and interpretable AI systems should be a priority in future studies, and the gap between theoretical academic frameworks and practical IoT applications would be minimized.
Authors - Aditi Jha, Ravi Shankar Pandey Abstract - Indoor air quality (IAQ) is a frequently overlooked determinant of health in rural villages, where the extensive use of solid fuels for cooking and space-heating generates elevated concentrations of airborne pollutants. This study presents an integrated, low-cost protocol for improving IAQ in rural dwellings, combining real-time environmental monitoring, simplified digital modelling and passive strategies of ventilation and biophilic design. The methodology can be structured into three steps: Conceptual digital twin, feedback interface, ventilation strategies, biophilic integration. Conceptual digital twin is based on the mapping of each dwelling linked to Arduino low-cost, stand-alone sensors (CO₂, PM₂.₅, temperature and relative humidity) that collect data at temporal resolution of one minute. An immediate feedback interface based on visual and/or acoustic indicators that prompt residents to take corrective actions (selective opening of windows, activation of cross-breezes), when exposure thresholds - derived from WHO Air Quality Guidelines - are exceeded. Data-driven natural-ventilation strategies – optimal ventilation windows identified through time-series analysis of sensor data, calibrated to local weather conditions and occupancy profiles to maximise air exchange while minimising heat losses. Biophilic integration implies the introduction of resilient plant species with proven phytoremediation capacity, as Epipremnum aureum) which could reduce CO₂ level, with quantitative guidance on density (two to three plants per main room) and optimal placement. Using low-cost IoT sensors, the protocol monitors environmental parameters and pollutant concentrations in real time. The system targets specific safety and comfort thresholds, aiming to maintain CO₂ levels below 700 ppm and PM₂.₅ below 50 μg/m³ to optimize occupant health (Wu et al, 2021). These thresholds, derived from World Health Organization (WHO) guidelines, are essential to ensure occupant satisfaction and well-being. The ultimate objective is to define a scalable and replicable intervention model capable of combining digital technologies and natural solutions for the sustainable regeneration of fragile territories.
Authors - Atul Pawar, Ganesh Deshmukh, Rajesh Lomte, Sahil Ambokar, Vedant Bankewar, Sanket Ahirrao Abstract - This study explored teachers’ perspectives on the need for an interac tive digital storytelling application to support English language learning at the primary level. Using a teacher-based needs analysis, data were collected through expert review of research instruments and in-depth interviews with English teachers working in international school contexts. The findings reveal that teach ers perceive digital storytelling as an effective approach for enhancing student engagement, motivation, and contextualized language learning. Teachers high lighted the importance of integrating interactive elements such as narrative audio, visuals, game-based tasks, immediate feedback, and reward systems to support vocabulary development, comprehension, and learner autonomy. The results also indicate a need for applications that are curriculum-aligned, age-appropriate, and easy to use in classroom settings. Based on the identified needs, the study pro vides design implications for the development of an interactive digital storytell ing application that combines storytelling and game-based learning principles. This research contributes to the growing body of literature on digital storytelling and offers practical guidance for educators and developers seeking to design ef fective language learning applications.
Authors - Veenu Singh, Saurabh Singhal Abstract - Many AI agents store observations, summaries, and retrieved content in persistent memory, then reuse that material in later planning and action. This creates a failure mode that standard incident response does not fully address. If malicious content is written into durable memory, patching the vulnerable component, rotating credentials, and restarting the agent do not remove the poisoned state. The agent can restart clean, retrieve the same memory, and act on it again. We call this provenance laundering: external-origin content is later consumed with authority it should not have. We formalize this mechanism, show that remediation without memory purge leaves residual impact over time, and examine seven production memory architectures against this threat model. We then define a containment primitive based on provenance metadata, namespace separation, and an inference-time non-escalation gate, and evaluate it with ablation across two frameworks. In our experiments, unauthorized behavior persisted after standard remediation and stopped only after memory purge. These results suggest that incident response for persistent-memory agents should treat purge as a required step rather than an optional cleanup action.
Authors - Nitesh Varman V R, Sanjith Ganesa P, Rahul Veeramachaneni, Korapati Mohan Aditya, Bagavathi Sivakumar Abstract - With the development of cloud computing and big data technology, data handling particularly in handling big data, while also mentioning the dangers of privacy and security violations in delegating the processing of sensitive data to cloud computing has increased. The conventional encryption method that demands the decryption of data for processing, which could result in the leakage of sensitive data and performance inefficiencies are no longer valid. The paper introduces the Optimized Privacy-Preserving Cryptographic Processing Algorithm (OPCPA), which reduces computational complexity through the use of light-weight encryption, adaptive data partitioning, hierarchical key management, and parallel processing of encrypted data. The proposed algorithm is compared to conventional methods using the KDD Cup 1999 dataset and outperforms them in terms of processing speed, throughput, and resource utilization.
Authors - Kashish Goyal, Parteek Kumar, Karun Verma Abstract - The clinical deployment of continuous epileptic seizure forecasting systems is severely hindered by the cold-start problem. Current state-of-the-art deep learning models require patient-specific fine-tuning, necessitating the recording of multiple seizures from a newly admitted patient before the system becomes operational. To achieve immediate clinical utility, forecasting models must operate in a zero-shot capacity. This paper presents a Zero-Shot Cross-Patient Transfer Framework, leveraging the Horizon-Aware Graph Transformer as a universal feature extractor, coupled with the Strict Discipline Protocol as a rigid domain adaptation layer. By anchoring the batch normalization layers to a global source distribution and utilizing a brief interictal calibration phase, the framework mitigates the severe covariate shift inherent in cross-patient electroencephalogram signals. Experimental validation on the CHB-MIT dataset demonstrates a sensitivity of 87.3% with a false alarm rate of 0.28 per hour, achieving a Time-to-Utility of exactly 10 minutes, a 99.9% reduction compared to conventional patient-specific approaches requiring 5-14 days of monitoring. The framework successfully bypasses patientspecific training, offering immediate clinical interoperability while minimizing alarm fatigue through disciplined feature scaling.
Authors - The Quan Trong, Nguyen Trong Nhan Abstract - The integration of large language models (LLMs) into primary educa tion remains limited in low resource, diglossic languages like Sinhala. General purpose models often produce grammatically inconsistent or cognitively over whelming output for young learners. This paper introduces a grade-adaptive, con straint-driven framework for automated Sinhala story and quiz generation target ing Grades 1-5. Building upon an 8-billion-parameter Sinhala-adapted LLaMA 3 model, we apply Quantized Low-Rank Adaptation (QLoRA) using a curated multi-task educational dataset. The system enforces tier-specific linguistic con straints separating conversational Sinhala for lower grades from formal written Sinhala for upper grades while embedding strict structural rules such as con trolled sentence counts (5-6 vs. 7-8) and validated multiple-choice formats (3 vs. 4 options). Evaluation on 100 structured prompts demonstrated substantial im provements over a zero-shot baseline: structural compliance increased from 64% to 93%, and hallucination-related failures decreased from 31% to 8%. Further more, evaluation against 50 unseen real-world classroom prompts yielded a 0.0% crash rate and 95% register adherence, confirming robust qualitative perfor mance. Results demonstrate that diglossia-aware dataset engineering and con straint-aware fine-tuning enable reliable, pedagogically aligned deployment of LLMs in low-resource primary learning environments.
Authors - Maria Veronica Alderete Abstract - This study extends the empirical literature on the relationship between intention to use Artificial Intelligence (AI), the digital divide, and regional ine-qualities in Latin America. To the best of our knowledge, no prior research has examined the AI gap by combining data at the subnational (regional) level across countries. The analysis relies on a sample of 208 regions from 10 Latin American countries. A structural equation model is estimated to assess the relationships among digital infrastructure, socioeconomic factors, and intention to use ChatGPT. The results show that household internet access has a positive and statistically significant effect on intention to use ChatGPT. Data center presence indirectly re-inforces AI intention use through its positive association with internet access, while rurality exerts a negative effect. Education levels and platform-based em-ployment (e.g., Uber) are also positively associated with intention to use AI. The findings suggest that AI adoption is structurally conditioned by foundational digi-tal infrastructure, regional human capital, and exposure to platform-based labor markets. Although the expansion of the gig economy fosters intention to use AI, AI diffusion simultaneously increases the importance of formal education.
Authors - Donald Flywell Malanga, Wallace Chigona Abstract - Mobile Health (mHealth) has been regarded as a potentially transform-ative element for enhancing health service delivery in low-income nations. The effective integration of technology relies on ongoing usage rather than just initial acceptance. While the body of literature on factors influencing continued mHealth use is expanding, post-adoption expectations are proposed as indicators of the success or failure of mHealth implementation. There is limited research on how community health workers' post-adoption expectations influence their inten-tions to persist in using mHealth in developing regions. Consequently, this study explores the effect of post-adoption expectations on satisfaction and ongoing us-age behaviour regarding mHealth among community health workers in Malawi, which represents a developing country context. The research introduces a frame-work that builds upon the expectation confirmation model and incorporates ele-ments from the updated information success model. A mixed-methods conver-gent design was utilised for the study. Data were collected through surveys and semi-structured interviews with community health workers who utilise Cstock. Cstock is an mHealth application that facilitates the ordering of medical supplies via text message. The findings generally support the notion that post-usage use-fulness, along with information quality, system quality, and service quality, pos-itively influences community health workers’ satisfaction and their intention to continue using the Cstock application. The results indicate that the ongoing usage behaviour of mHealth among community health workers is shaped not solely by behavioural expectation beliefs (i.e., post-usage usefulness) but also by objective expectation beliefs, including system quality, service quality, and information quality. Therefore, these findings provide valuable insights to policymakers, practitioners, mHealth developers, and other relevant parties regarding the post-user expectations essential for maintaining future mHealth solutions in develop-ing countries, particularly in Malawi.
Authors - Hemamalini Siranjeevi, Swaminathan Venkatraman, Dharshini V, Gayathri A, Sushma Sri R Abstract - Urban environments generate massive video data from surveillance and mobile sensors, necessitating efficient and intelligent summarization for smart city and transportation systems. This paper proposes a multimodal video summarization framework that moves beyond object-centric analysis toward high-level urban scene understanding. Unlike traditional methods that rely on low-level visual features or isolated object detection, the proposed approach captures contextual relationships and temporal continuity through a multi-stage pipeline. The system integrates multimodal perception, combining deep learning-based object detection, multi-object tracking, and acoustic analysis to preserve entity identities and environmental context. We employ relational inference and motion heuristics to model spatial and semantic interactions, which are then structured into a Dynamic Knowledge Graph (DKG) representing entities, interactions, and temporal events. A semantic synthesis module, powered by a transformer-based language model, generates concise, coherent, and semantically meaningful summaries. This architecture enables scalable, context-aware video summarization adaptable to real-world urban applications.
Authors - Nithin Gattappagari, Lakshmi Sagar S, Reddy Lokesh K, Banu Prakash N, Asritha A, Varalakshmi U, Karthik P, Praveen Kumar Rayani Abstract - Conventional one-time authentication cannot prevent session hijacking after login. This paper proposes a session-level impostor de tection framework based on Siamese learning over mouse dynamics for continuous authentication. The model combines statistical behavioral de scriptors with lightweight temporal modeling (Conv1D+GRU) to learn compact embeddings for open-set verification. It supports one-shot en rollment by comparing a query session against a single verified reference session and stores non-reversible embeddings instead of raw trajectories to improve privacy. We evaluate on Balabit and SAPiMouse under se vere class imbalance using balanced batching, semi-hard negative mining, and focal contrastive loss. The framework achieves AUROC 0.95/0.96, F1 0.80/0.85, and accuracy 0.92/0.93, with 46K trainable parameters and approximately 15ms inference time, indicating practical deployment potential.
Authors - Rishav Kumar Agrawal, Maharshi Bhowmick, Mir Abbas Hussain, Sachin, Vaishali Shinde Abstract - This paper presents a platform for scalable validation, visu alization, and explanation of synthetic tabular data in a rigorous and operationally practical workflow. The system integrates statistical test ing, dimensionality reduction, anomaly detection, and AI-assisted in terpretation into a single analysis pipeline. Through an insurance-data case study, we show that the platform can detect subtle distributional artifacts, support utility–privacy trade-off assessment, and provide in terpretable evidence that is difficult to obtain from isolated univariate checks. We conclude by discussing practical value, current limitations, and directions for future development.
Authors - Rowena Ocier Sibayan, Hazel C. Tagalog, Ronald S. Cordova Abstract - As digital marketing expands in Oman, many organizations struggle to transform large volumes of customer data into actionable insights. This study presents an AI-driven marketing intelligence framework designed for non-technical users, combining automated customer segmentation, sentiment analysis, and personalized recommendations. The framework employs an autoencoder-based feature extraction approach to capture key behavioral patterns, followed by K-Means clustering to define meaningful customer segments (Berahmand et al., 2024). A fine-tuned BERT model analyzes multilingual feedback in Arabic and English to assess customer sentiment (Manias et al., 2023). The framework was evaluated using 12 months of campaign data from 450 customers across multiple Omani businesses. Analysis revealed four distinct customer groups and an overall positive sentiment of +0.55. Controlled A/B experiments demonstrated that AI-guided campaigns outperformed traditional methods, increasing conversion rates by 27%, improving retention by 15%, and generating a threefold return on marketing spend. These results indicate that accessible AI tools can deliver measurable marketing benefits in emerging markets and provide a scalable solution for Gulf-region businesses.
Authors - Maria George Anthraper, Kusuma Sanjaykumar, Sinchana K C, V R, Badri Prasad Abstract - Post-quantum migration is increasingly constrained by time: deployed cryptographic mechanisms may need to be retired, hybridized, or re-keyed before effective security margins fall below asset-specific pol icy thresholds. This timing problem is complicated by uncertainty in clas sical hardware acceleration, algorithmic progress, implementation ero sion, and the arrival of cryptographically relevant quantum comput ers. This paper presents a compact probabilistic pipeline that translates evolving assumptions and evidence into decision-facing migration guid ance. The approach couples three layers: (i) a security-trajectory model that encodes expected margin erosion under scenario parameters, (ii) a latent-regime model that represents partially observed risk states and updates them as evidence changes, and (iii) an option-style timing layer that quantifies the diminishing value of delaying migration as thresholds approach. Outputs are conditional on stated assumptions and are in tended to be reported with sensitivity bands and lead-time constraints. In practice, the pipeline is intended to be re-run as assumptions and evidence evolve, preserving an auditable trail from scenario inputs to in termediate states and final decision artifacts. The primary deliverables are comparative rankings and conservative “start-by” windows under stated assumptions, rather than single predicted break dates.
Authors - Jayalakshmi D, N. Priya Abstract - Online product reviews play a key role in the success or failure of an e-commerce business. Often, online reviews from previous customers provide buyers with detailed advice about the product and help them decide before purchasing a product or service. However, some e-commerce products can be promoted or damaged by fraudsters who post fake reviews. Synthetic Reviews (SRs) have the capacity to deceive consumers, influence purchasing decisions, and lead to losses. Thus, SRs pose a significant risk to e-commerce companies and content creators, undermining consumer loyalty and brand reputation. Specifically, the development of AI-generated fake reviews has made them harder to detect, as they are very similar to human-written texts. This review paper presents a Deep Learning (DL)-based framework that offers comprehensive insight into fraud and synthetic review detection in an evolving e-commerce environment. This review paper discusses the importance of DL for detecting online product fake reviews in sentiment analysis using various approaches based on Graph Convolutional Network (GCN), Hierarchical Graph Attention Network (HGAN) Sentiment Majority Voting Classifiers (SMVC), Convolutional Neural Networks with Bidirectional Long Short-Term Memory Networks (CNN-Bi-LSTMs), and a proposed Optimized Bidirectional Encoder Representation Transformers (OBERT) model. This review paper focused on the importance of DL models, particularly the GCN, for effective identification of fake online reviews. This review paper proposed a DL algorithm for fake review detection in online products and demonstrated its practical application in a real-world scenario.
Authors - Miroslav Cech, Rastislav Roka Abstract - Private 5G networks require a reliable, high-capacity, and secure transport infrastructure, especially in industrial and critical applications. Free Space Optics is a promising solution enabling multi-gigabit transmissions with low latency and increased physical security. The article analyses the possibili ties of integrating FSO technology into Standalone Non-Public Network and Public Network Integrated Non-Public Network architectures and evaluates the role of FSO links as a transport or interconnection layer and their impact on la tency, reliability, and security for 5G services such as eMBB, URLLC, and mMTC. The article then summarizes current research trends, including the use of artificial intelligence and machine learning to optimize FSO-based transmission.
Authors - Tanmoy De, Vimal Kumar, Pratima Verma Abstract - The process of operating modern engineering companies is often compartmentalized due to the straightforward nature of the operations requirements that mani-fest themselves within the realm of the software creation and hardware manufacturing. The absence of integration between Agile practices and Waterfall lifecycles is a waste of administrative resources and delays time-to-market. A hybrid project management SaaS is offered in this project called Converge, which will target the integration of these areas without sacrificing the integrity of the data stored in digital code repositories and physical Bill of Materials (BoM). The adoption of Multi-Modal Documentation, Real-time State Synchronization and IoT-oriented Task Automation have their measures of efficiency of workflow, responsiveness of interface, and cross-domain data consistency. The most recent breakthroughs in Natural Language Processing (NLP) and Computer Vision are used to make the experience more practical; a custom AI pipeline based on the ResNet50 and LSTM networks are able to extract visual storyboards of technical video reports with an impressive F Score of 83.00% (with 79.20% Precision and 86.50% Recall), and Transformer based models (including BART) are able to generate structured textual summaries with the leading ROUGE-L score of 0.42. The system is anchored on a dynamic split-brain architecture to display coherent information in either Kanban boards or Gantt charts as the case arises. Status updates increase exponentially with integrated IoT triggers to computerize the execution of tasks via a direct hardware to software communication. The survey is based on the trade offs between the flexibility of UI, the complexity of the database schema, and the latency of the API to compare the old siloed tools to this new hybrid framework. The future of engineering management relies on new tendencies, such as Hybrid Machine Learning, to predictively allocate resources, cutting the error rates in estimating the effort by three times (MMRE to 0.32) with the help of such dominant historical measures of resources as Lines of Code (feature importance score of 0.73) and automated reporting of resource depend-ency. Finally, it is demonstrated that the suggested architecture with the support of a CNN optimized backend video storage, which will save 61.80% of the time at a small cost of 2.30% BDBR, will save about 60% of time on manual docu-mentation and synchronize assets in real-time with a latency less than 200ms (2 seconds).
Authors - Dennis A. Dizon, Gleen A. Dalaorao Abstract - Access to formal financial services remains limited in many develop ing regions, largely due to economic and infrastructural constraints. This study uses the ISO/IEC 25010 as the evaluation framework to present a software quality assessment of a lending automation system installed in a financial insti tution in Butuan City, Philippines. The evaluation focuses on five essential as pects of software quality: usability, reliability, functional suitability, perfor mance efficiency, and security. Usability surveys using SUS and UMUX-Lite, operational and performance testing, and an evaluation of security and data privacy compliance were used to gather empirical data. According to the results, the system achieved high performance with an average inference latency of 0.208 ms per record, uptime reliability of ≥99.5%, excellent usability with a mean SUS score of 82.5, and full compliance with data privacy regulations. Predictive analytics, specifically the Random Forest model with isotonic calibration, further enhanced the automated loan assessment’s interpretability and reliability. The system proved that it is appropriate for real-world applications and can encourage financial inclusion in resource-constrained environments, as it exceeded the intended benchmarks for each quality model. To guarantee the long-term adoption of lending automation technologies, the study emphasizes the significance of thorough software quality evaluation in addition to predictive accuracy.
Authors - Nita Dimble, Satish Narayanrav Gujar Abstract - The fabrication of components across various industries is accom plished through welding. Although welding has been practiced for more than a hundred years, defects may still occur during the welding process. Thus, indus trial standards require welded joints to be inspected and evaluated to ensure their quality and reliability. Conventional ultrasonic testing (UT) has long been widely used in industry for detecting and evaluating defects in weld specimens. Over the last few decades, advances in sensor technology and signal analysis techniques have significantly advanced ultrasonic testing methods. Advanced methods, such as Time Of Flight Diffraction (TOFD), are more likely to detect linear defects. However, one of the major challenges in applying TOFD to the inspection of austenitic stainless steel (ASS) weldments is noise in the signals. Various signal processing approaches have been developed to suppress such noise, each with its own advantages and limitations. In this work, the focus is placed on the applica tion of multi-level discrete wavelet transform (DWT) decompositions with ‘n’- order wavelet filters for de-noising ultrasonic TOFD A-scan signals. The results show that this approach achieves greater improvement in signal-to-noise ratio (SNR) while requiring less computational time.
Assistant Professor, Assistant Head Research- Department of Information Technology, Vishwakarma Institute of Technology Pune (Affiliated to Savitribai Phule Pune University, Maharashtra, India
Authors - Selvamani K, Saranraj S, Muthusundar SK, Kanimozhi S, Mohana Suganthi N Abstract - The phishing attack through email remains a significant threat to cybersecurity because the attack has become highly advanced, flexible, and widely spread among individuals and organizations. The phishing tricks, such as personalized social engineering, impersonated identities, and malicious links, have evolved fast and made the traditional email security measures less useful. As such, numerous schemes of email phishing attack detection and prevention have been suggested, combining rule-based approaches with machine learning, deep learning, natural language processing, and sophisticated artificial intelligence systems. This review paper provides a detailed discussion of the currently existing email phishing detection and prevention frameworks, their architectural elements, detection schemes, and preventive schemes. The paper systematically evaluates the conventional, machine learning, and more advanced AI-driven methods with their advantages, weaknesses, and flexibility to the changing phishing threats. The synthesis of existing research trends and unaddressed issues makes the review valuable to researchers and cybersecurity practitioners and will allow building solid, scalable, and intelligent email phishing defense systems.
Authors - Mohanad A. Deif, Mohamed A. Hafez, Samar Mouakket, Mohamed Abstract - Polypharmacy and multiple chronic conditions in older adults increase the likelihood of adverse drug events caused by drug–drug interactions (DDIs) and contraindications. Many clinical decision support systems still have limited ability to use patient context and to exchange knowledge in a consistent semantic form. This study presents a hybrid semantic–linguistic framework for automated DDI detection by combining biomedical natural language processing, ontology-based reasoning, and risk scoring. The framework uses BioBERT to extract relevant information and represents it using RDF knowledge graphs, OWL 2 DL ontologies, and SWRL rules. In an evaluation with 1,000 synthetic patient profiles containing RxNorm-coded medications and SNOMED CTencoded diagnoses, the system identified a wide range of clinically important interaction patterns. Statistical testing showed that age and the number of medications were strongly associated with alert frequency (p < 0.001). These findings suggest that the proposed approach can improve medication safety by providing explainable clinical decision support.
Authors - Selvamani K, Kanimozhi S, Muthusundar S K, Saranraj S, Jagadeesh K Abstract - Multi-object tracking (MOT) is a pillar of many computer vision applications such as video surveillance, self-driving and crowd analysis [1]. The main difficulty does not only exist in correct identification of objects but also in consistent identities of objects in different frames when there is occlusion, camera motion and changes in scene density [14]. The paper introduces a highly advanced MOT system, combining the latest YOLOv8x detector with a modified and improved version of the original ByteTrack association system, which is called RobustBoTSORTTracker [14]. With the new detection quality of YOLOv8x and the robustness of low-confidence detections in ByteTrack, augmented with selective improvements of BoT-SORT including camera motion compensation and exponential moving average smoothing, the proposed system demonstrates significant gains on the MOT15 benchmark [7]. Experimental findings indicate a MOTA of 55.6, IDF1 of 72.2, precision of 74.3 and a recall of 95.7, which is significantly higher than the previous baselines under similar conditions.
Authors - Abhay Saxena, Ankit Kumar, Prasant Kumar Sahu Abstract - In this paper, we address the problem of rainy condition classification in order to allow autonomous systems to ensure safe operation in different weather conditions of rain, especially for drones. The earlier weather condition classification methods are inclined towards using big and computationally costly models and cannot thus be employed in real-time on resource-constrained platforms such as drones and edge devices. The motivation behind this work is to introduce a light-weight, efficient deep model which would be able to classify various rain conditions with low computational cost so that it may be deployed efficiently on low-resource devices. We present a novel CNN architecture and evaluate its performance on a collection of seven distinct rain conditions. The models are bench marked against some of the state-of-the-art pretrained models to demonstrate the compromise between efficiency and accuracy. Performance is evaluated using accuracy, inference time, and model size. The model has accuracy 95.93% with least model size 89.09 KB with inference time of 32.664 ms bridging the gap in lightweight and real-time classification.
Authors - Arjun Verma, D.K. Chaturvedi Abstract - Ethylene and vinyl acetate or EVA is a co-polymer used as a substitute for a lot of materials. EVA is a versatile material and it has a lot of applications ranging from electronics, healthcare, footwear, building applications etc. It is mainly used in sport shoes due to its property to absorb shock impact and insulation properties. In addition, EVA is very cost-friendly, produces no odor, and light in weight material. But with overuse of it, the cellular structure chang-es and can affect the shoes' quality and insulation properties. In addition to the cellular structure, the air molecules present in it also collapse. This paper focus-es on the bonding properties of EVA at different temperatures and its dielectric properties under different operating and manufacturing conditions. The upper, bottom, and sides of EVA shoes are exposed to high voltage till the breakdown. The experimentation was done at Electrical HV laboratory on the university campus where a 100kV HVAC testing system is available. This paper presents the tabulated results on the dielectric strength of EVA shoes under varying operating conditions. Additionally, it examines the bonding properties of EVA shoes at different manufacturing temperatures, aiming to predict their lifespan, quality, and finish. The results of these studies are thoroughly discussed within the document.
Authors - Nasika Ijaz, Farooque Azam, Saliha Ejaz, Muhammad Waseem Anwar Abstract - Anomaly detection in dynamic cybersecurity networks has been a promising problem that has been addressed using Graph Neural Networks (GNNs). Today’s network topologies are too difficult to handle for traditional methods; the topologies are too dynamic and complex. The main contribution of this study is the evaluation of three GNN models, Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and RepographGAN, in terms of effectiveness to detect anomalies in dynamic network environments. Conventional anomaly detection techniques such as logistic regression, support vectors machines (SVM) and decision trees are compared against the models. The results demonstrate that RegraphGAN is superior to the other models in terms of accuracy, precision, recall, F1 score, and AUC-ROC, and is thus very effective at identifying anomalies. However, as computing resources are required for it, a compromise between performance and computing resources is found. Despite the lower accuracy of GCN and GAT, these provide more computationally efficient solutions that are appropriate for real time deployment constraints in such resource constrained environments. The findings provide a basis for future research that can optimize scalability and computational efficiency for large scale applications and in the context suggest the use of GNNs for improving cybersecurity systems.
Authors - Aaqib Hakeem, Akshay V, Parthav Mathu, Kotnada Yogesh, Gokul Kannan Sadasivam Abstract - Passwords remain one of the most widely deployed authentication mechanisms despite well-documented vulnerabilities to guessing attacks. Recent deep learning approaches, including Password Guessing using Temporal Convolutional Networks (PGTCN), have demonstrated that sequence modeling can effectively capture structural regularities in leaked password corpora. However, practical performance often depends not only on model architecture but also on training stability, batching strategy, and decoding configuration. In this work, we investigate a partition-aware training and generation pipeline built around a single Temporal Convolutional Network (TCN). Rather than introducing additional architectural complexity, the proposed framework emphasizes standardized preprocessing, balanced data partitioning for stable batching, optimized training procedures, and large-batch probabilistic decoding. A lightweight buffering layer is incorporated to decouple generation from evaluation and improve throughput without requiring distributed training infrastructure. Experiments on multiple real-world leaked password datasets show consistent, though modest, improvements in match rate compared to the PGTCN baseline under same-site evaluation. The results suggest that careful optimization and pipeline-level design can yield measurable gains in candidate ordering while maintaining reproducibility and implementation simplicity.
Authors - Sushant Maji, Sachin B. Jadhav Abstract - The offline signature validation by means of hand written signature is also a significant consideration in the financial, legal and ad- ministrative authentication systems. However, this is particularly challenging because of the inaccessibility of dynamic data of handwriting such as pen-pressure and stroke-velocity, and small training samples. The paper describes a modified version of Siamese-Transformer model called SigNeura, which is also improved with Synthetic Pen Pressure Map Generation to refine the accuracy of the verification in the few-shot learning. The adaptive thresholding, and utilization of the stroke-width estimation is applied to obtain synthetic pressure maps and fill in the dynamic information of the synthetic grayscale signatures with the static grayscale signatures. The Siamese network is optimized on discriminative embeddings and Transformer encoders are optimized on triplet long range contextual dependencies. The analysis conducted on benchmarking data using experiments demonstrates that SigNeura is a significantly superior approach than conventional CNN and Siamese-based approaches with a high level of accuracy and resistance to skilled forgeries.
Authors - Siddharth Joshi, Deepti Kiran, Dev Kumar Yadav, Harshit Sinha, Abhishek Kukreti Abstract - Artificial Intelligence (AI), as a technology, has the potential to change the manner in which organizations are run in the world. However, small and medium-sized enterprises (SMEs) in the Philippines have unique limitations in the use of AI in running the business. The study aims to explore the perceptions of SME managers in the Philippines on the use of AI, with particular reference to the limitations and facilitators in the use of the technology in the business environment. In this study, the researcher interviewed five SME managers from different sectors, including retail, manufacturing, and service sectors. The researcher used thematic analysis to identify the commonalities in the decisions made by the SME managers on the use of AI in the business environment. The study revealed the perceptions of the SME managers on the use of AI in the business environment in the Philippines, with the limitations and facilitators in the use of the technology in the business environment. The study provides practical insights that can guide strategies aimed at strengthening AI readiness and responsible adoption among SMEs in the Philippines.
Authors - Ambrish Kumar Sharma, Swati Namdev Abstract - The volume of data is growing gradually in all around by various sec-tors like e-commerce, stock market, medical, banking, education, social networks (Facebook, Twitter, WhatsApp) and also because of the utilization of the internet and mobile apps. Privacy and security have always been important issues with big datasets. Big datasets may be a collection of facts that has huge and multiplex structure like sensors, emails, weblogs and images. Sensitive information about individuals, which is usually evident or hidden in data, is susceptible to various privacy attacks and high risks of privacy disclosure. Constructing a secure and reliable environment for big dataset requires a distinction between existing approaches so that we can develop a unique solution in future for this that maximizes data privacy. This paper offers insights into the overview of big datasets, big dataset privacy problems and various privacy preservation techniques with comparative study used in big datasets.
Authors - Chaitra Sai Chakravarthi Ganapaneni, Rishik Reddy Cheruku, Venkata Karthik Chamarthi, Venkata Sasidhar Kommu, Malathi P Abstract - Academic websites function as institutional interfaces connecting universi-ties with multiple stakeholder groups. Many institutions face challenges in developing web presences that address usability, accessibility, and stakeholder needs simultaneously. Existing frameworks address isolated dimensions without providing integrated guidance. This research proposes a conceptual design framework for academic websites that integrates Web Con-tent Accessibility Guidelines (WCAG) 2.1 Level AA standards with Nor-man's design principles. The framework consists of four core segments (In-terface Design, Content Accessibility, Technical Performance, User Experience) and four modular add-ons categories (Career and Job Opportunities, Student Projects Showcase, Alumni Community, Industry Collaboration). Framework validation employed dual evaluation methods to ensure both conceptual soundness and stakeholder relevance. Expert judgment assessment (n=5) achieved complete agreement on conceptual soundness. Quantitative user assessment (n=450) across six stakeholder groups showed that framework components achieved good performance levels (mean scores 3.58 to 3.70) and add-ons features received high priority classifications (mean scores 3.62 to 3.80). The framework contributes systematic integration of accessibility standards with design principles and provides guidance for institutions developing academic websites.
Authors - Amulya Saxena, Pratibha Joshi, Adwitiya Sinha Abstract - Global food security and hunger mitigation is one of the major challenges ahead of us. The global population specifically from underdeveloped countries are quite vulnerable to climate change and its impact in abnormal weather conditions and related bad crop leading to food shortages. In today’s globalised world, where a disruption in food supply chain has its own impact on potentially everyone in the planet is a mounting challenge to surpass. The advent of Artificial Intelligence, specifically Computer Vision techniques prove to be extremely helpful in identifying the data pattern of the images of the cultivated land, its anomalies and is insightful in giving the challenges of farming such as affect of bad weather, bad crop prediction, crop distribution etc. The availability of high-quality geospatial data from the satellites such as Sentinel 1/2, Landsat is extremely helpful for advanced ML techniques to provide timely predictions so that a corrective action can be taken in time. This study focuses on an AI-driven approach that predicts land where Rice will be produced vs. no crop land using satellite optical data and its variates, radar logs, weather data and location information.
Authors - Arin Bansal, Pranshu CBS Negi Abstract - The research provides a description of WaveTrust, which is a trust-conscious and energy-efficient routing protocol that is applied to Underwater Wireless Sensor Networks (UWSNs) based on reinforced Q-learning and trust assessment. Neutral trust and network deployment initiate the protocol. During the process of routing data in real time, monitoring of the behavior by the nodes is required with respect to four metrics namely Packet Forwarding Ratio, Energy Behavior Consistency, Latency Observance and Link Quality Indicator. The calculation of the trust is performed according to the direct and indirect observations and makes it possible to determine malicious nodes. Q-learning routing strategy The routing strategy uses weighted rewards according to energy, trust and latency in updating paths such that it favors nodes with high-trust and high-Q-value. The nodes dynamically revise the trust and Q-values about the received feedback during transmission of data. The sink node keeps on broadcasting the global updates of the updated trust thresholds and routing updates. The simulation outcomes have indicated that WaveTrust is better than T-AODV, FuzzyTrust on the basis of packet delivery ratio, detection accuracy, energy consumption, routing overhead and an apparent strength on the capability to work in dynamic and resource limited underwater setting. This creates the impression that WaveTrust is quite flexible protocol and has the capability of providing secure and energy efficient routing in UWSNs.
Authors - Harita Venkatesan Abstract - Fusion-based multimodal models typically assume full modality availability at inference, an assumption that often fails in real-world settings. When a modality is missing, common strategies such as zerovector masking or unimodal fallback can lead to unstable predictions. We propose CORE, an embedding-level framework that completes multimodal representations by integrating original and cross-modally reconstructed embeddings in a fusion-consistent manner prior to fusion. CORE employs lightweight bidirectional cross-modal imagination networks with a cycle-consistency constraint to preserve shared semantic structure across modalities. The model is trained with stochastic modality dropout, enabling unified inference under complete and incomplete modality configurations. Experiments on a multimodal MRI–text classification task for lumbar spine analysis demonstrate that CORE yields more stable predictions than zero-vector masking under severe modality absence, while maintaining comparable performance when all modalities are present.
Authors - Latha N. R., Pallavi G B, Shyamala G., Abubakar Mohammedshafee Matte, Aditya Dinesh Netrakar, Akshara Singa, Akshata Hosmani Abstract - Tourism has become a strategic pillar in China’s transition toward a service-oriented economy, the world cultural heritage sites play an important role in promoting cultural–tourism integration in both China and global. The Dazu Rock Carvings is located in Chongqing, well known by their unique synthesis of Buddhist, and Taoist ideas and their wonderful stone-carving artistry. Recently, the Dazu site received growing number in tourist arrivals and tourism-related revenue due to the regional rapid development as well as the strategic support; however, compared with other outstanding heritage destinations such as the Mogao Grottoes, the reception capacity, product diversity, brand influence, and market performance of Dazu still remain relatively weak. This study adopts a mixed qualitative–quantitative case study design. Data are collected from official tourism statistics and cultural heritage management reports published by national and local authorities in between 2018-2024. Descriptive analysis is used to explore the trends in tourist arrivals, tourism revenue, and related industrial effects. Based on the findings, the study identifies key dimensisons on sustainable development and proposes a marketing path centered on cultural IP empowerment, industrial ecosystem construction, and digital technology-driven innovation, offering practical guidance for similar heritage destinations.
Authors - Deepa V, Atul Anilkumar, Sheena Susan Andrews Abstract - Organizations are rapidly embedding artificial intelligence (AI), including generative AI, into core business functions, but making AI sustainable across environmental, social, and economic dimensions is still challenging, especially when data governance is weak. Public estimates suggest data centres consumed roughly 415 TWh of electricity in 2024 and may rise toward ~945 TWh by 2030 under a base-case trajectory, while reported AI-related incidents reached a new high in 2024. In parallel, industry signals point to fast enterprise adoption of GenAI and ongoing leakage of sensitive information through tools that are not properly governed. Taken together, these patterns increase sustainability risks that are often data-mediated in practiceshaped by data quality and representativeness, provenance and documentation, access control, privacy protections, and end-to-end lifecycle management. Although data governance is widely seen as “foundational” to responsible AI, the concrete mechanisms linking governance capabilities to sustainable AI outcomes, and the ways to measure them, remain dispersed across data management, AI governance, and sustainability research. This paper consolidates peer-reviewed research, public standards, and open industry evidence to position data governance as an operational, measurable capability for Sustainable AI, one that converts sustainability goals into decision rights, lifecycle controls, and auditable outcomes. It contributes: (i) a capability-based taxonomy of data governance tailored to AI lifecycles; (ii) six evidence-grounded impact pathways showing how governance mechanisms influence outcomes (quality and fairness; documentation and auditability; privacy and security; interoperability and reuse; lifecycle stewardship; and sustainability instrumentation); and (iii) the Sustainable AI Data Governance Impact Model (SAI-DGIM), accompanied by testable hypotheses (H1–H8) and a KPI-oriented measurement framework that can be validated using survey constructs, system telemetry, and governance artifacts. For practitioners, the model offers a practical roadmap to embed governance controls directly into AI delivery workflows and treat sustainability metrics as release criteria, not just retrospective reporting. For researchers, it provides aligned constructs, hypotheses, and measurement guidance to rigorously assess how organizational data governance shapes Sustainable AI outcomes at scale.
Authors - Nhat Ho Minh, Long Le Pham Tien, Kien Nguyen Trung, An Pham Nam, Trong Nhan Phan Abstract - The fast increase in the number of unstructured digital documents in academic, industrial, and personal fields has generated an urgent requirement to have intelligent systems to read, arrange and structure document automatically. Traditional document organization methods have traditionally been heavily based on either manual intervention or rule-based methods, neither scalable nor efficient nor error free. The current paper is a multimodal AI architecture to assist document under-understanding and structuring that uses large language models (LLMs) and vision language models to handle heterogeneous document types. The suggested framework does semantic metadata extraction, classification of documents as well as structural organization of textual and visual documents. It uses a modular three-layer design, including an AI processing layer, service oriented backend, and cross platform user interfaces. The system is also developed to support secure functioning in the offline mode, which guarantees the privacy of data and the low-latency processing. The effectiveness of the pro-posed frame-work has been proved through experimental assessment, as it will be seen that classifying documents and categorizing images are very precise. The findings show that multimodal AI is remarkably better in document understanding and automation than traditional systems.
Authors - S M Mazharul Hoque Chowdhury, Ruth West, Stephanie Ludi Abstract - The prediction of liver disease through clinical data analysis faces difficulties because current machine learning methods fail to handle class imbalance and produce incorrect probability assessments. The existing supervised and ensemble methods use fixed decision thresholds together with heuristic weighting methods which results in biased predictions that compromise their ability to achieve balanced performance. The research introduces CAL-WE++ which serves as a Calibration- Weighted Ensemble system that uses an MCC-Optimized Threshold to forecast liver disease. The system employs five-fold stratified cross-validation without data leakage to produce out-of-fold probability results. The model weights are determined by evaluating both the model's ability to distinguish between outcomes (measured through ROC-AUC) and its accuracy in predicting probabilities (assessed through Expected Calibration Error ECE). The Matthews Correlation Coefficient (MCC) serves as the optimization method to determine the final classification threshold which helps to solve class imbalance problems. The Indian Liver Patient Dataset (583 records; 416 diseased, 167 non-diseased) experiments show that CAL-WE++ achieves a mean cross-validation MCC of 0.3474 and a test MCC of 0.4487 which exceeds the performance of baseline classifiers. The model achieves a ROC-AUC score of 0.8140 and a PR-AUC score of 0.9272 while maintaining a low ECE value of 0.0774 which demonstrates strong ability to distinguish between different outcomes and accurate probability assessments. The CAL-WE++ framework offers medical professionals a decision-making system that maintains balance between multiple criteria while delivering dependable outcomes for medical datasets with unequal class distributions.
Authors - Nidhi Pruthi, Rajiv Singh, Swati Nigam Abstract - Automatic Speech Recognition (ASR) systems have achieved remarkable progress through deep learning and Transformer-based architectures, demonstrating near-human accuracy on clean audio. However, their performance degrades significantly under challenging conditions and specialized domains. This comprehensive study evaluates leading commercial ASR APIs—Google Cloud Speech-to-Text, Microsoft Azure Speech Service, AssemblyAI, Deepgram, OpenAI Whisper, Speechmatics, and others—across multiple dimensions: general speech recognition, low-quality forensic-like audio, domain-specific mathematical notation, and personalized speaker adaptation. Results demonstrate 100% accuracy on clean audio for leading systems (Deepgram, Speechmatics, Webkit SpeechRecognition), but dramatic performance degradation to 10− 81% word error rates on forensic-like audio. Analysis of domain-specific challenges reveals that none of the tested commercial ASR systems natively support direct transcription of mathematical symbols and Greek letters into structured symbolic output (e.g., LaTeX). The study identifies critical limitations in robustness, modularity, and domain adaptation, while highlighting promising customization mechanisms including custom vocabularies, language models, and post-processing integration. Performance improvements through speaker personalization ranged from 3% for natural voices to 10% for synthetic voices. Despite notable advances in end-to-end and Transformer-based approaches, ASR systems remain unsuitable for forensic applications and specialized domains without substantial customization and post-processing. Future research must address low-resource performance, linguistic diversity, robustness in extreme noise, and the integration of Large Language Models for semantic understanding. This paper synthesizes recent advances and critical gaps, providing a roadmap for advancing ASR technology in specialized and challenging acoustic environments.
Authors - G Naga sree suma, A. Kamala kumari Abstract - The existence of a growing social media has created complex cyber systems in which vast quantities of interactions constitute substantial issues regarding misinformation, privacy invasion, deception of identities, and destructive behavioural tendencies. The regularity of involvement in this type of big systems requires sophisticated systems that are able to judge the motive of the user, content validity and suspicious activities within real time. Overall interest will be to develop a universal trust calculation system that will be more secure and effective in ensuring privacy and increasing the accuracy of suspicious or malicious users in social sites. The proposed Multi-Layer Federated Trust Framework algorithm is a combination of peer-based user reputation scoring, feature-based content authenticity detection, federated trust indicators aggregation, and anomaly detection with the help of behavioural anomalies. These approaches cooperate with secure aggregation and decentralized learning in removing the uncoded information exposure and enable the computation of trust at scale. The proposed algorithm is experimentally confirmed, and the obtained results are 95.2, 94.1, 93.5, and 93.8, corresponding to a minimum latency of 65 ms and a privacy preservation score of 0.98. The general results indicate a viable and holistic response that adds to secure interactions, blocks malicious acts and encourages trust in the actual social media settings.
Authors - Viet Anh DUONG, Hai Phong BUI, Van Son NGUYEN Abstract - This article presents a neuro-symbolic modelling approach grounded in qualitative data collected from 25 sports clubs located in R´eunion. The study develops a methodological chain linking structured semantic extraction, ontological formalisation in OWL, and agent-based simulation implemented in NetLogo. Rather than modifying structural scenarios across experiments, the design introduces two contrasting organisational sensitivity profiles derived from field observations: a damped profile and a high-gain profile. The structural configurations remain identical between profiles; only the coefficients of the commitment update function vary, ensuring strict experimental comparability. Results indicate that identical structural conditions produce differentiated collective trajectories depending on internal sensitivity parameters. In highgain configurations, dominance-weighted interactions increase variance and generate polarised engagement distributions, whereas damped configurations maintain relative stability across scenarios. These findings suggest that modelling organisational sensitivity parameters is critical for understanding the robustness of digitally mediated collaboration in volunteer-based organisations.
Authors - Allezandra A. Adriano, Joshua Basile Mhar L. Austria, Benjamin L. Carnate, Xamantha Angelique E. Ruiz, Wilben Christie R. Pagtaconan Abstract - Plant diseases due to various pathogens can cause significant loss in yield and productivity. The classification of these diseases is necessary to prevent damage to crops. For classification, a large number of Machine learning and deep learning algorithms have been developed. In this research, five classes of plant leaves and a further fifteen different diseases of these plants (three subcategories for each class) are used for classification. In the proposed methodology, we have used three pre-trained models, namely, ResNet 152v2, InceptionResNetV2, and mGoogleNet, and a custom-built model. This research has used three basic steps to classify the disease categories, namely image preprocessing, image segmentation, and feature extraction. Fifteen thousand plant leaf images have been collect-ed from the online available Kaggle PlantVillage dataset. This data is present in a JPG file format. After the class label distribution of the dataset, the dataset is first trained and then tested on these deep learning models. The label distribution is done in such a way that each of these fifteen categories has 80% training images and 20% validation images. We have used different performance measures, namely, precision, recall, F1-score, and support, to calculate the accuracy. The obtained validation accuracy of ResNet152V2 is 97%, GoogleNet is 96%, Incep-tionResNetV2 is 93%, and a custom-built model is 99%. These results show that the custom-built model has attained the highest accuracy. These models can also be used to build a recommender system framework for the recommendation of fertilizers in the future.
Authors - E. Praveen Kumar, Shankar Lingam. M Abstract - Quantum computers are a major threat to the existing encryption mechanisms. In terms of security, the traditional encryption algorithm depends on complex problems like discrete logarithm as well as factorization of integer. Shor’s algorithm is believed to break the current Public Key Encryption algorithms such as Advanced Encryption Standard (AES). Therefore, several research are carried out in the area of PQC (Post Quantum Cryptography). PQC are based on very complex mathematical problems like Learning with error (LWE) which are robust against quantum computers. The National Institute of Standard and Technology (NIST) has initiated several rounds of standardization process for PQC algorithms, among which NTRU, SABER, CRYSTAL-KYBER are the leading candidates. CRYSTALS-KYBER (Kyber) is the first chosen PQC for standardization. This works explores the recent development in Crystals Kyber implementation and its optimization. Researchers can approach for new research challenges and target for improvement thereby increasing efficiency.
Authors - An Doan Van, Dong Nguyen Doan, Quynh Tran Duc, Thuan Nguyen Quang, Bao Phan Gia, Hieu Doan Minh, Van Khanh Doan Abstract - Performance bottlenecks in Python programs arise from a wide variety of sources, and no single technique reliably catches them all. This paper proposes CodeForge, a sequential three-stage optimization system that unites deterministic Abstract Syntax Tree (AST) inspection, CodeBERT embedding-based retrieval, and Gemini LLM-driven rewriting into one end-to-end pipeline. A rule engine in the first stage pinpoints well-known structural problems; a neural similarity search in the second stage captures harder-to-spot variants; and a Gemini LLM in the third stage performs the actual rewrite, guided by a structured hint block assembled from both preceding stages. Before any result is returned, a configurable validator rejects changes that fail minimum speedup, memory, or complexity criteria. Alongside each accepted optimization, a composite confidence score and a plain-language rationale are produced. Tests on six representative Python patterns show that hint-guided LLM prompting raises successful detection from four to six out of six cases compared with unguided prompting, while the validation layer blocks every harmful transformation in the test suite. The system is available as a FastAPI REST service accepting both raw source text and uploaded .py files.
Authors - Jyotika R. Yadav, Arpit A. Jain Abstract - Internet of Things (IoT) with AI techniques help healthcare industry for patient monitoring and diagnosis. Wearable devices integrated with the Internet of Medical Things (IoMT) have transformed modern healthcare by enabling continuous, real-time monitoring of physiological parameters. The rapid evolution of Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), edge computing, and federated learning has further enhanced the reliability, privacy, and intelligence of such systems. Wearable devices like smart watch or smart sensors help doctors to monitor patient’s daily activities. However, these devices generate huge amount of data on day-to-day basis which makes analysis, monitoring, and diagnosis challenging. Machine Learning or Deep Learning models used for handling such large healthcare data. This survey consolidates and critically reviews recent research works to provide a holistic understanding of the current state-of-the-art in wearable AI-enabled healthcare. A detailed comparative analysis is provided to highlight similarities, differences, strengths, and limitations of existing approaches. Finally, key challenges and future research directions are discussed to guide the development of secure, scalable, and intelligent wearable healthcare solutions.
Authors - Shweta H. Jambukia, Pooja R. Makawana, Prapti G. Trivedi Abstract - This paper presents a case study on a High Voltage Jet (HVJ) electric boiler, focusing on current unbalance (CU) risk identification and mitigation us ing a combined data-analytics and Failure Mode and Effects Analysis (FMEA) framework. Power-quality assessment follows IEC 61000-4-30 for voltage un balance (VU), while CU interpretation refers to NEMA MG-1 and IEEE recom mendations. The proposed workflow integrates (i) instrument classification (Class A for voltage), (ii) time synchronization across logger/PLC/power-quality analyzer to avoid timestamp drift, and (iii) historian-based data pre-processing (outlier cleaning, scaling, and missing-data handling) prior to statistical analysis. Results show an average CU of 6.85% with a standard deviation of 0.48% and a maximum of 15.92%, indicating operational periods exceeding common industry limits. FMEA highlights electrode aging/damage, loose/corroded cable connec tions, and supply power-quality issues as the dominant contributors. Recom mended actions include online phase-current monitoring, improved water-chem istry and blowdown management, and control optimization of the VFD-driven boiler circulation pump (BCP).
Authors - Priyanka K, Vinay R K, Vansh Jain, Vinit Kulkarni Abstract - This study examines the influence of both demographic and natural factors on climate change risk perception in New Zealand. Using data from a nationally representative survey, the analysis applies exploratory factor analysis to construct a composite measure of risk perception, followed by correlation and regression modeling to evaluate the relative contribution of environmental exposure and human characteristics. The findings indicate that while natural factors such as temperature anomalies and extreme weather exposure significantly shape perceived risk, demographic variables including prior disaster experience, trust in scientific institutions, and media exposure exert a stronger overall influence. These results underscore the importance of incorporating social and behavioral dimensions into climate risk assessments and policy development to enhance public engagement and adaptive capacity.
Authors - Piyush Tewari, Rohit, Rujal Agarwal, Yanshi Sharma Abstract - Current Network Intrusion Detection Systems (NIDS) typically analyze traffic as independent tabular records, largely ignoring the relational and temporal dependencies inherent in real-world communications. This limitation is particularly critical for detecting botnets, which rely on coordinated, evolving interactions rather than isolated malicious packets. To address this, we propose a topology-aware framework that models network traffic as a sequence of dynamic communication graphs. Using the CICIDS2017 dataset, we construct sliding-window snapshots where IP addresses form nodes and flows form edges. A spatiotemporal graph neural network is employed to learn evolving structural representations, integrated with a novel learnable gated fusion mechanism that adaptively balances graph-based context with conventional flowlevel statistics. The model is optimized using a hybrid objective combining class-weighted cross-entropy and center loss to mitigate data imbalance. Experimental results demonstrate that the framework achieves improved performance on structural attacks, with botnet detection reaching an AUC of 0.999. Furthermore, the learned gating values reveal a strong model preference for topological features over static statistics, empirically validating that structural context is superior for identifying coordinated threats. These findings underscore the effectiveness of spatiotemporal modeling in enhancing the robustness and interpretability of next-generation NIDS.
Authors - Bikkam Hemanth Reddy, Allu Eswar Kaushik, Tiyyagura Mohit Reddy, Kuruboor Venkatesha Deepak, Bharathi D Abstract - Cloud cover generally limits the applicability of optical remote sensing images for tasks such as agriculture monitoring and disaster relief. Cloud removal is an inherently difficult problem because of the lack of spatial structures and spectral information. To effectively remove cloud contamination from SAR and optical images, we propose a speckle-aware global cross-attention network. The proposed SAR-optical cloud removal network architecture consists of a dual encoder with a global cross-attention mechanism that allows for effective cross-modal interactions. Additionally, a refining module and symmetric decoders improve the accuracy of the reconstructed image. Furthermore, we propose a speckle-aware gating mechanism to perform speckle filter adaptation. The experimental results affirm that our proposed network outperformed the baseline by increasing Peak Signal-to-Noise Ratio(PSNR) by +0.86 dB, Structural Similarity Index Measure(SSIM) by +0.142, and reducing the spectral distortion of the image. Additionally, we noticed a decrease in the Root Mean Square Error(RMSE) and Spectral Angle Mapper(SAM) values. This infers that selective SAR-Optical fusion with an adaptive noise-aware gating mechanism improves the accuracy of cloud-free optical images and optical remote sensing images.
Authors - Azamat Kasimov, Kholida Bekpolatovna Saidrasulova, Zebo Abduxalilovna Shomirova, Shoh-Jakhon Khamdаmov, Safiya Karimova, Dilshoda Akramova, Doniyor Niyozmetov Abstract - Inconsistent medication intake is a major issue, especially for elderly individuals and patients with memory problems [1]. The MediMitra: Voice Enabled Medicine Alert System seeks to tackle this problem by offering an automated, low-cost and user-friendly medication reminder solution. The system combines Raspberry Pi with Optical Character Recognition (OCR) technology to pull medicine names, dosage details and intake times directly from scanned prescriptions. This reduces manual input and user reliance. The information is stored in a central database and connected to a scheduler that sends timely voice alerts through smart speakers or Bluetooth devices. This ensures users receive reliable and easy-to-access reminders. The OCR module is designed for high accuracy in processing printed prescription images by using image preprocessing techniques like noise reduction and thresholding, which helps in effectively extracting key medication details [2]. The system focuses on accessibility, affordability and ease of use in home or clinical settings. Overall, MediMitra provides a useful technological solution to improve medication adherence and supports independent living. It also has potential for future integration with health-monitoring systems.
Authors - Gargi P. Lad, Abhijeet R. Raipurkar Abstract - Remote sensing imagery plays an important role in applications such as environmental monitoring, disaster management, urban planning and agricultural analysis. However, the spatial resolution of such imagery is often limited by sensor constraints, revisit frequency and acquisition cost. To address this challenge, this paper presents RCAN-RS, an enhanced Residual Channel Attention Network for remote sensing image super-resolution. The proposed model extends the RCAN framework through three targeted modifications: a dual-pooling channel attention mechanism, a spectral attention module and an edge enhancement module. These components are designed to improve detail reconstruction while preserving inter-channel consistency and sharp structural boundaries in remote sensing imagery. The model was trained and evaluated on the DOTA dataset un-der a 2× super-resolution setting from 256 × 256 to 512 × 512 pixels. Quantitative evaluation using both conventional image-quality metrics and remote-sensing-oriented measures shows that RCAN-RS achieves a mean PSNR of 34.42 dB, SSIM of 0.9398, Edge Preservation Index of 0.9524, ERGAS of 6.68 and UQI of 0.9846 on the test set. These results demonstrate the effectiveness of integrating attention-guided and edge-aware mechanisms for remote sensing image super-resolution.
Authors - A. Harshavardhan, Krishna Anirudh Gunturi, Rikhil Rao Janagama, Navaneeth Reddy Nalla, N V Abhijeet Mukund, Avire Kaushik Abstract - The Age classification is a critical task in computer vision with widespread applications in fields such as healthcare, security, and autonomous systems. This project presents a deep learning approach for multi-class image classification using feature extraction with the EfficientNetB3 architecture. The model was trained on a dataset that has images labeled according to different age groups, where the images were preprocessed, normalized, and sized to a steady resolution appropriate for EfficientNetB3 input. Data handling was simplified using pandas and ImageDataGenerator, ensuring proper splitting into training, validation, and test sets, with suitable shuffling and augmentation strategies applied to improve generalization. This model influences EfficientNetB3 as a feature extractor, combined with a custom classification head containing Batch Normalization, L1/L2 regularization with Dense layers, Dropout, and a SoftMax output layer. This model was trained using the Adamax optimizer and categorical cross-entropy loss, with performance monitored through accuracy and loss metrics over multiple epochs. Training history was seen to identify the epochs corresponding to the best validation performance. Assessment of the model on the test data-set includes loss, accuracy, confusion matrix, and a comprehensive classification report with precision, recall, and F1-score for each set. The results demonstrate that transfer learning, combined with careful preprocessing and regularization, can achieve robust performance in image classification tasks. This pipeline provides a producible and scalable framework for multi-class image classification and can be extended to other datasets and real-world applications requiring automatic image recognition.
Authors - M Purushotham, Ch Sandeep Kumar, G Jayendra Kumar, Tummalapalli Venkata Jayanth, Akula Manoj Kumar, Purna Saradhi Chinthapalli. Abstract - Wireless Body Area Network (WBAN) is an innovative network system, which consists of numerous wearable or implantable devices that monitors and transmits the physio-logical data. Designing a wearable patch antenna for WBAN is a challenging, because human body is a lossy medium which can absorb and scatter electromagnetic waves, thus leads to degrade of antenna performance. In this paper, the proposed antenna is a wearable 6G microstrip patch antenna, which is very flexible and light with a flat surface, unlike traditional counterparts and these can be placed directly on a human body and are comfortable to wear for long periods. The antenna is designed, simulated, and analyzed using Computer Simulated Technology (CST) studio suite and the design consists of microstrip patch, substrate, feedline, and ground plane. The simulation parameters such as S-Parameter, Voltage Standing Wave Ration (VSWR) and far field radiation are calculated. The results of proposed wearable 6G patch antennas with varying THz frequencies shows, it is very appropriate for WBAN at 2.56THz and 4THz.
Authors - Arathi B K, Rishikeshwar Kumaresan, S Kanagalakshmi, Sathish Kumar S Abstract - Single magnetic resonance imaging (MRI) super‑resolution remains challenging due to the substantial heterogeneity between low‑ and high‑resolution (LR-SR) inputs. This paper presents an ablation analysis of three convolutional neural‑network architectures, namely Conv2D, fully convolutional network (FCN), and U‑Net, combined with four activation functions (Linear, Tanh, ReLU, Leaky ReLU). LR inputs are generated through mean- and max‑pooling with a 6×6 scale factor, enabling evaluation under both smooth and heterogeneous degradation conditions. The results show that U‑Net achieves the highest reconstruction accuracy, reducing MAE by 8% relative to FCN and 10% relative to Conv2D. ReLU-based activations provide stable convergence for shallow models, while the U-Net remains robust across all activation functions. These findings emphasise the importance of selecting appropriate architectures and activation functions to achieve robust and high‑quality MRI super‑resolution in real‑world applications.
Authors - Susmita Adhikary, Aswin Babu VP, Dinesh U, Harish M, Karthik M, Gokul A Abstract - Urban metro rail systems are the key to urban sustainable mobility; however, in spite of the developed technologies, projects regularly experience delays and contractual disputes. These perceived challenges are highly attributed by prior scholarship to matters of the execution phase and restricted illumination is given on the institutional circumstances that form system performance in ICT intensive infrastructure. This paper examines procurement strategy as a govern ance tool that affects the results of digital system integration and sustainability in Indian metro rail projects. Based on statutory performance audit reports and com parative case studies, the analysis indicates that fragmented procurement arrange ments fragment the integration functions to several contracts, leading to coordi nation failure, delayed commissioning, and high claims. Instead, the more coor dinated procurement models with consolidated interdependent systems and de fined integration roles have a better coordination structure and predictable deliv ery. The results indicate that the problem of metro project integration is more of an institutional than a technological problem. This research study adds to the body of knowledge on infrastructure governance by noting the design of procure ment to be one of the determinatives in the realization of effective and sustainable urban transit outcomes.
Authors - Mousami Turuk, Anirudha Page, Tina Chugera, Gauri Desale, Mrunmayee Kulkarni Abstract - Unstructured vehicle traffic (i.e. those containing multiple users such as automobile drivers, pedestrians, cyclists, and even animals) creates a significant challenge for road safety. This work presents the development of a real-time road risk assessment (RRA) system for analyzing dashcam video that combines several computer vision techniques: object detection, semantic segmentation, multi-object tracking, and alert classification, into a unified, integrated processing pipeline. Object detection and multi-object tracking are accomplished using the YOLOv8m and ByteTrack with Kalman Filter algorithms. Additionally, semantic segmentation of the road scene is achieved using a SegFormer-B2. Finally, a segmentation-assisted fusion filter and perspective-aware danger zone are applied (to define each point in the field of view as belonging to a zone with certain levels of risk). The Road Intrusion Risk Score (RIRS) is a composite score that quantifies the severity of intrusion accumulated over time, and provides graduated alert levels. Testing of the system on COCO val2017 and four dashcam videos produced reliable object detections with significantly fewer false positives and very close to real-time performance, demonstrating the potential of the system to improve driver assistance systems in unstructured road environments.
Authors - Mousami Turuk, Harshwardhan Sawant, Jatin Bhate, Yash Gosavi, Sakshi Hosamani Abstract - The global tourism industry has strongly recovered in the post-pandemic era, with border tourism becoming an important platform for regional economic cooperation and cultural exchange. Nong Khai, Thailand, with its geographic advantages and its role as a cross-border hub, has the potential to transform from a transit point into a cultural hub. However, its tourism destination image has been constrained by its perception as a transit point. This study, based on tourism destination image theory and the cognitive-affective frame-work, integrates online review text analysis and semi-structured interviews to analyze the cognitive, emotional, and overall dimensions of Nong Khai's tour-ism image. The results show that Nong Khai’s tourism image reflects a triad of culture, ecology, and cross-border relations. Buddhist culture and the Mekong River are key attractions, but visitors generally have short stays and low spending. 52% of cross-border tourists view it as a transit point to Vientiane. Positive feedback accounts for 65.17%, largely driven by cultural experiences and local service friendliness; negative feedback accounts for 8.86%, focusing on inefficient transportation, poor facility maintenance, and weak cultural symbolism. Based on these findings, this paper suggests four optimization strategies: enhancing the Buddhist cultural experience, improving service systems, strengthening digital marketing, and promoting cross-border collaboration. This study provides empirical evidence for Nong Khai’s efforts to overcome the transit point challenge and offers a model for ASEAN border cities to build differentiated tourism images and sustainable development paths.
Authors - CH VENKATA NARAYANA, G VAMSI KRISHNA, K SIDDARTHA, G MADHU Abstract - Software-Defined Networking (SDN) offers central control and management of traffic flow, which is currently facing increasing security threats from ever-changing and voluminous attacks. The traditional signature-based intrusion detection system is not capable of identifying unknown attacks in real time. The proposed paper suggests a hybrid model for intrusion detection based on CNN and Transformer architectures for Software-Defined Networking. The proposed model will be tested and validated on a real-time testbed based on the Mininet network simulator, Open vSwitch, and Ryu Controller. The proposed model will be trained on the InSDN dataset and will utilize the SHAP technique for model interpretation and will be capable of automatic mitigation of attacks by blocking malicious traffic.
Authors - Abhishek Sawant, Manas Bhansali, Naman Shah, Mandar Kakade Abstract - The integration of Traditional Medicine (TM) into global healthcare standards faces challenges due to the gap between clinician-entered free text and standardized terminologies like ICD-11. In India, AYUSH providers must document diagnoses using local terms while also supporting dual coding across NAMASTE, ICD-11 Traditional Medicine Module 2 (TM2), and ICD-11 Biomedicine. However, most EMRs do not provide unified support for these coding systems. This paper proposes a human-centric, AI-Assisted Terminology Microservice that standardizes diagnosis entry and automates the mapping between these terminologies. The system has a hybrid architecture. A Spring Boot orchestration layer manages the terminology graph and the EMR-facing APIs. Meanwhile, a Python-based machine learning service handles semantic matching from free-text descriptions to concept codes. It uses TF-IDF features and a Linear Support Vector Machine(SVM) classifier that is trained on a Silver Standard Dataset of approximately 3,250 synthetic clinical descriptions covering 75 common health issues,morbidities, with conservative lexical augmentation applied during training to improve robustness. A safety-critical fallback mechanism was designed, which detects predictions with confidence below θ = 0.45 and directs out-ofdistribution inputs to manual search workflows. This ensures a human-in-the-loop model and makes it safe to use in clinical environments. The microservice provides APIs that are EMR-friendly and produce dual-coded FHIR format diagnosis resources. This setup ensures safety along with scalability and interoperability so that it can be deployed in diverse healthcare environment.
Authors - Atharva Sachan, Aryan Gupta, Aditya Varshney, Abhishek Sharma, Surendra Kumar Keshari, Veepin Kumar Abstract - Mobile Health (mHealth) has been regarded as a potentially transform-ative element for enhancing health service delivery in low-income nations. The effective integration of technology relies on ongoing usage rather than just initial acceptance. While the body of literature on factors influencing continued mHealth use is expanding, post-adoption expectations are proposed as indicators of the success or failure of mHealth implementation. There is limited research on how community health workers' post-adoption expectations influence their inten-tions to persist in using mHealth in developing regions. Consequently, this study explores the effect of post-adoption expectations on satisfaction and ongoing us-age behaviour regarding mHealth among community health workers in Malawi, which represents a developing country context. The research introduces a frame-work that builds upon the expectation confirmation model and incorporates ele-ments from the updated information success model. A mixed-methods conver-gent design was utilised for the study. Data were collected through surveys and semi-structured interviews with community health workers who utilise Cstock. Cstock is an mHealth application that facilitates the ordering of medical supplies via text message. The findings generally support the notion that post-usage use-fulness, along with information quality, system quality, and service quality, pos-itively influences community health workers’ satisfaction and their intention to continue using the Cstock application. The results indicate that the ongoing usage behaviour of mHealth among community health workers is shaped not solely by behavioural expectation beliefs (i.e., post-usage usefulness) but also by objective expectation beliefs, including system quality, service quality, and information quality. Therefore, these findings provide valuable insights to policymakers, practitioners, mHealth developers, and other relevant parties regarding the post-user expectations essential for maintaining future mHealth solutions in develop-ing countries, particularly in Malawi.
Authors - Sanchit Prashant Joshi, Vedant Vipin Joshi, Aditya Arun Mangalekar, G.S.Mundada Abstract - Malware classification is essential in cyber-security. It en ables prevention of threats by identifying and accurately classifying ma licious software. It also helps in understanding attacker behavior, enhanc ing threat intelligence, and improving the overall effectiveness of security systems. It is increasingly critical as adversaries now employ obfuscation techniques to avoid detection. Traditional models such as Convolutional Neural Networks (CNN) often struggle with such obfuscated malware samples. In this paper, we propose MalViT, a Vision Transformer (ViT) based framework for robust malware classification using grayscale image representations of malware binaries. The ViT is fine-tuned on a prepro cessed Malimg dataset. To evaluate the robustness of the model, real world obfuscation techniques such as Encryption, Dead code insertion, Random masking and Junk Padding are simulated. ViT model is initially f ine-tuned on the clean samples and later on a combination of the clean and obfuscated samples. Both models are evaluated on the clean and obfuscated test sets to highlight the robustness of the model. The final model achieved a combined accuracy of 94.52 % on both the clean and obfuscated samples. The results demonstrate that MalViT maintains a competitive performance under obfuscation. This project highlights the potential of ViTs in building resilient malware classification systems and provides a foundation for future work in transformer based architecture for malware analysis.
Authors - Samiksha Ganesh Zagade, Arya Mahesh Parkar, Suman Madan Abstract - Advances in Artificial Intelligence, Machine Learning and Internet of Things technologies have enabled wearable devices to sense as well as process and respond to human behaviour in real time. While most wearable devices today are used for health and fitness tracking. Many people face communication challenges such as language barriers, difficulty understanding emotions or social cues, social anxiety and accessibility issues for individuals with hearing or speech impairments. Existing systems often collect data but fail to provide meaningful, real-time assistance during actual human interactions. This research paper presents a literature-based study on AI powered wearable devices designed to support and enhance human communication. The research papers are focusing on intelligent wearables that use multimodal sensors such as microphones, cameras and sensors. These systems apply AI techniques to interpret speech, gestures, facial expressions and emotional signals in real time. The wearable devices considered include everyday consumer-oriented systems such as smart eyewear that provides audio visual assistance and wrist worn wearables that offer haptic feedback. The key focus of this study is to examine how such devices can deliver subtle, real-time support through visual prompts, audio cues or vibrations to improve conversational awareness and user confidence. The expected outcome is to identify current capabilities, practical limitations and design considerations for developing human centric wearable technologies that move beyond passive tracking toward meaningful communication support.
Authors - Adnan Hasan, Ishaan Mishra, Jyotiska Bose, Jada Viswa Chaitanya Sai, Jai Kumar, Kaif Akhter, Ranjita Kumari Dash Abstract - In the present-day context, presentations and computer-based interac tion play a crucial role in various domains, particularly in education and business. Traditionally, users have to rely on physical devices such as mouses, keyboards, or laser. Although these devices meet the basic requirements, they still reveal many limitations regarding mobility, continuity, and dependence on battery life. To address these limitations, hand gesture-based presentation control systems have emerged as a promising solution due to their intuitive, natural, and engaging interaction style. This paper proposes a touchless system that enables users to control common desktop operations as well as presentations in a natural manner using hand gestures captured via a standard webcam. The proposed system lev erages OpenCV for real-time video acquisition and preprocessing, while Medi aPipe framework is employed for hand tracking and landmark extraction. From the experiments, our system can process in real-time with the accuracy of approx imately 92%. As a result, users can seamlessly control slides, use virtual mouse operations, annotate presentation content, and engage with the audience in a more interactive and natural way without physical contact.
Authors - Jyoti Chandel, Meenakshi Mittal Abstract - Internet of Things (IoT) devices are growing in domains because of their reliability and efficiency in monitoring, real-time detection and automated support. However, these IoT systems have also introduced security challenges. These devices are vulnerable to cyber threats, where attackers exploit weak points in the system to steal sensitive information. One of the attacks is the Distributed Denial of Service (DDoS) attack, which disrupts services by overwhelming systems and making them inaccessible to legitimate users. IoT devices are resource-constrained, so reducing feature dimensionality is essential to lower computational overhead and complexity. IoT devices generate data for detecting cyber-attacks, but sharing such data across organizations raises privacy concerns. To address these challenges, the proposed approach is designed in two phases. In the first phase, a hybrid feature selection technique using mutual information, permutation feature importance, and Greedy wrapper-based feature selection with cross-validation is applied to extract relevant features. In the second phase, Federated Learning (FL) is applied to train the model without sharing raw data among clients. Within the FL framework, Random Forest (RF) algorithm is utilized for training due to its robustness and classification capability. The proposed model is evaluated under two data distribution scenarios: mildly non-IID and strongly non-IID conditions. Experimental results demonstrate that the model achieved an accuracy of 99.69% in a mildly non-IID scenario and 98.36% under strongly non-IID conditions, highlighting the effectiveness and reliability of the proposed framework for secure IoT-based DDoS attack detection.
Authors - P.N. Deorukhakar, V.B. Waghmare, I.K. Mujawar, R.Y. Patil Abstract - Convolutional Neural Networks (CNNs) have been widely and successfully applied to bioacoustic and passive acoustic monitoring tasks, including soundscape classification. However, the high dimension ality of CNN-derived embeddings often results in increased computa tional cost and reduced efficiency, particularly in iterative learning frame works such as Active Learning (AL) and in scenarios with limited labeled data. This work addresses these limitations by proposing a method for adapting CNN architectures to generate compact and discriminative em beddings tailored to soundscape data classification. The proposed ap proach leverages transfer learning and incorporates three progressively reduced dense layers (512, 256, and 128 neurons), enabling dimensional ity reduction to be learned intrinsically during network training rather than applied as a post-processing step. Experimental evaluations con ducted across multiple soundscapes datasets under the Active Learning paradigm demonstrate that the proposed embeddings consistently out perform conventional CNN embeddings (CNNE) in terms of classification performance and the efficient use of labeled data. These results indicate that integrating dimensionality reduction directly into CNN training en hances representation quality and robustness, offering an effective solu tion for soundscape data classification in labeling-constrained environ ments.
Authors - Domenico D’Uva Abstract - Indoor air quality (IAQ) is a frequently overlooked determinant of health in rural villages, where the extensive use of solid fuels for cooking and space-heating generates elevated concentrations of airborne pollutants. This study presents an integrated, low-cost protocol for improving IAQ in rural dwellings, combining real-time environmental monitoring, simplified digital modelling and passive strategies of ventilation and biophilic design. The methodology can be structured into three steps: Conceptual digital twin, feedback interface, ventilation strategies, biophilic integration. Conceptual digital twin is based on the mapping of each dwelling linked to Arduino low-cost, stand-alone sensors (CO₂, PM₂.₅, temperature and relative humidity) that collect data at temporal resolution of one minute. An immediate feedback interface based on visual and/or acoustic indicators that prompt residents to take corrective actions (selective opening of windows, activation of cross-breezes), when exposure thresholds - derived from WHO Air Quality Guidelines - are exceeded. Data-driven natural-ventilation strategies – optimal ventilation windows identified through time-series analysis of sensor data, calibrated to local weather conditions and occupancy profiles to maximise air exchange while minimising heat losses. Biophilic integration implies the introduction of resilient plant species with proven phytoremediation capacity, as Epipremnum aureum) which could reduce CO₂ level, with quantitative guidance on density (two to three plants per main room) and optimal placement. Using low-cost IoT sensors, the protocol monitors environmental parameters and pollutant concentrations in real time. The system targets specific safety and comfort thresholds, aiming to maintain CO₂ levels below 700 ppm and PM₂.₅ below 50 μg/m³ to optimize occupant health (Wu et al, 2021). These thresholds, derived from World Health Organization (WHO) guidelines, are essential to ensure occupant satisfaction and well-being. The ultimate objective is to define a scalable and replicable intervention model capable of combining digital technologies and natural solutions for the sustainable regeneration of fragile territories.
Authors - Kritika Singhal, Khushi Madeshiya, Utkarsh Upadhyay, Siser Pratap Singh, Surendra Kr. Keshari, Veepin Kumar Abstract - The integration of artificial intelligence in the academic en vironment has been rapidly growing since late 2022. One of the most widely adopted artificial intelligence tools in engineering is the large lan guage model. By using large language models, the engineering students can generate assignment answers, solve problems through code, and ex plain engineering concepts. Unlike traditional approaches, the large lan guage models can reduce time and simplify the students’ work. Many researchers have worked on artificial intelligence tools, most specifically large language models for engineers. This paper reviews the literature on the application of artificial intelligence tools in the following five areas of engineering education, which include programming, problem-solving in the core subjects, intelligent tutoring, technical writing, and simula tion support. Further, this paper discusses the main challenges of large language models in engineering education. Finally, this article concludes by outlining the future scope of large language models in engineering.
Authors - S.Venkata Rakesh, K.Tarun Kumar, A.Lohith, M.Nirupama Bhatt Abstract - One of the world's most destructive types of malware is ransomware, which results in huge financial and data loss around the globe. Current signature-based detection methodologies do not work for the detection of these types of ransomware because they have no way to identify them prior to their creation (zero-day) or when a variant of the ransomware is created (polymorphic). A behaviour-based ransomware detection methodology that involves the use of CPU Hardware Performance Counters (HPC) in combination with machine learning models for the purpose of detecting ransomware activity is the focus of this project. The following HPC metrics will be used to monitor the execution of a program or application while it is executing: instruction count; cache references; cache hits; branch instructions; and CPU cycles. These low-level architectural events will provide information on the unique behaviour characteristics of a ransomware program or application based on the types of behaviours exhibited by the encryption pro-cesses of a ransomware program or application. A labelled dataset of HPC traces of typical programs/applications will be developed by running both standard pro-grams/applications and ransomware in a controlled testing environment. Several supervised learning models such as Random Forest, Support Vector Machines, and Logistic Regression will be trained and validated on the labelled dataset. The experimental results show that ransomware activity causes significantly different HPC metrics, thereby allowing the correct identification of ransomware. The pro-posed methodology will offer a real-time, graphical user interface for real-time monitoring and graphical representation of the detected ransomware program or application.
Authors - Vasavi Ravuri, S. Lalitha Geetanjali, T. Bhavana Sri, V. Praveen, M. Mokshgna Teja Abstract - Unstructured vehicle traffic (i.e. those containing multiple users such as automobile drivers, pedestrians, cyclists, and even animals) creates a significant challenge for road safety. This work presents the development of a real-time road risk assessment (RRA) system for analyzing dashcam video that combines several computer vision techniques: object detection, semantic segmentation, multi-object tracking, and alert classification, into a unified, integrated processing pipeline. Object detection and multi-object tracking are accomplished using the YOLOv8m and ByteTrack with Kalman Filter algorithms. Additionally, semantic segmentation of the road scene is achieved using a SegFormer-B2. Finally, a segmentation-assisted fusion filter and perspective-aware danger zone are applied (to define each point in the field of view as belonging to a zone with certain levels of risk). The Road Intrusion Risk Score (RIRS) is a composite score that quantifies the severity of intrusion accumulated over time, and provides graduated alert levels. Testing of the system on COCO val2017 and four dashcam videos produced reliable object detections with significantly fewer false positives and very close to real-time performance, demonstrating the potential of the system to improve driver assistance systems in unstructured road environments.
Authors - Nathula Dayarathne, Guhanathan Poravi Abstract - This paper presents a novel methodology for predicting bug severity and priority in software development using machine learning models. The approach involves leveraging a manually curated dataset labelled with the support of industry experts, enabling the incorporation of domainspecific knowledge into feature selection and classification. A K-Means clustering method is initially employed to label the collected data, ensuring accurate grouping and feature extraction. The study identifies and utilizes 16 key features for classification and develops separate models for severity and priority prediction. These models, trained on the expertly labelled dataset, achieve high performance with accuracy metrics above 90%. This study uniquely combines K-Means pre-labelling with expert validation to reduce manual annotation while maintaining model accuracy. The proposed method demonstrates the effectiveness of combining clustering techniques with expert-driven labelling for improving bug management processes. By automating severity and priority classification, this research contributes to enhancing the efficiency and reliability of software development workflows.
Authors - Dhanashri Amol Gore, Satish Narayanrav Gujar Abstract - The wide use of machine learning in the field of medical imaging has caused concern with regard to patient information security, especially when mod els are being trained over multiple health care systems in a distributed manner. Centralized learning requires transferring raw patient data to a central server where there is an extreme risk of data breach and unauthorized access to patients' personal information. Violations of health care regulations (HIPAA and GDPR) can occur in a centralized system because of the transfer of patients' data. Feder ated Learning (FL) addresses these issues by allowing collaborative model de velopment on individual client devices. Therefore, the sensitive patient data will remain at its source institution. This paper provides a thorough comparative study of centralized learning and federated learning methods for detecting pneumonia utilizing chest X-rays from the publicly available Kaggle Chest X-Ray Pneumo nia dataset. Three architecture types (Support Vector Machine (SVM), Convolu tional Neural Network (CNN) and Long Short-Term Memory (LSTM)) were tested in both centralized and federated environments utilizing the FedAvg ag gregation method. Only the model weights were shared between the clients and the central server; therefore, patient data was maintained private through the en tire model training process. Experimental results demonstrated that federated learning produced superior performance than centralized learning for all three architectures (81.1%, 84.6%, and 82.7% for SVM, CNN and LSTM respec tively). The performance metrics for centralized learning were 76.6%, 76.3%, and 81.6%. This superior performance of FL is attributed to the inherent regular ization effect of local class-balancing within the federated clients that reduces the inherent class imbalance in the dataset. Overall, our research demonstrates that FL is not only a viable privacy-preserving solution to centralized training but offers improved generalization in the medical imaging domain with imbalanced classes and is a suitable solution for application in distributed health care envi ronments.
Authors - Vu Nguyen, Chau Vo Abstract - Artificial intelligence (AI) offers powerful capabilities for understanding stakeholder perceptions of corporate sustainability initiatives. This study investigates how AI‑driven sentiment analysis can support sustainable business decision‑making by analyzing secondary data from social media platforms, online re-views, and ESG reports. Using advanced text mining and trans-former‑based sentiment classification techniques, the research identifies patterns in public opinion regarding environmental, social, and governance practices across industries. Topic modeling is applied to detect emerging sustainability themes, while sentiment trend analysis provides actionable insights for improving stakeholder engagement and brand reputation. The findings reveal how organizations can leverage real‑time sentiment data to guide strategic investments, enhance communication strategies, and strengthen commitment to green practices. By integrating AI‑based natural language processing with sustainability management, this research contributes to evidence‑based decision‑making frameworks that enable businesses to respond effectively to societal expectations and achieve long‑term competitive and environ-mental advantages.
Authors - A. Viji Amutha Mary, Ram Swagath B, Ruthresh E, S Jancy, B. Shamreen Ahamed Abstract - As one of the most damaging natural risks, earthquakes require quick situational consciousness for emergency response as well as control. Usual impact assessment methods use larger on field surveys conducted after a disaster, which delays decision making and results in a poor comprehension of damaged zones. An automated analysis pipeline processes high resolution imagery from satellites and land based seismic data to extract land use change patterns, information on terrain change in shape and signs of structural damage. An XGBoost model is then used to classify the extracted spatial features, estimate severe levels and produce dynamic earthquake risk maps. During seismic emergencies, the system supports resource distribution and rescue planning by enabling quicker and more accurate estimation of open areas. The suggested hybrid model greatly outperforms traditional disaster assessment techniques in terms of accuracy, processing speed or scalability, according to experimental evaluation, underscoring its potential to transform preventive earthquake disaster management as well as prepare strategies.
Authors - Shital Waghamare, Swati Shekapure, Girija Chiddarwar, Shital Waghamare Abstract - Public administrations generate extensive administrative data through routine governance processes yet it is weakly based on verifiable evidence. This paper introduces a human-centric policy intelligence system based on execution-level administrative data for provision of accountable and evidence-based policy-making. The framework brings together governance-conscious data ingestion, cryptographic hash-based verification including permissioned blockchain systems to control the integrity of data, cross-domain data harmonisation to overcome administrative silos, and explainable machine learning models to create interpretable supporting insights. The framework is specifically meant as a human-in-the-loop system, maximizing policy foresight, administrative discretion, and accountability to the law. The validation with actual Mahatma Gandhi National Rural Employment Guarantee Act administrative data of the year 2022–2023 proves that the framework can be used to stress the implementation issues and regional inequalities without computerising policy-related decisions. The suggested solution is lightweight, scaled down to fit in the existing open-sector digital infrastructure.
Authors - Zarif Bin Akhtar, Ifat Al Baqee Abstract - Recent advancements in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) have accelerated the capabilities of Computer Vision (CV) across domains such as healthcare, autonomous systems, manufacturing, and intelligent surveillance. This research exploration presents a comprehensive investigation into the technological evolution, practical applications, and ethical implications of modern CV systems. Through a mixed-methods approach combining available knowledge analysis, empirical model evaluation, and expert interviews, the study assesses the performance of state-of-the-art architectures including Convolutional Neural Networks (CNN), Vision Transformers, YOLO-based detectors, and diffusion models—across diverse real-world deployment scenarios. Experimental findings highlight significant improvements in image classification, object detection, semantic segmentation, autonomous navigation, driven by techniques such as transfer learning, ensemble modeling, and model optimization for edge devices. Despite these advancements, challenges persist regarding data quality, interpret-ability, bias, and privacy, particularly in high-stakes environments. The study emphasizes the need for responsible AI governance, human-centric design, and standardized regulatory frameworks to ensure safe and equitable adoption of visual AI. Furthermore, emerging trends such as multi-modal learning, edge-based inference, and foundation models are discussed as catalysts for the next generation of contextaware and resource-efficient CV systems. This work provides a holistic perspective on current CV capabilities, identifies key limitations, and outlines strategic future directions for developing robust, sustainable, and ethically aligned AI-driven vision technologies.
Authors - Fernando Latorre, Ivan Becerro, Nuria Sala Abstract - The rapid expansion of interconnected networks, cloud infrastruc tures, and IoT environments has significantly increased the complexity of mod ern cyber threats, necessitating intelligent and adaptive Intrusion Detection Sys tems (IDS). While machine learning and deep learning techniques have im proved detection accuracy, their black-box nature limits transparency, interpret ability, and analyst trust in high-stakes cybersecurity environments. This lack of explainability hinders forensic validation, regulatory compliance, and resilience against adversarial manipulation. To address these challenges, this paper pre sents a comprehensive survey of Explainable Artificial Intelligence (XAI) tech niques applied to IDS and proposes a reference hybrid architecture that inte grates deep packet inspection, dual-model detection, multi-level explanation mechanisms, adversarial robustness monitoring, and governance-aware logging. The architecture combines high-performance deep learning models with inter pretable components and an explanation fusion engine to balance detection ac curacy with transparency. Furthermore, security implications such as explana tion leakage and adversarial manipulation are analyzed. The study highlights evaluation metrics, open challenges, and future research directions toward trustworthy and transparent cybersecurity systems. The findings emphasize that secure explainability is essential for next-generation IDS deployment in distrib uted and resource-constrained environments.
Authors - Sanjay Kumar, Vimal Kumar, Sahilali Saiyed, Pratima Verma, J.R. Ashlin Nimo Abstract - As online shopping has become increasingly popular, companies must utilize social media to develop and improve customer experience. This study examined customer interaction sentiment regarding online shopping through automated systems to classify comments on social media sites like Twitter, Facebook, and Instagram. This research study compared three machine learning and natural language processing (NLP) techniques: Bidirectional Gated Recurrent Units (GRUs), Random Forests, and Naïve Bayes. Customer reviews were classified as positive, negative, and neutral, as well as analyzed for time-related patterns. The classification framework was constructed by using sentiment analysis, feature extraction, and data preprocessing techniques. Furthermore, model training and performance assessment were executed through Naïve Bayes and Support Vector Machines. Of all the models studied, the Bidirectional GRU had the best performance with an accuracy of 88.08 %. The results of this study help companies understand customer preferences better, and thereby refine their products, services, and marketing techniques.
Authors - Tanmoy De, Vimal Kumar, Pratima Verma Abstract - The traditional centralized insurance operation has contributed to insurance fraud due to poor identity verification systems, fragmented data sharing, and slow manual validation, all leading to substantial financial loss and loss of faith in the integrity of the operation. This research aims to develop a framework for an insurance operation that provides security, transparency, intelligence, and improved fraud detection accu- racy while meeting the privacy and interoperability needs of insurers and their related stakeholders. The proposed framework is a decentralized solution that employs blockchain, self-sovereign identity (SSI), artificial intel- ligence (AI), and federated learning to create secure identity cre- ation processes, transparent policy management, and intelligent verification of claims. The results of experimental evaluations of the proposed framework show that it provides increased fraud detection accuracy, reduced duration of processes, and improvements in transparency over current processes. Thus the suggested method improves efficiency and trust in insurance ecosystems and can be applied to real-world implementations with sophisticated identity integration and extensive blockchain networks.
Authors - A. Viji Amutha Mary, S. Chanikya, S Gayathri Sarayu, S Jancy, B. Shamreen Ahamed Abstract - This work presents an intelligent solution to render residential garages more secure and safer. We developed an IoT platform to address frequent. homeowner issues, including leaving the accidentally. garage door open, looking to know whether it is your car, or noticing anything unusual. At its core, the system uses an internet connected ESP 32 microcontroller through Wi-Fi. In order to identify a vehicle inside, we added an ultrasonic sensor which calculates the proximity to the closest object. A simple magnetic switch, mounted on the garage door indicates when the door is ajar or closed. Our software processes these readings, and puts logic to alert you whether the door has been long or long been opened when your car is not home, which poses a possible security threat. An extra optional motion sensor may also be added. guards in case of any unforeseen motion in the garage.
Authors - Ashavaree Das, Dimo Valev, Sambhram Pattanayak, Prashant Kamal Abstract - The rise of short-form video (SFV) platforms like TikTok, Instagram Reels, and YouTube Shorts has caused a fundamental shift in digital marketing, moving from static images to engaging, multimodal strategies. These platforms utilize advanced "interest-graph" algorithms and unique user interfaces that significantly alter consumer attention spans and engagement patterns. Traditional marketing metrics often fall short in these environments, requiring new approaches that emphasize immediacy and authenticity. This paper explores the key intersection of algorithmic recommendation biases, content memorability, and technical video quality. To address these challenges, we propose an integrated framework that combines advanced blind video quality assessment (BVQA) with generative enhancements to optimize content for short-form formats. By incorporating technical insights from affective computing and recommender systems alongside strategic marketing goals, this study explores how "lo-fi" aesthetics and influencer-led credibility influence consumer attitudes. Our findings offer a roadmap for managing user-generated content (UGC) and algorithmic biases to enhance brand resonance and purchase intent in today's digital economy.
Authors - Sreenath M. V., Abhigna Suresh Babu, Addanki Naga Sai Greeshmitha, C. R. Ananya, Lakshmi M., Mohan S. G. Abstract - Conventional recipe formats interrupt cooking workflows by requiring repeated attention shifts to external devices. This paper presents Beyond the Cookbook, a Mixed Reality (MR) cooking assistant developed for Meta Quest headsets. The system delivers spatially anchored, context-aware instructions using persistent holographic overlays, synchronized narration, and multimodal interaction including voice commands, controller input, and hand-tracking gestures. By integrating passthrough MR and spatial mapping, the assistant enables hands-free and hygienic guidance directly within the user’s kitchen environment. A usability study with twenty-one participants demonstrates high interaction reliability, instructional clarity, and user confidence. The results validate the feasibility of MR-based procedural learning support in domestic settings.
Authors - Dinesh O. Shirsath, Swati V.Sankpal Abstract - This paper presents a hybrid denoising pipeline for multi-channel electrocardiogram (ECG) recordings. First, blind source separation (BSS) isolates putative sources (cardiac, motion, muscle, baseline drift). Second, each separated component is represented sparsely in a suitable transform or learned dictionary; small / noise-dominated coefficients are attenuated and the component reconstructed. Finally, recombination yields a denoised ECG that preserves waveform morphology while suppressing compound, nonstationary noise. The paper describes the mathematical model, algorithmic steps, implementation tips, evaluation metrics, and practical considerations for deployment.
Authors - Aarya Sagar Sonawane, Rutuja Rajendra Thorwat, Shravani Rajeev Deshpande, A. R. Bankar Abstract - A significant security issue facing organizations is insider threats since one has access to privileged information and the behavior of users keeps evolving. Current solutions can be un-explainable, unable to manage new behavior patterns, generate high false positives, and un privacy friendly because of centralized data analysis. To solve these problems, this paper presents EXPLAIN-ITD, an explainable, adaptive and privacy-aware artificial intelligence system to detect insider threats. The framework is an integration of multi-modal data fusion, dual memory continuous learning, explainable risk scoring, human feedback in the loop and federated learning and differential privacy. As the exper imental findings have demonstrated, EXPLAIN-ITD has a better level of accuracy in detection, a lower level of false alarms and better interpreta bility than the current approaches.
Authors - Kamalakar S, Anjan Babu G, Ravi Kumar G Abstract - Artificial intelligence has become an important tool for addressing environmental challenges because it can analyze large datasets, detect patterns, and support accurate predictions. As climate change increases pressure on natural and built environments, organizations adopt AI to improve monitoring, optimize resource use, and inform sustainability decisions, though research remains fragmented. This review examines studies from 2020 to 2025 and assesses how AI is applied in renewable energy, water management, agriculture, waste management and the circular economy, and environmental health and public safety. A major objective of this synthesis is to highlight commonly employed functions by researchers and practitioners such as forecasting, anomaly detection, and operational optimization, alongside emerging model frameworks that strengthen environmental management. While AI offers meaningful benefits, it also presents challenges related to governance, transparency, and the energy demands of large scale models. This review consolidates developments and identifies priorities for future research.
Authors - Anil Kumar Bandani, Anupama Bollampally, Ramesh Deshpande B Saritha, P Rajesh Abstract - Transformer-based models in modern applications struggle with continual learning due to catastrophic forgetting. This paper presents Lapis Whale, a framework that incorporates a Selective Replay Utilization Mechanism (SERUM) to help a model retain previously learned knowledge while adapting to new tasks. The approach leverages a memory buffer to replay representative samples from earlier tasks during training. Experiments on the CIFAR-100 dataset show improved accuracy retention and reduced forgetting compared to standard fine-tuning methods. The framework is computationally efficient and well-suited for real-world adaptive AI systems.
Authors - Suman Kumar Mandal, Wendrila Biswas, Jaydev Mishra Abstract - Glaucoma is an optic neuropathy that is progressive and one of the most common causes of permanent blindness in the world. The retinal fundus images used to diagnose the condition are still time-consuming and highly reliant on the clinical expertise to detect the condition early, before the loss of vision becomes severe. In this experiment, we suggest a deep learning model that will use the ResNet50 architecture to identify retinal fundus images as belonging to one of two categories: Referable Glaucoma (RG) and Non-Referable Glaucoma (NRG). ResNet50 has been selected because it has good feature ex-traction (residual learning and deep convolutional learning). The standard performance measures were used to assess the trained model, such as accuracy, precision, recall, F1-score, and area under the ROC curve. The experimental findings indicate that the suggested approach yields consistent and accurate classification of RG and NRG cases, and it can be used to assist the ophthalmologist in clinical decision-making. The paper demonstrates how deep learning models could assist in further development of early glaucoma detection and mass screening, which, in their turn, can contribute to better patient outcomes and prevention of blindness before its onset.
Authors - S. Jayaraj, G. Anjan Babu, Krishnamurthy Kavitha Abstract - As neurodegenerative diseases like Huntington’s become a global health priority, the difficulty of early and accurate radiological diagnosis remains a significant hurdle. While Deep Learning, predominantly CNNs (Convolutional Neural Networks), offers a clarification for medical image classification, performance is often hindered by the inadequacy of high-grade datasets. This research addresses these limitations by proposing an ensemble deep learning model that integrates ResNet, MobileNet, and VGG16 architectures. By combining these networks, the study achieves enhanced robustness and superior classification accuracy compared to standalone models. This automated framework serves as a vital clinical support tool, enabling faster interventions, improved treatment planning, and a reduction in the global burden of neurodegenerative disorders [10,12].
Authors - Abhijit Dnyaneshwar Jadhav, Prashant G. Ahire, Madhuri Hiwale Abstract - A significant security issue facing organizations is insider threats since one has access to privileged information and the behavior of users keeps evolving. Current solutions can be un-explainable, unable to manage new behavior patterns, generate high false positives, and un privacy friendly because of centralized data analysis. To solve these problems, this paper presents EXPLAIN-ITD, an explainable, adaptive and privacy-aware artificial intelligence system to detect insider threats. The framework is an integration of multi-modal data fusion, dual memory continuous learning, explainable risk scoring, human feedback in the loop and federated learning and differential privacy. As the exper imental findings have demonstrated, EXPLAIN-ITD has a better level of accuracy in detection, a lower level of false alarms and better interpreta bility than the current approaches.
Authors - Tirupathi Rao Dockara, Pradeep Rajagopal Kirthivasan Abstract - Healthcare data scarcity poses significant challenges for machine learning applications in clinical settings, particularly for conditions with limited patient populations. This paper presents a novel quantumenhanced data augmentation framework that addresses this challenge through a three-pillar architecture: Quantum Random Number Generation (QRNG) for true randomness, Statistical AI for intelligent parameter optimization, and Generative AI for clinical interpretability. Our implementation utilizes Bell state quantum circuits to generate genuinely random perturbations, ensuring higher entropy than classical pseudorandom methods. The framework incorporates medical domain knowledge through constraint-aware augmentation, maintaining clinical validity while generating synthetic patient records. Experimental evaluation on the Pima Indians Diabetes dataset (768 samples, 8 features) demonstrates that our quantum-enhanced approach achieves 100% medical constraint compliance while generating high-quality synthetic data. The system provides both command-line and web interfaces, with automatic fallback to classical methods when quantum resources are unavailable. Our contributions include: the first practical application of quantum computing to healthcare data augmentation, an AI-driven optimization system that automatically determines augmentation parameters, integration with large language models for non-technical summarization of validation reports, and a production-ready implementation with comprehensive validation mechanisms. The framework represents a significant advancement in synthetic medical data generation, offering a scalable solution for addressing data scarcity in healthcare AI applications.
Authors - Jyotiprakash Mishra, Sanjay K. Sahay, Swati Mishra, Aman Pathak Abstract - Memory encryption is a key security requirement for modern computing systems, addressing vulnerabilities between CPUs and main memory. Traditional storage encryption is insufficient for protecting volatile data in RAM, which remains exposed to bus sniffing, cold boot attacks, and side-channel exploits. This paper therefore systematically reviews memory encryption techniques focused on hardware-based solutions like Intel Total Memory Encryption (TME), Multi-Key TME, and AMD Secure Memory Encryption, which provide robust protection while minimising performance overhead. The paper also explores integrity protection via Merkle trees and side-channel countermeasures against Differential Power Analysis and Simple Power Analysis attacks. Additionally, granular memory encryption methods for multi-tenant environments are discussed, highlighting their role in isolating sensitive data across security domains. By examining security guarantees and performance trade-offs, we emphasise the necessity of efficient memory encryption to safeguard against evolving threats targeting the CPU-memory interface, providing hardware engineers a foundation for ensuring data confidentiality and integrity.