Authors - Dhanashri Amol Gore, Satish Narayanrav Gujar Abstract - The wide use of machine learning in medical imaging has raised concerns about patient information security, especially when models are trained across multiple healthcare systems in a distributed manner. Centralized learning requires transferring raw patient data to a central server, creating a serious risk of data breaches and unauthorized access to patients' personal information. A centralized system can also violate healthcare regulations (HIPAA and GDPR) because of this transfer of patient data. Federated Learning (FL) addresses these issues by allowing collaborative model development on individual client devices, so that sensitive patient data remains at its source institution. This paper provides a thorough comparative study of centralized and federated learning methods for detecting pneumonia in chest X-rays from the publicly available Kaggle Chest X-Ray Pneumonia dataset. Three architectures (Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM)) were tested in both centralized and federated environments using the FedAvg aggregation method. Only the model weights were shared between the clients and the central server; therefore, patient data remained private throughout the entire training process. Experimental results demonstrated that federated learning outperformed centralized learning for all three architectures (81.1%, 84.6%, and 82.7% for SVM, CNN, and LSTM, respectively, versus 76.6%, 76.3%, and 81.6% for centralized learning). This superior performance of FL is attributed to the regularization effect of local class balancing within the federated clients, which mitigates the inherent class imbalance in the dataset.
Overall, our research demonstrates that FL is not only a viable privacy-preserving alternative to centralized training but also offers improved generalization in the medical imaging domain with imbalanced classes, making it a suitable solution for deployment in distributed healthcare environments.
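The FedAvg aggregation described in this abstract reduces to a size-weighted average of client weight vectors. The sketch below illustrates that step in plain Python; the client weights and sample counts are invented for illustration, and a real deployment would average full model parameter sets after each round of local training.

```python
# Minimal FedAvg aggregation sketch (illustrative data, no ML framework).
# Each client contributes its locally trained weight vector and its local
# sample count; raw patient images never leave the client.

def fedavg(client_weights, client_sizes):
    """Weighted average of client weight vectors by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += (n / total) * weights[i]
    return global_weights

# Three hypothetical hospitals with different amounts of local data:
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 300, 100]
print(fedavg(clients, sizes))  # → [3.0, 4.0], dominated by the largest client
```

The server then redistributes the aggregated weights to all clients for the next local training round, which is what keeps the data at its source institution.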
Authors - Vu Nguyen, Chau Vo Abstract - Artificial intelligence (AI) offers powerful capabilities for understanding stakeholder perceptions of corporate sustainability initiatives. This study investigates how AI‑driven sentiment analysis can support sustainable business decision‑making by analyzing secondary data from social media platforms, online reviews, and ESG reports. Using advanced text mining and transformer‑based sentiment classification techniques, the research identifies patterns in public opinion regarding environmental, social, and governance practices across industries. Topic modeling is applied to detect emerging sustainability themes, while sentiment trend analysis provides actionable insights for improving stakeholder engagement and brand reputation. The findings reveal how organizations can leverage real‑time sentiment data to guide strategic investments, enhance communication strategies, and strengthen commitment to green practices. By integrating AI‑based natural language processing with sustainability management, this research contributes to evidence‑based decision‑making frameworks that enable businesses to respond effectively to societal expectations and achieve long‑term competitive and environmental advantages.
Authors - A. Viji Amutha Mary, Ram Swagath B, Ruthresh E, S Jancy, B. Shamreen Ahamed Abstract - As one of the most damaging natural hazards, earthquakes demand rapid situational awareness for emergency response and control. Conventional impact assessment methods rely on large field surveys conducted after a disaster, which delay decision making and yield a poor understanding of damaged zones. An automated analysis pipeline processes high-resolution satellite imagery and ground-based seismic data to extract land-use change patterns, terrain deformation information, and signs of structural damage. An XGBoost model then classifies the extracted spatial features, estimates severity levels, and produces dynamic earthquake risk maps. During seismic emergencies, the system supports resource distribution and rescue planning by enabling quicker and more accurate identification of open areas. Experimental evaluation shows that the proposed hybrid model greatly outperforms traditional disaster assessment techniques in accuracy, processing speed, and scalability, underscoring its potential to transform preventive earthquake disaster management and preparedness strategies.
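The severity classification step above boils down to gradient boosting over extracted spatial features. As a stand-in for the paper's XGBoost model, the toy sketch below boosts one-feature decision stumps on residuals; the "structural damage score" feature, labels, and hyperparameters are all invented for illustration.

```python
# Toy gradient boosting with single-feature decision stumps, standing in
# for the XGBoost severity classifier. Data and thresholds are illustrative.

def fit_stump(x, residuals):
    """Best threshold split on one feature, minimizing squared error."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]  # (threshold, left_value, right_value)

def boost(x, y, rounds=20, lr=0.3):
    """Fit stumps sequentially, each on the current residuals."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, resid)
        stumps.append((t, lv, rv))
        pred = [p + lr * (lv if xi <= t else rv) for xi, p in zip(x, pred)]
    return stumps

def predict(stumps, xi, lr=0.3):
    return sum(lr * (lv if xi <= t else rv) for t, lv, rv in stumps)

# Hypothetical "structural damage score" feature vs. severity label (0/1):
x = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
y = [0, 0, 0, 1, 1, 1]
model = boost(x, y)
print(round(predict(model, 0.85), 2))  # → 1.0 (severe)
```

Real XGBoost adds regularization, second-order gradients, and multi-feature trees, but the residual-fitting loop is the same core idea.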
Authors - Shital Waghamare, Swati Shekapure, Girija Chiddarwar Abstract - Public administrations generate extensive administrative data through routine governance processes, yet policy-making remains weakly grounded in verifiable evidence. This paper introduces a human-centric policy intelligence system based on execution-level administrative data to support accountable, evidence-based policy-making. The framework brings together governance-conscious data ingestion, cryptographic hash-based verification backed by permissioned blockchain systems to safeguard data integrity, cross-domain data harmonisation to overcome administrative silos, and explainable machine learning models to produce interpretable supporting insights. The framework is explicitly designed as a human-in-the-loop system, strengthening policy foresight while preserving administrative discretion and accountability to the law. Validation with actual Mahatma Gandhi National Rural Employment Guarantee Act administrative data for 2022–2023 demonstrates that the framework can surface implementation issues and regional inequalities without automating policy decisions. The proposed solution is lightweight and designed to fit within existing public-sector digital infrastructure.
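The hash-based integrity verification mentioned above can be sketched as a simple hash chain: each record's digest covers both its payload and the previous digest, so any tampering breaks every subsequent link. The record payloads below are invented examples, and a permissioned blockchain would add consensus and access control on top of this basic mechanism.

```python
import hashlib

# Hash-chain integrity check: a minimal sketch of the verification layer,
# with made-up administrative record payloads.

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def record_hash(payload, prev_hash):
    """SHA-256 over the previous hash concatenated with the record payload."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], GENESIS
    for payload in records:
        h = record_hash(payload, prev)
        chain.append((payload, h))
        prev = h
    return chain

def verify_chain(chain):
    prev = GENESIS
    for payload, h in chain:
        if record_hash(payload, prev) != h:
            return False  # payload no longer matches its stored digest
        prev = h
    return True

ledger = build_chain(["job-card:123 wages=2040", "job-card:124 wages=1980"])
print(verify_chain(ledger))  # → True
ledger[0] = ("job-card:123 wages=9999", ledger[0][1])  # tamper with payload
print(verify_chain(ledger))  # → False
```

Because each hash depends on its predecessor, auditors only need the final digest to detect alteration anywhere earlier in the ledger.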
Authors - Zarif Bin Akhtar, Ifat Al Baqee Abstract - Recent advancements in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) have accelerated the capabilities of Computer Vision (CV) across domains such as healthcare, autonomous systems, manufacturing, and intelligent surveillance. This research presents a comprehensive investigation into the technological evolution, practical applications, and ethical implications of modern CV systems. Through a mixed-methods approach combining analysis of available knowledge, empirical model evaluation, and expert interviews, the study assesses the performance of state-of-the-art architectures, including Convolutional Neural Networks (CNN), Vision Transformers, YOLO-based detectors, and diffusion models, across diverse real-world deployment scenarios. Experimental findings highlight significant improvements in image classification, object detection, semantic segmentation, and autonomous navigation, driven by techniques such as transfer learning, ensemble modeling, and model optimization for edge devices. Despite these advancements, challenges persist regarding data quality, interpretability, bias, and privacy, particularly in high-stakes environments. The study emphasizes the need for responsible AI governance, human-centric design, and standardized regulatory frameworks to ensure safe and equitable adoption of visual AI. Furthermore, emerging trends such as multi-modal learning, edge-based inference, and foundation models are discussed as catalysts for the next generation of context-aware and resource-efficient CV systems. This work provides a holistic perspective on current CV capabilities, identifies key limitations, and outlines strategic future directions for developing robust, sustainable, and ethically aligned AI-driven vision technologies.
Authors - Fernando Latorre, Ivan Becerro, Nuria Sala Abstract - The rapid expansion of interconnected networks, cloud infrastructures, and IoT environments has significantly increased the complexity of modern cyber threats, necessitating intelligent and adaptive Intrusion Detection Systems (IDS). While machine learning and deep learning techniques have improved detection accuracy, their black-box nature limits transparency, interpretability, and analyst trust in high-stakes cybersecurity environments. This lack of explainability hinders forensic validation, regulatory compliance, and resilience against adversarial manipulation. To address these challenges, this paper presents a comprehensive survey of Explainable Artificial Intelligence (XAI) techniques applied to IDS and proposes a reference hybrid architecture that integrates deep packet inspection, dual-model detection, multi-level explanation mechanisms, adversarial robustness monitoring, and governance-aware logging. The architecture combines high-performance deep learning models with interpretable components and an explanation fusion engine to balance detection accuracy with transparency. Furthermore, security implications such as explanation leakage and adversarial manipulation are analyzed. The study highlights evaluation metrics, open challenges, and future research directions toward trustworthy and transparent cybersecurity systems. The findings emphasize that secure explainability is essential for next-generation IDS deployment in distributed and resource-constrained environments.
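One model-agnostic explanation technique of the kind such surveys cover is permutation feature importance: shuffle one input feature and measure how much detection accuracy drops. The sketch below uses a toy linear "detector" and invented traffic features (packet rate, payload entropy) purely as stand-ins; it is not the paper's architecture.

```python
import random

# Permutation feature importance: a model-agnostic XAI probe.
# The detector, feature names, and data below are toy stand-ins.

def detector(x):
    """Toy IDS score: weights packet rate (x[0]) over payload entropy (x[1])."""
    return 1 if 0.8 * x[0] + 0.2 * x[1] > 0.5 else 0

def accuracy(data, labels):
    return sum(detector(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature, seed=0):
    """Accuracy drop after shuffling one feature column across samples."""
    rng = random.Random(seed)
    base = accuracy(data, labels)
    shuffled = [row[feature] for row in data]
    rng.shuffle(shuffled)
    permuted = [list(row) for row in data]
    for row, v in zip(permuted, shuffled):
        row[feature] = v
    return base - accuracy(permuted, labels)

data = [[0.9, 0.1], [0.8, 0.9], [0.1, 0.2], [0.2, 0.8]]
labels = [detector(x) for x in data]
# Nonnegative drop; a larger drop means the feature mattered more:
print(permutation_importance(data, labels, 0))
print(permutation_importance(data, labels, 1))
```

For this toy detector the lightly weighted entropy feature never flips a decision, so its importance is zero, which is exactly the kind of signal an analyst can audit.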
Authors - Sanjay Kumar, Vimal Kumar, Sahilali Saiyed, Pratima Verma, J.R. Ashlin Nimo Abstract - As online shopping has become increasingly popular, companies must utilize social media to develop and improve customer experience. This study examined customer interaction sentiment regarding online shopping through automated systems that classify comments on social media sites like Twitter, Facebook, and Instagram. This research compared three machine learning and natural language processing (NLP) techniques: Bidirectional Gated Recurrent Units (GRUs), Random Forests, and Naïve Bayes. Customer reviews were classified as positive, negative, or neutral, and analyzed for time-related patterns. The classification framework was constructed using sentiment analysis, feature extraction, and data preprocessing techniques. Furthermore, model training and performance assessment were executed through Naïve Bayes and Support Vector Machines. Of all the models studied, the Bidirectional GRU had the best performance, with an accuracy of 88.08%. The results of this study help companies understand customer preferences better, and thereby refine their products, services, and marketing techniques.
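Of the techniques compared above, Naïve Bayes is simple enough to sketch end to end: tokenize, count word frequencies per class, and score new comments with Laplace-smoothed log probabilities. The toy review corpus below is invented; a real pipeline would use the cleaned social media data the study describes.

```python
import math
from collections import Counter, defaultdict

# Tiny multinomial Naive Bayes sentiment classifier, illustrating the
# preprocessing + classification pipeline on an invented toy corpus.

def tokenize(text):
    return text.lower().split()

def train(docs):
    """docs: list of (text, label) pairs."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(model, text):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_score = None, -math.inf
    for label, cc in class_counts.items():
        score = math.log(cc / total_docs)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)  # Laplace
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [("fast delivery great quality", "positive"),
        ("love the product great price", "positive"),
        ("terrible service never again", "negative"),
        ("broken item terrible refund", "negative")]
model = train(docs)
print(predict(model, "great product fast service"))  # → positive
```

The Bidirectional GRU that won the comparison additionally models word order in both directions, which is what this bag-of-words baseline ignores.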
Authors - Tanmoy De, Vimal Kumar, Pratima Verma Abstract - The traditional centralized insurance operation has contributed to insurance fraud due to poor identity verification systems, fragmented data sharing, and slow manual validation, all leading to substantial financial loss and loss of faith in the integrity of the operation. This research aims to develop a framework for an insurance operation that provides security, transparency, intelligence, and improved fraud detection accuracy while meeting the privacy and interoperability needs of insurers and their related stakeholders. The proposed framework is a decentralized solution that employs blockchain, self-sovereign identity (SSI), artificial intelligence (AI), and federated learning to create secure identity creation processes, transparent policy management, and intelligent verification of claims. The results of experimental evaluations of the proposed framework show that it provides increased fraud detection accuracy, reduced duration of processes, and improvements in transparency over current processes. Thus, the suggested method improves efficiency and trust in insurance ecosystems and can be applied to real-world implementations with sophisticated identity integration and extensive blockchain networks.
Authors - A. Viji Amutha Mary, S. Chanikya, S Gayathri Sarayu, S Jancy, B. Shamreen Ahamed Abstract - This work presents an intelligent solution to make residential garages safer and more secure. We developed an IoT platform to address common homeowner issues, including accidentally leaving the garage door open, wanting to know whether the car is home, or noticing anything unusual. At its core, the system uses an internet-connected ESP32 microcontroller communicating over Wi-Fi. To detect a vehicle inside, we added an ultrasonic sensor that measures the distance to the nearest object. A simple magnetic switch, mounted on the garage door, indicates whether the door is open or closed. Our software processes these readings and applies logic to alert you when the door has been open for too long, or is open while your car is not home, which poses a potential security threat. An optional motion sensor can also be added to guard against any unexpected movement in the garage.
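The alert logic described above can be sketched independently of the firmware: given a door state, an ultrasonic distance reading, and how long the door has been open, decide whether to raise an alert. The thresholds below are invented placeholders; on the actual ESP32 the same decision function would run over live sensor readings.

```python
# Alert-logic sketch for the garage monitor, assuming the ESP32 firmware
# supplies (door_open, distance_cm, open_duration_s) readings periodically.
# Both thresholds are illustrative, not taken from the paper.

CAR_PRESENT_CM = 150     # ultrasonic reading below this => car is parked
DOOR_OPEN_LIMIT_S = 300  # door open longer than 5 minutes => alert

def evaluate(door_open, distance_cm, open_duration_s):
    """Return an alert string, or None if everything looks normal."""
    car_present = distance_cm < CAR_PRESENT_CM
    if door_open and not car_present:
        return "ALERT: door open while car is away"
    if door_open and open_duration_s > DOOR_OPEN_LIMIT_S:
        return "ALERT: door left open too long"
    return None

print(evaluate(True, 400, 30))   # car away, door open → first alert
print(evaluate(True, 80, 600))   # car home, door open > 5 min → second alert
print(evaluate(False, 80, 0))    # all clear → None
```

Keeping the decision rule in one pure function like this makes it easy to test on a workstation before flashing it to the microcontroller.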
Authors - Ashavaree Das, Dimo Valev, Sambhram Pattanayak, Prashant Kamal Abstract - The rise of short-form video (SFV) platforms like TikTok, Instagram Reels, and YouTube Shorts has caused a fundamental shift in digital marketing, moving from static images to engaging, multimodal strategies. These platforms utilize advanced "interest-graph" algorithms and unique user interfaces that significantly alter consumer attention spans and engagement patterns. Traditional marketing metrics often fall short in these environments, requiring new approaches that emphasize immediacy and authenticity. This paper explores the key intersection of algorithmic recommendation biases, content memorability, and technical video quality. To address these challenges, we propose an integrated framework that combines advanced blind video quality assessment (BVQA) with generative enhancements to optimize content for short-form formats. By incorporating technical insights from affective computing and recommender systems alongside strategic marketing goals, this study explores how "lo-fi" aesthetics and influencer-led credibility influence consumer attitudes. Our findings offer a roadmap for managing user-generated content (UGC) and algorithmic biases to enhance brand resonance and purchase intent in today's digital economy.
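Blind video quality assessment of the kind proposed above works without a pristine reference signal. One classic ingredient is a no-reference sharpness proxy, the variance of a Laplacian filter over a frame: sharp edges produce large filter responses, blur flattens them. The 4x4 "frames" below are toy stand-ins for decoded video, and production BVQA models combine many such cues with learned components.

```python
# Minimal no-reference (blind) sharpness proxy: variance of a 4-neighbor
# Laplacian over a grayscale frame. The tiny 2D lists stand in for frames.

def laplacian_variance(img):
    """Higher variance of the Laplacian response => sharper frame."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 255, 255]] * 4    # hard vertical edge
blurry = [[0, 85, 170, 255]] * 4  # smooth ramp across the same range
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # → True
```

A pipeline like the proposed framework could use such a score to flag "lo-fi" uploads whose technical quality, as distinct from their aesthetic, falls below a platform threshold before applying generative enhancement.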