Assistant Professor, Faculty of Science and Technology, Datta Meghe Institute of Higher Education & Research (Deemed to be University), Maharashtra, India
Authors - Morveen Bamania, Anilkumar Patel, Yassir Farooqui Abstract - Digital image manipulation has grown increasingly sophisticated with the help of advanced editing tools, posing significant challenges to image authenticity verification and raising critical concerns in legal proceedings, social harmony, scientific publishing, forensics and law enforcement, healthcare, and journalism. In this paper we present a novel approach to image forgery detection that combines a Convolutional Autoencoder (CAE) with Error Level Analysis (ELA). Our preprocessing pipeline resizes the input image, passes it through ELA, and then applies denoising; Gaussian denoising is strategically applied to the ELA output rather than the original image, preserving forgery artifacts while reducing noise. The CAE architecture consists of a four-block encoder that compresses input images into a 128-dimensional latent space, a symmetric decoder for reconstruction, and a fully connected classifier for binary forgery detection. The model is trained with a combined loss function: Mean Squared Error (MSE) for reconstruction and Binary Cross-Entropy (BCE) for classification. Experimental evaluation on the CASIA v2.0 dataset demonstrates the effectiveness of the approach, achieving competitive accuracy, precision, recall, and F1-score. The proposed method successfully identifies both copy-move and splicing forgeries by analyzing compression-artifact inconsistencies revealed through ELA.
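As a sketch of the ELA preprocessing step described above, the following Pillow snippet resizes an image, re-compresses it at a fixed JPEG quality, and takes the per-pixel difference; the quality factor of 90 and the 128x128 target size are illustrative assumptions, not necessarily the paper's settings:

```python
from PIL import Image, ImageChops, ImageEnhance
import io

def error_level_analysis(image_file, quality=90, target_size=(128, 128)):
    """Resize, then compare the image against a re-compressed copy.
    Regions edited after the original save recompress differently,
    so their error levels stand out in the difference map."""
    original = Image.open(image_file).convert("RGB").resize(target_size)
    # Re-save at a fixed JPEG quality and reload the compressed copy.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    recompressed = Image.open(buf)
    # The per-pixel absolute difference is the ELA map.
    ela = ImageChops.difference(original, recompressed)
    # Stretch contrast so faint differences become visible.
    max_diff = max(band_max for _, band_max in ela.getextrema()) or 1
    return ImageEnhance.Brightness(ela).enhance(255.0 / max_diff)
```

Gaussian denoising would then be applied to this ELA map (not the source image), as the abstract emphasizes, before feeding it to the CAE.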
Authors - Albert Manamela, Tevin Moodley Abstract - Student retention is critical for academic quality and institutional effectiveness, especially in programs where foundational natural science courses such as mathematics, physics, and chemistry strongly influence progression and pose significant challenges. Early dropout identification in these contexts requires predictive models that are both accurate and interpretable. This study proposes an interpretable machine learning framework for student dropout prediction using academic, financial, and demographic data. It combines cost-sensitive XGBoost with SHapley Additive exPlanations (SHAP), addressing class imbalance without synthetic oversampling to preserve authentic performance patterns. Using a benchmark dataset from the Polytechnic Institute of Portalegre, the model achieved strong performance (Accuracy = 89.6%, F1 = 0.834, AUC-ROC = 0.934). SHAP analyses identified academic engagement, tuition payment status, and scholarship access as key predictors. The findings support transparent early-warning systems and inform policies to improve retention, strengthen support in science-based learning environments, and promote equitable student outcomes.
Authors - Aqdas Hassan, Farooque Azam, and Muhammad Waseem Anwar Abstract - The RISC-V Vector Extension (RVV) enables scalable data-parallel processing through a flexible vector-length architecture, offering a standardized and scalable approach to vector computing. Building on an analysis of existing RVV architectures, this paper presents a focused architectural study and implementation of a basic RVV-based vector extension. Unlike complex, high-performance designs, the proposed architecture prioritizes simplicity and clarity, implementing only essential vector arithmetic and memory instructions. The vector extension is integrated with a single-cycle scalar RISC-V core, and instruction decoding is implemented and verified at the RTL level. Functional simulation confirms the correctness of RVV instruction decoding. This work bridges the gap between theoretical RVV studies and practical, step-by-step hardware implementation.
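To illustrate the decode step the paper verifies at the RTL level, here is a small software model of RVV field extraction for one arithmetic instruction (vadd.vv), following the public RVV encoding layout (funct6 | vm | vs2 | vs1 | funct3 | vd | opcode); this is a behavioral sketch, not the paper's hardware:

```python
OP_V = 0b1010111   # major opcode shared by all vector instructions
OPIVV = 0b000      # funct3 selecting integer vector-vector operations

def decode_rvv(word):
    """Extract RVV fields from a 32-bit instruction word."""
    opcode = word & 0x7F
    vd     = (word >> 7)  & 0x1F
    funct3 = (word >> 12) & 0x07
    vs1    = (word >> 15) & 0x1F
    vs2    = (word >> 20) & 0x1F
    vm     = (word >> 25) & 0x01   # 1 = unmasked
    funct6 = (word >> 26) & 0x3F
    if opcode != OP_V:
        return None                # not a vector instruction
    if funct3 == OPIVV and funct6 == 0b000000:
        return ("vadd.vv", vd, vs1, vs2, bool(vm))
    return ("unknown", vd, vs1, vs2, bool(vm))

# Encode vadd.vv v1, v2, v3 (vd=1, vs2=2, vs1=3, unmasked).
word = (0b000000 << 26) | (1 << 25) | (2 << 20) | (3 << 15) | (OPIVV << 12) | (1 << 7) | OP_V
```

A hardware decoder performs the same field slicing combinationally; simulating it in software first is a common way to cross-check the RTL.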
Authors - Harshwardhan Singh Rathore, Dev Krishan, Amit, Abhinav Vyas, Harshit Choudhary, Kunal Chittora, Vishal Shrivastava, Ram Babu Buri, Akhil Pandey, Mukesh Mishra Abstract - Predicting protein–ligand binding affinity is an essential step in early drug discovery. We present Alchemy, a ligand-centric Graph Neural Network (GNN) framework for predicting binding affinities (pKd/pKi) from molecular graphs and a production-ready web interface for easy inference. Using a curated subset of the PDBbind dataset for prototyping and RDKit for cheminformatics preprocessing [6], we implement a message-passing GCN model with global pooling and train it using MSE regression. We evaluate model performance using RMSE, MAE, Pearson and Spearman correlations, and Concordance Index, and compare against docking scores and classical ML baselines. On the demo subset our model achieves an RMSE of X (±Y) and Pearson r of Z (±W) — results that highlight the potential and limitations of ligand-only approaches. We discuss data-scaling, protein incorporation strategies, ablation studies, and provide reproducible code and a web app to facilitate adoption.
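The pipeline this abstract outlines (message passing over a molecular graph, global pooling, then a regression head) can be sketched in a few lines of NumPy; the toy graph, feature sizes, and random weights below are purely illustrative, not the Alchemy model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy molecular graph: 4 atoms, adjacency matrix with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation matrix

X = rng.normal(size=(4, 8))                # per-atom feature vectors
W1 = rng.normal(size=(8, 16))              # message-passing weights
w_out = rng.normal(size=16)                # regression head

H = np.maximum(A_hat @ X @ W1, 0.0)        # one GCN layer with ReLU
g = H.mean(axis=0)                         # global mean pooling -> graph embedding
pKd_pred = float(g @ w_out)                # predicted binding affinity
```

In practice the atom features come from RDKit featurization, several message-passing layers are stacked, and the weights are trained with MSE against experimental pKd/pKi values.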
Authors - K Bhavish Raju, K Musadiq Pasha, Mohammed Saqlain, Nishaan Padanthaya, Jayashree R Abstract - Neurodegenerative disorders, particularly Alzheimer's Disease (AD), pose a significant challenge for early diagnosis and severity assessment because their symptoms overlap with conditions such as Mild Cognitive Impairment (MCI) and the Cognitively Normal (CN) state. Accurate differentiation between these stages is essential for timely intervention but remains difficult due to the progressive and heterogeneous nature of these disorders. Traditional machine learning models struggle to effectively integrate diverse data modalities, such as medical imaging (MRI) and clinical tabular data. This study proposes a framework based on Hypergraph Neural Networks (HyperGNNs) to enhance multi-modal classification and disease severity modeling. By representing complex patient relationships as hypergraphs, our approach aims to improve diagnostic accuracy, reduce misdiagnosis, and provide an interpretable framework for understanding disease progression. To ensure clinical transparency, we incorporate explainability techniques such as SHAP and Grad-CAM, enabling clinicians to understand the key features influencing predictions. The model will be evaluated on standard neuroimaging datasets and clinical records, offering potential applications in personalized medicine and early intervention strategies.
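A toy sketch of the hypergraph message-passing idea: patients are vertices, hyperedges connect patients sharing a clinical attribute, and one propagation round averages features within each hyperedge and then back to the vertices; the incidence matrix and features below are illustrative assumptions:

```python
import numpy as np

# Incidence matrix: 4 patients x 2 hyperedges.
# Hinc[i, j] = 1 when patient i belongs to hyperedge j
# (e.g. "same MRI atrophy pattern", "same cognitive score band").
Hinc = np.array([[1, 0],
                 [1, 1],
                 [0, 1],
                 [1, 1]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)   # per-patient features

# Vertex -> hyperedge: average the features of each hyperedge's members.
edge_feats = (Hinc.T @ X) / Hinc.sum(axis=0, keepdims=True).T
# Hyperedge -> vertex: each patient averages over its hyperedges.
X_new = (Hinc @ edge_feats) / Hinc.sum(axis=1, keepdims=True)
```

Unlike a plain graph edge, a hyperedge relates an arbitrary group of patients at once, which is what lets the model capture shared multi-modal patterns across cohorts.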
Authors - Zhou Xu, Shuzlina Abdul Rahman, Norlina Mohd Sabri, Rogayah Abdul Majid Abstract - Integrating solar systems into the power grid requires smarter systems capable not only of detecting faults but also of optimizing performance. This paper introduces an innovative hybrid method that couples solar thermal fault detection with adaptive grid control, two aspects that have previously been handled separately. A deep learning U-Net model detects different solar panel fault types, such as single and multiple hotspots, from grayscale thermal images. The identified fault types then feed a reinforcement learning approach (PPO) that makes fault-aware decisions about safe and efficient use of the grid. Critical fault types are given higher priority through a penalty-based reward scheme. The system also includes an immediate safety function that isolates faulty panels without delay, ensuring smooth and efficient operation of the solar energy grid.
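The penalty-based prioritization can be sketched as a reward function of the kind a PPO agent would optimize; the penalty magnitudes and state names below are assumptions for illustration, not the paper's values:

```python
# Critical faults (multi-hotspot) carry larger penalties than single
# hotspots, so the policy learns to isolate them first.
FAULT_PENALTY = {"healthy": 0.0, "single_hotspot": -1.0, "multi_hotspot": -5.0}

def grid_reward(power_delivered, panel_states, isolated):
    """panel_states maps panel id -> fault label from the U-Net detector;
    isolated is the set of panels the agent has taken offline."""
    reward = power_delivered                 # encourage energy output
    for panel, state in panel_states.items():
        if panel in isolated:
            reward -= 0.1                    # small cost for lost capacity
        else:
            reward += FAULT_PENALTY[state]   # penalize un-isolated faults
    return reward
```

With this shaping, leaving a multi-hotspot panel online costs far more than the small capacity loss from isolating it, which is the trade-off the agent must learn.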
Authors - Shubhrat Chaursiya, Toshif Mohammed Shaikh, Snehlata, Sangam Kumari, Vishal Shrivastava, Ram Babu Buri, Vibhakar Pathak Abstract - Computational modeling is essential for studying complex pedestrian dynamics under emergency conditions. This paper presents the design and implementation of an Emergency Evacuation Simulator, a robust grid-based modeling tool developed in Java. The system integrates two core components: an Agent-Based Model (ABM) for pedestrian behavior and Cellular Automata (CA) for modeling dynamic hazard propagation (fire and smoke spread). A key innovation is the use of an optimized Breadth-First Search (BFS) algorithm coupled with 8-directional pathfinding (Chebyshev distance), which significantly improves path efficiency and movement realism compared to traditional 4-directional methods. The simulator incorporates heterogeneous agents with varying vulnerability levels and features local collision avoidance. Experimental analysis confirms the efficiency of the 8-directional pathfinding and provides quantitative metrics on evacuation time, rate, and fatality statistics, offering a valuable platform for enhancing building safety protocols and emergency response strategies.
Authors - Vedant Khade, Supriya Narad Abstract - The global agricultural industry is growing more complex due to climate variability, increasing resource scarcity, and the demand for real-time, localized information. Conventional agricultural extension services, hindered by high operational costs and low expert-to-farmer ratios, often fail to provide timely, personalized advice, especially to smallholder farmers in remote and resource-limited locations. This paper examines the disruptive potential of AI-based farmer support chatbots as a scalable, effective, and ubiquitous response to this problem. Drawing on Natural Language Processing (NLP), advanced Machine Learning (ML) algorithms, and Computer Vision (CV), these chatbots offer 24/7, multilingual, and highly context-sensitive advice on a wide range of issues, including complex crop management protocols, early pest and disease detection, live market price tracking, and navigation of complicated government subsidy programs. The study synthesizes existing technological practices and presents quantitative evidence covering (a) large-scale improvements in farmer profitability, yield maximization, and resource-use efficiency, and (b) a critical analysis of technical and socio-ethical issues, including data bias, limited digital literacy, and accountability. The paper concludes that, although rigorous, responsible, and ethical development remains paramount, farmer support chatbots are not merely instruments of incremental change but stand to radically transform agricultural knowledge dissemination, leading to more resilient, productive, and sustainable global food systems.
Authors - Aditya Kasture, Supriya Narad Abstract - The software development industry is changing at an unprecedented pace: AI code generation tools have increased developer productivity by up to 55 percent, but they have also introduced exponential growth in code complexity and technical debt. Traditional code review techniques are monotonous, infrequent, and time-consuming, and cannot validate the enormous gains evident in an AI-oriented development cycle. This paper discusses the structure, performance, and role of AI-Accelerated Code Review (AACR) platforms as the last mile of quality control that resolves this productivity paradox. We propose an AACR system built on a Multi-Agent Architecture that combines Large Language Models (LLMs) for contextual reasoning, custom machine learning (ML) models for security and performance evaluation, and code graph analysis for understanding codebase composition. We conclude that AACR platforms can decrease median code review time by 40-60 percent, while defect detection accuracy can also rise compared with manual analysis and review. The article also engages with ongoing debates concerning the unlawful use of AI-generated data and the increasing adoption of AI.
Authors - V. R. Badri Prasad, Shrujana Patil, Shreeraksha, Prathik S. Hanji, S Vikas Vathsal Abstract - Traditional object detection systems are limited in their ability to capture the complexity of urban scenes, often overlooking critical spatial, contextual, and functional relationships. This paper introduces Urban Scene Intelligence, a Semantic Anchor-and-Expand (SAE) framework that integrates multi-modal perception, structured scene graph construction, and controlled narrative generation to produce grounded descriptions of urban environments. The proposed modular architecture incorporates OWL-ViT for open-vocabulary object detection, SegFormer for semantic segmentation, DepthAnything for spatial depth estimation, Qwen2-VL for attribute enrichment, and OCR for extracting textual context. Unlike end-to-end multimodal models, the three-stage pipeline explicitly separates visual perception, symbolic reasoning, and language generation, thereby improving interpretability and factual grounding. By unifying heterogeneous visual cues into a symbolic representation and generating context-aware descriptions from it, the SAE framework establishes a transparent and extensible approach to urban scene understanding in complex real-world environments.