Authors - Chinmayee Padhy, Himansu Mohan Padhy, Pranati Mishra, Nabin Kumar Nag Abstract - Establishing an institution's excellence requires measuring its innovation and research accomplishments. Efficient tracking, verification, and evaluation of innovation and research output is currently constrained by the lack of effective reporting systems and by disorganized methods of obtaining the necessary data. This paper presents InnovateHub, a secure, scalable, cloud-based web platform that provides a centralized system for analyzing, managing, and visualizing research and innovation across the education sector. InnovateHub offers a single point of access for collecting and processing all types of innovation and research information, and an interactive dashboard with analytical visualizations gives users easy access to relevant information. A role- and permission-based access control mechanism preserves data privacy and accountability for Administrators, Faculty, and Students. InnovateHub also supports Multi-Factor Authentication (MFA): JSON Web Tokens (JWT) provide layered verification of user identity, One-Time Passcodes (OTP) are confirmed through email, cryptographic hashing secures stored documents, and biometric face-based verification (facial recognition) authenticates users during critical submission phases. Automated certificate generation and contribution recognition mechanisms provide additional visibility into, and motivation for, users' contributions to the platform.
The system is built on the MERN stack (MongoDB, Express, React, Node.js) and hosted on AWS: Amazon EC2 instances host both the back-end and front-end services, while Amazon S3 provides secure, scalable storage for research documents and generated certificates. Experimental deployment indicates reliable operation, high availability, and secure handling of data during real-time use. InnovateHub thus provides real-time analytics, secure verification, and cloud scalability for institutional research governance, establishing a data-driven platform for continuous innovation and growth.
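As a sketch of the authentication layers the abstract describes (JWT-based MFA, emailed OTPs, and cryptographic hashing of stored documents), the fragment below uses only the Python standard library; the secret key, token layout, and function names are illustrative assumptions, not InnovateHub's actual implementation.

```python
import base64, hashlib, hmac, json, secrets, time

SECRET = b"demo-secret"  # hypothetical signing key

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id: str, role: str, ttl: int = 300) -> str:
    # JWT-style HS256 token: header.payload.signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": user_id, "role": role,
                                 "exp": int(time.time()) + ttl}).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{payload}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> bool:
    # recompute the signature over header.payload and compare
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

def make_otp() -> str:
    # 6-digit one-time passcode, e.g. delivered by email
    return f"{secrets.randbelow(10**6):06d}"

def hash_document(blob: bytes) -> str:
    # cryptographic hash stored alongside an uploaded document
    return hashlib.sha256(blob).hexdigest()
```

In a deployed system the JWT would carry the role claim consumed by the role- and permission-based access control layer, and the stored hash lets the platform detect tampering with archived research documents.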
Authors - Pranav Rao, Pranav S Acharya, Rishika Nayana Naarayan, Shreya M Hegde, Pavan A C Abstract - The rapid expansion of cloud computing, Internet of Things (IoT), 5G networks, and distributed enterprise infrastructures has significantly increased the complexity and attack surface of modern networks. Traditional network security mechanisms, primarily based on static rules and signature-based detection, are increasingly ineffective against advanced persistent threats (APTs), zero-day exploits, polymorphic malware, and encrypted attack channels. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as transformative technologies capable of enabling adaptive, predictive, and autonomous cybersecurity systems. This paper presents a comprehensive technical framework for AI-driven network security. We propose a hybrid architecture integrating supervised classification, unsupervised anomaly detection, and deep learning-based behavioral modeling. Mathematical formulations for intrusion detection, anomaly detection, and adversarial robustness are provided. The framework is evaluated using benchmark intrusion detection datasets, and performance is analyzed using standard metrics including accuracy, precision, recall, F1-score, and ROC-AUC. Results demonstrate that AI-driven models significantly outperform traditional signature-based approaches in detecting zero-day and evasive attacks. The paper concludes by discussing adversarial machine learning risks and future directions toward autonomous and self-healing network security ecosystems.
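The hybrid fusion this abstract outlines, combining a supervised classifier's attack probability with an unsupervised anomaly score, can be sketched as follows; the squashing function, fusion weights, and threshold are illustrative assumptions rather than the paper's actual formulation.

```python
import math

def anomaly_score(x: float, mean: float, std: float) -> float:
    # unsupervised component: z-score of a flow feature squashed into [0, 1)
    z = abs(x - mean) / std
    return 1 - math.exp(-z)

def fuse(p_supervised: float, a_unsupervised: float, w: float = 0.6) -> float:
    # convex combination of the supervised probability and the anomaly score
    return w * p_supervised + (1 - w) * a_unsupervised

def label(p: float, a: float, threshold: float = 0.5) -> str:
    # final verdict on a network flow
    return "attack" if fuse(p, a) >= threshold else "benign"
```

The appeal of this shape is that the unsupervised term can still flag a zero-day flow the supervised model has never seen, which matches the paper's claimed advantage over signature-based detection.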
Authors - Rosa Cristina Pesantez, Estevan Gomez-Torres, Cesar Adrian Guayasamin Abstract - The widespread adoption of cloud computing has transformed modern IT practice by improving scalability, flexibility, and cost efficiency. At the same time, it has driven up energy consumption, and with it carbon emissions, through overuse, overprovisioning, unused capacity, and inefficient data center management. Data centers are now a significant contributor to global greenhouse gas (GHG) emissions, so sustainable cloud operations are essential to addressing this challenge. GreenOps, or green operations, describes cloud deployment and operational practices that explicitly account for environmental impact; it encompasses energy-efficient infrastructure design, optimized resource usage, virtualization, and the integration of renewable energy sources. This survey presents a summary of green cloud computing, including current trends, challenges, energy-aware scheduling algorithms, and optimization techniques for achieving energy-efficient cloud deployment.
Authors - Govind Sambare, Sarika Deokate, Saurabh Dhakite, Sahil Ambokar, Gargi Barve Abstract - Static perimeter-based security architectures are now ineffective in the current threat scenario. Attackers' ability to obtain legitimate credentials and the presence of zero-day exploits often cause real-time breaches of the network perimeter. A particular concern is the real-time monitoring of these systems. Today, security monitoring is performed in a segregated manner: network analysts analyze time-stamped network logs and identity analysts analyze time-stamped login attempts, without real-time cross-referencing between the two domains. The proposed solution is a fusion platform capable of ingesting raw network transport data alongside real-time human-element monitoring data. This is achieved by integrating two different threat detection mechanisms through a FastAPI backend. The first is the Network Threat Detector (NTD), implemented in Python using the Scapy library to parse deep packet data in real time for flow analysis. The second is a JavaScript tracker that monitors digital behavioral indicators and computes real-time metrics such as mouse velocities, accelerations, kinematic jerk, and typing speeds. Real-time monitoring is achieved through a machine learning framework with three modules: inferring user intent with the Random Forest algorithm, detecting anomalous statistical patterns with the Isolation Forest algorithm, and detecting malicious plaintext syntax with Logistic Regression. Tested in a lab scenario, the system classified user session states into four categories (Engaged, Confused, Frustrated, and Suspicious) with accuracy exceeding 95%. These digital behavioral indicators are fed into the Network Threat Detector (NTD), allowing the computation of a real-time risk score.
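The behavioral metrics this tracker computes (mouse velocity, acceleration, and kinematic jerk) reduce to successive finite differences of sampled cursor positions; a minimal Python sketch, assuming uniformly sampled (x, y) coordinates:

```python
def kinematics(positions, dt=0.01):
    """Derive velocity, acceleration, and jerk magnitudes from sampled
    (x, y) cursor positions, as a behavioral tracker of this kind would."""
    def diffs(seq):
        # first-order finite difference of a sequence of 2-D vectors
        return [((b[0] - a[0]) / dt, (b[1] - a[1]) / dt)
                for a, b in zip(seq, seq[1:])]

    vel = diffs(positions)   # first derivative: velocity
    acc = diffs(vel)         # second derivative: acceleration
    jerk = diffs(acc)        # third derivative: kinematic jerk

    mag = lambda vs: [(vx * vx + vy * vy) ** 0.5 for vx, vy in vs]
    return mag(vel), mag(acc), mag(jerk)
```

Feature vectors built from these magnitudes (means, maxima, variances per time window) are the kind of input the Random Forest and Isolation Forest modules described above would consume.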
Authors - Duc Thinh Nguyen, Diem Huyen Nguyen Ngoc, Khoa Tran Thi-Minh Abstract - In the present day, presentations and computer-based interaction play a crucial role in many domains, particularly education and business. Traditionally, users rely on physical devices such as mice, keyboards, or laser pointers. Although these devices meet basic requirements, they have limitations in mobility, continuity, and dependence on battery life. To address these limitations, hand gesture-based presentation control systems have emerged as a promising solution due to their intuitive, natural, and engaging interaction style. This paper proposes a touchless system that enables users to control common desktop operations as well as presentations in a natural manner using hand gestures captured via a standard webcam. The proposed system leverages OpenCV for real-time video acquisition and preprocessing, while the MediaPipe framework is employed for hand tracking and landmark extraction. In our experiments, the system processes in real time with an accuracy of approximately 92%. As a result, users can seamlessly control slides, use virtual mouse operations, annotate presentation content, and engage with the audience in a more interactive and natural way without physical contact.
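The landmark-to-command step of such a system can be illustrated with a small heuristic over MediaPipe's 21 hand landmarks (fingertip indices 8/12/16/20, PIP joint indices 6/10/14/18); the gesture-to-action mapping below is a hypothetical example, not the paper's actual gesture set.

```python
# MediaPipe hand landmarks: 21 (x, y) points in image coordinates,
# where y grows downward. Indices below follow the MediaPipe layout.
TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIPS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def fingers_up(lm):
    """A finger counts as raised when its tip sits above
    (smaller y than) its PIP joint."""
    return {name: lm[TIPS[name]][1] < lm[PIPS[name]][1] for name in TIPS}

def gesture(lm):
    """Map a landmark frame to a presentation command
    (hypothetical mapping)."""
    up = fingers_up(lm)
    if up["index"] and not any(up[f] for f in ("middle", "ring", "pinky")):
        return "next_slide"
    if up["index"] and up["middle"] and not (up["ring"] or up["pinky"]):
        return "previous_slide"
    return "idle"
```

In the full pipeline, OpenCV supplies the webcam frames and MediaPipe supplies the `lm` array per frame; a rule or classifier layer like this then drives slide control and virtual mouse actions.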
Authors - Deepali Lokare, Pankaj Chandre, Prashant Dhotre Abstract - The rapid expansion of digital services has significantly increased the collection and processing of personal data through online platforms such as e-commerce systems, social media applications, and digital payment services. To regulate the use of personal information, governments worldwide have introduced data protection regulations such as the General Data Protection Regulation (GDPR), the Digital Personal Data Protection Act (DPDPA), and the California Consumer Privacy Act (CCPA). Organizations publish privacy policies to inform users about their data practices; however, these policies are often lengthy, complex, and difficult for users to understand. Consequently, users frequently accept privacy policies without fully reviewing how their personal data is collected, processed, and shared. Recent research has explored automated approaches for privacy policy analysis using artificial intelligence techniques, including machine learning, natural language processing, and large language models. Retrieval-Augmented Generation (RAG) has further enhanced compliance evaluation by linking policy statements with relevant regulatory clauses. Despite these advancements, challenges remain, such as the lack of standardised datasets, limited explainability of AI decisions, dependence on prompt design, and insufficient validation with regulatory experts. This paper discusses future research directions in AI-driven privacy policy compliance analysis and highlights emerging opportunities for improving regulatory compliance assessment, user privacy protection, and transparent privacy governance in digital ecosystems.
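The retrieval step of the RAG approach this abstract surveys, linking a policy statement to candidate regulatory clauses, can be illustrated with a bag-of-words stand-in; production systems use dense embeddings, and the function below is only a hypothetical sketch of the ranking idea.

```python
def retrieve(policy_statement, clauses, k=1):
    """Rank regulatory clauses by token overlap with a policy statement:
    a bag-of-words stand-in for the retrieval step of a RAG pipeline."""
    query = set(policy_statement.lower().split())
    scored = sorted(
        clauses,
        key=lambda c: len(query & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

The top-ranked clauses would then be passed, together with the policy statement, to a language model that judges compliance, which is where the explainability and prompt-design challenges noted above arise.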
Authors - Samiksha M, Sharanya G S, Shrina Anahosur, Surabhi K C, Surabhi Narayan Abstract - Multi-angle image synthesis is highly important for the generation of 3D scenes, but current methods are either expensive in computational cost or lack photorealism in their outputs. We propose a novel sketch- and text-based multiview image generation approach that solves these problems by making efficient use of multimodal diffusion models. Our pipeline utilises DreamShaper v8 to convert the input sketch and text into a photorealistic 2D image, then passes this 2D image to a fine-tuned Zero123plus model for the final generation of consistent multiview images, showing a 43.69% improvement in overall perceptual quality compared to baseline sketch-to-multiview models. Moreover, the pipeline scales flexibly, generating anywhere from 6 to 64 consistent multiview images according to the requirements of downstream tasks. We demonstrate the success of our pipeline through extensive experiments using voxel-based grid approaches and Neural Radiance Fields (NeRF). Our pipeline greatly reduces computational costs while maintaining photorealism in the outputs, confirming the potential of sketch- and text-based multimodal conditioning as an intuitive and efficient paradigm for controlled 3D content generation.
Authors - Balasubramanian M, Arasu Prabhu V S, Nalini Subramanian Abstract - Privilege escalation is a major issue in securing Linux systems. When a user gains unauthorized root access, they can access all system resources and manipulate them at will. Historically, Linux has relied on static access control policies and user-space monitoring tools to secure system access. However, these methods provide little insight into how the kernel modifies a user's credentials when permissions change. In this paper we propose a kernel-level solution to detect and prevent unauthorized privilege escalations. Detection and prevention occur in real time via a credential transition monitoring mechanism within the kernel layer, which blocks the elevation of privileges by illegal means. To implement this, a Linux Kernel Module (LKM) was created that uses kprobes to intercept calls to the commit_creds() function, which updates a process's credentials in the kernel. To evaluate whether a requested privilege escalation is legitimate or malicious, the LKM contains a policy-based evaluation mechanism that examines each request to modify a process's credentials. We tested the proposed solution in a controlled environment: a Virtual Machine (VM) running the Ubuntu operating system. We ran two types of tests: legitimate administrative operations using the "sudo" utility, and simulated privilege escalation attacks based on SetUID vulnerabilities. Our results show that the system effectively detected and blocked malicious privilege escalations while adding minimal overhead to normal system operation.
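The module itself runs as C code inside the kernel, but the shape of the policy check applied at each intercepted commit_creds() call can be sketched in userspace; the whitelist and decision rule below are illustrative assumptions, not the paper's actual policy.

```python
# Hypothetical whitelist of binaries allowed to raise privileges to root
ALLOWED_ESCALATORS = {"sudo", "su", "passwd"}

def evaluate_transition(comm, old_uid, new_uid, via_setuid_binary):
    """Policy check mirroring what a credential-transition monitor could
    apply when commit_creds() is intercepted: deny a uid change to root
    unless it originates from a whitelisted setuid binary."""
    if new_uid != 0 or old_uid == 0:
        return "allow"  # not an escalation to root: nothing to police
    if comm in ALLOWED_ESCALATORS and via_setuid_binary:
        return "allow"  # legitimate administrative path, e.g. sudo
    return "block"      # unexpected root transition: treat as escalation
```

In the actual LKM the same decision would be made inside the kprobe pre-handler, using the process name and the old/new credential structures passed to commit_creds().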
Authors - Noel Milliones, Vicente Pitogo, Mark Phil Pacot Abstract - The sensitivity of healthcare data, together with the growing use of intelligent health-related devices, makes it very difficult to ensure patient privacy while still carrying out precise analysis. The centralized methodology of current machine learning models requires raw patient data from different healthcare institutions and health-related devices to be sent over the network to a central computer system. Because of the privacy and network traffic issues in this methodology, we propose the development of a privacy-preserving health analytics platform. In the proposed methodology, every healthcare center and health-related device trains its own local machine learning model without transferring a single piece of data outside. The platform employs disease-specific models, including a CNN for heart disease (95 percent accuracy), a Gradient Boosting classifier for diabetes (93 percent accuracy), and a GridSearch-tuned SVM for liver disease (96 percent accuracy). Each edge device carries out data preprocessing in its local environment, along with model training and the transmission of secure updates, so that sensitive patient data never leaves that environment. The platform demonstrates that edge computing and collaborative learning can deliver scalable and secure healthcare analytics with high predictive performance.
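The collaborative-learning step, aggregating model updates without moving raw patient records, is in the spirit of federated averaging; a minimal sketch over plain parameter lists, where the aggregation rule is an assumption rather than the paper's exact protocol:

```python
def federated_average(updates, weights=None):
    """Aggregate local model parameter vectors from edge devices.
    Only the update vectors travel over the network; raw patient
    data stays on each device."""
    if weights is None:
        weights = [1.0] * len(updates)  # default: equal weighting
    total = sum(weights)
    dim = len(updates[0])
    # weighted mean, coordinate by coordinate
    return [sum(w * u[i] for w, u in zip(weights, updates)) / total
            for i in range(dim)]
```

Weighting by each center's local sample count (as classic FedAvg does) keeps large hospitals from being diluted by small clinics; secure-update transmission would wrap these vectors in encryption or secure aggregation.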
Authors - Etambuyu Akufuna, Mayumbo Nyirenda, Ruth Wahila, Marjorie Kabinga Makukula Abstract - As the primary cause of death worldwide, cardiovascular disease (CVD) necessitates accurate early detection methods. We present a machine learning approach, enabled by the Internet of Things, for predicting heart disease from clinical health data. An SVM classifier trained on the 14-attribute Cleveland Heart Disease dataset separates high-risk patients from healthy ones. The workflow includes preprocessing, feature standardisation, and hyperparameter optimisation with GridSearch cross-validation. The model outperforms a number of benchmark techniques in the literature with an accuracy of 93.33% and an AUC of 0.97. Comparative results confirm a scalable and interpretable basis for IoT-based clinical decision support.
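The workflow this abstract names (standardisation, SVM, GridSearch cross-validation) follows a standard scikit-learn pattern; the sketch below substitutes synthetic data for the Cleveland dataset, and the parameter grid is an illustrative assumption rather than the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 13 input features of the Cleveland data
X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

# Standardise features, then fit an SVM; tune hyperparameters by
# 5-fold cross-validated grid search
pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__kernel": ["rbf", "linear"]}, cv=5)
grid.fit(X_tr, y_tr)
accuracy = grid.score(X_te, y_te)
```

Putting the scaler inside the pipeline matters: it ensures standardisation statistics are re-fit on each cross-validation fold, avoiding leakage from the held-out fold into the tuning step.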