Authors - Pratham Vasa, Amishi Desai, Chahel Gupta, Avani Bhuva, Mohini Reddy Abstract - Content Delivery Networks (CDNs) play an essential role in accelerating content delivery by caching frequently requested data on edge servers distributed across geographical regions. Traditional CDNs optimize their caches using rule-based policies and machine learning approaches, where the learning is performed centrally using traffic logs collected by a central server. Although central learning is beneficial, it poses certain limitations, including data privacy risks and high communication cost, since it aggregates raw data in one place. This paper proposes a secure federated learning architecture for cache hit prediction in CDNs. The architecture is evaluated on a synthetic dataset of 130,548 records with temporal and network features, and is compared with the traditional central learning approach. The results reveal that the secure federated learning model achieves an accuracy of 70.15%, comparable to central learning, while reducing data privacy exposure by 30%.
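The abstract does not give implementation details of the aggregation step. As a minimal sketch, the weighted model averaging (FedAvg-style) that federated architectures of this kind typically use to combine edge-trained models without sharing raw traffic logs could look as follows; the client weight vectors and dataset sizes below are hypothetical, not the authors' data:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine model weights from edge clients,
    weighting each client by its local dataset size. Only weights
    travel to the aggregator; raw traffic logs stay on the edge."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Two hypothetical edge servers with different local dataset sizes.
clients = [[0.2, 0.8], [0.6, 0.4]]
sizes = [100, 300]
global_model = fed_avg(clients, sizes)
```

The larger client contributes proportionally more to the global model, which is the standard FedAvg weighting.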
Authors - Syed Shanika Zaida, Kamineni Leela Tapaswi, Kilari Dhana Malikarjuna Rao, Adarapu Sandeep, Amar Jukuntla Abstract - Removable USB storage devices are widely used in day-to-day computing, but they also introduce risks such as unauthorized data transfer and misuse of external media. Understanding how these devices are used on a system is important during forensic investigations, especially when analyzing potential data leakage incidents. On Windows systems, traces of USB activity are not stored in a single location; instead, they are distributed across registry entries, system logs, and file system records. Examining these sources individually often makes it difficult to form a clear picture of events. This paper introduces a forensic framework that brings together USB-related artifacts from multiple system components and analyzes them in a unified manner. The method gathers data from sources such as registry entries, Plug-and-Play logs, and file system structures, and then aligns them based on their timestamps. A Python-based implementation automates this process and relates device connection events to file operations. Experiments conducted on a Windows setup show that the framework can identify device usage and reconstruct the sequence of related activities with clarity. By combining evidence into a single timeline, the approach simplifies analysis and supports consistent interpretation of results.
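The core correlation step the abstract describes, merging timestamped artifacts from several Windows sources into one timeline, can be sketched as follows. The event data, source names, and device strings here are illustrative placeholders, not the authors' implementation:

```python
from datetime import datetime

def build_timeline(registry_events, pnp_events, fs_events):
    """Merge USB-related artifacts from multiple sources into a
    single chronologically ordered timeline of (time, source, event)."""
    merged = []
    for source, events in (("registry", registry_events),
                           ("pnp_log", pnp_events),
                           ("filesystem", fs_events)):
        for ts, desc in events:
            merged.append((datetime.fromisoformat(ts), source, desc))
    return sorted(merged, key=lambda e: e[0])

# Hypothetical artifacts recovered from each source.
registry = [("2024-03-01T10:02:11", "USBSTOR key created for flash drive")]
pnp = [("2024-03-01T10:02:09", "Device install event in setup log")]
fs = [("2024-03-01T10:05:40", "Document copied to removable volume")]

for ts, src, desc in build_timeline(registry, pnp, fs):
    print(ts.isoformat(), src, desc)
```

Sorting on the parsed timestamps places the Plug-and-Play install first, then the registry trace, then the later file operation, which is the kind of connection-to-copy sequence the framework aims to reconstruct.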
Authors - Sanchi Mahajan, Nandini Jain, Evangelin G, Jansi K R, Shivam Shivam Abstract - Efficient task scheduling in heterogeneous multi-cloud infrastructures remains an open problem due to scalability limitations, data privacy concerns, and latency sensitivity. Conventional centralized scheduling requires data aggregation, which raises critical privacy challenges and communication costs. The proposed work designs a privacy-preserving federated multi-cloud task scheduling framework for smart mobility applications to overcome these limitations. The framework employs a decentralized scheduler for each cloud region, together with a novel task abstraction approach that transforms real-time traffic data into task-scheduling form. By relying on federated learning-based aggregation, it eliminates the need to communicate raw traffic data and addresses scalability, routing, and multi-cloud coordination while ensuring data locality. The framework is evaluated in experiments against Random, Rule-Based, and Local-ML approaches on a Smart Mobility dataset. The results show considerable reductions in communication overhead and privacy leakage while preserving competitive execution latency and SLA compliance. The communication scalability results indicate that the strategy scales well as the number of cloud regions increases. The significance of this work lies in its ability to support federated, scalable, and privacy-aware job scheduling for smart traffic systems without central data sharing.
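The task abstraction step, converting raw traffic records into scheduling tasks so that only workload descriptors, not raw data, leave a region, might be sketched as below. The field names, scaling factor, and deadline rule are hypothetical assumptions for illustration:

```python
def abstract_tasks(traffic_records):
    """Map raw regional traffic records to scheduling tasks.
    Only workload-relevant fields survive the mapping, so raw
    traffic data never needs to leave its cloud region."""
    tasks = []
    for rec in traffic_records:
        tasks.append({
            "task_id": rec["sensor_id"],
            "region": rec["region"],
            # Hypothetical scaling from observed load to CPU demand.
            "cpu_demand": rec["vehicle_count"] * 0.01,
            # Hypothetical rule: congested segments get tight deadlines.
            "deadline_ms": 200 if rec["congestion"] > 0.7 else 1000,
        })
    return tasks

record = {"sensor_id": "s1", "region": "region-a",
          "vehicle_count": 120, "congestion": 0.9}
tasks = abstract_tasks([record])
```

Each regional scheduler can then operate on these task descriptors locally, matching the data-locality goal stated in the abstract.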
Authors - Thota Neha, Napa. Sai Gopi, R. Aarthi Abstract - The increasing realism of deepfake media has raised significant concerns regarding the authenticity of digital content. Most existing detection methods rely on audio–visual fusion, which often introduces additional complexity and may degrade performance when one modality is unavailable or unreliable. This work presents a dual-stream deep learning framework that processes audio and video independently, avoiding explicit fusion. The audio stream employs a CNN–BiLSTM model on log-Mel spectrograms to capture temporal and spectral artifacts, while the video stream uses EfficientNet-B0 with BiLSTM to model spatial inconsistencies and temporal variations in facial sequences. Experiments conducted on multiple benchmark datasets, including ASVspoof 2019, WaveFake, LJSpeech, FaceForensics++, and Celeb-DF (v2), demonstrate that the proposed approach achieves competitive detection performance. In addition, the framework maintains robustness under missing-modality conditions and offers improved interpretability compared to fusion-based methods. These results indicate that independent modality-specific learning provides a practical and effective alternative for deepfake detection in real-world scenarios.
Authors - Ankit Podder, Piyush Ranjan Das, Soham Acharya, Ayushmaan Singh, Soumitra Sasmal, Partho Mallick Abstract - Static perimeter-based security architectures are now ineffective in the current threat scenario. The ability of attackers to obtain legitimate credentials, combined with zero-day exploits, often causes real-time breaches of the network perimeter. An area of concern is the real-time monitoring of these systems. Currently, security monitoring is performed in a segregated manner: network analysts analyze time-stamped network logs while identity analysts analyze time-stamped login attempts, without real-time cross-referencing between these two domains. The proposed solution is a fusion platform capable of ingesting raw network transport data and real-time human-element monitoring data, achieved by integrating two different threat detection mechanisms through a FastAPI backend. The first threat detection system is the Network Threat Detector (NTD), implemented in Python using the Scapy library to parse deep packet data in real time for flow analysis. The second is a JavaScript tracker designed to monitor digital behavioral indicators and calculate real-time metrics such as mouse velocities, accelerations, kinematic jerk, and typing speeds. Real-time monitoring is achieved through a machine learning framework with three modules: inferring user intent using the Random Forest algorithm, detecting anomalous statistical patterns using the Isolation Forest algorithm, and detecting malicious plaintext syntax using Logistic Regression. The system has been tested in a lab scenario and is able to classify user session states into four categories, Engaged, Confused, Frustrated, and Suspicious, with accuracy exceeding 95%.
These digital behavioral indicators are fed into the Network Threat Detector (NTD), allowing the computation of a real-time risk score.
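The kinematic metrics mentioned above (velocity, acceleration, jerk) are successive time derivatives of cursor position, which can be approximated by finite differences over sampled positions. A minimal sketch, using hypothetical 10 Hz one-dimensional cursor samples rather than the authors' tracker data:

```python
def derivatives(positions, dt):
    """Finite-difference velocity, acceleration, and jerk from
    evenly sampled 1-D cursor positions (sampling interval dt)."""
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return vel, acc, jerk

# Hypothetical cursor positions in pixels, sampled every 0.1 s.
pos = [0.0, 5.0, 15.0, 30.0, 50.0]
vel, acc, jerk = derivatives(pos, dt=0.1)
```

For this smoothly accelerating trajectory the jerk is near zero; erratic, high-jerk movement is the kind of signal a behavioral tracker could feed into a risk score.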
Authors - Lavu Uha Saranya, T.V.S.S. Reddy, I.V.M.K. Sarma, Dipesh Kumar Kushwaha, T.N.V.D. Sai Krishna Abstract - Digital forensic investigations have typically focused on identifying private browsing at the application layer using artifacts from memory and disk, even though modern browsers rely extensively on the operating system for fundamental capabilities such as rendering, input processing, and networking. This paper extends the forensic scope by demonstrating that data related to private sessions remains in shared subsystems of the OS in volatile memory. In particular, it examines three primary components of the Linux desktop environment: the display compositor (GNOME Shell), the input pipeline (the IBus daemon), and the network resolver (systemd-resolved). Using physical memory acquisitions via LiME on an Ubuntu 25.04 system, the paper monitors the migration of high-entropy inputs across these subsystems. The results indicate that critical session data, including window metadata associated with Wayland sessions, plaintext keystroke data received through D-Bus, and fallback queries made via DNS-over-HTTPS, remains in OS-managed memory for extended periods after the conclusion of the private browsing session. The authors provide a reproducible framework for OS-level memory analysis and demonstrate that browser-based privacy controls are structurally insufficient to fully sanitize volatile memory.
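Tracking "high-entropy inputs" across memory typically rests on Shannon entropy computed over fixed-size windows of a dump; a window full of encrypted or compressed session data scores near 8 bits per byte, while zeroed or text-heavy memory scores much lower. A minimal sketch of such a scan (window size and threshold are illustrative choices, not the paper's parameters):

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def high_entropy_regions(dump: bytes, window=4096, threshold=7.0):
    """Offsets of fixed-size windows whose entropy exceeds the threshold,
    i.e. candidate regions holding encrypted or random session data."""
    return [off for off in range(0, len(dump) - window + 1, window)
            if shannon_entropy(dump[off:off + window]) > threshold]
```

Applied to a LiME acquisition, offsets flagged this way would be candidates for closer inspection in the compositor, input, and resolver heaps.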
Authors - Venkata Saikumar Thalupuru, Shubham Kumar, Santhoshini Pranathi Singaraju, Vishal Gupta Abstract - As online banking and digital payments have grown rapidly, financial institutions have become increasingly exposed to credit card fraud, which is a major challenge for traditional banks and other financial institutions. Severe class imbalance in transaction datasets is one of the greatest challenges in fraud analytics: fraudulent activity makes up only a tiny fraction of total transactions. Traditional machine learning models are often quite accurate overall but poor at detecting these rare frauds. To overcome this limitation, this study proposes a cost-aware hybrid framework comprising an Attention-based Long Short-Term Memory (Attention-LSTM) model and ensemble-based machine learning. The method preprocesses the data, balances classes using SMOTE, selects features based on mutual information, and leverages a soft-voting ensemble of Logistic Regression, Random Forest, and XGBoost models. Cost-aware learning is coupled with decision-threshold tuning to minimize false negative predictions. Additionally, SHAP-based explainability is layered on top for enhanced transparency and interpretability of the model. The experimental results show 99.3% accuracy, 0.905 precision, 0.892 recall, 0.898 F1-score, and 0.98 ROC-AUC, indicating that the framework is effective in detecting genuine financial fraud.
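Two mechanisms in this abstract, soft voting and decision-threshold tuning, can be sketched together: soft voting averages the member models' predicted fraud probabilities, and lowering the decision threshold below 0.5 trades precision for fewer false negatives. The probabilities and threshold below are hypothetical, not the study's fitted models:

```python
def soft_vote(prob_lists, weights=None):
    """Soft voting: weighted average of per-model fraud probabilities,
    one probability list per model, aligned by transaction."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n = len(prob_lists[0])
    return [sum(w * probs[i] for w, probs in zip(weights, prob_lists))
            for i in range(n)]

# Hypothetical fraud probabilities from LR, RF, and XGBoost
# for two transactions.
lr  = [0.10, 0.80]
rf  = [0.20, 0.90]
xgb = [0.15, 0.70]
ens = soft_vote([lr, rf, xgb])

# Cost-aware thresholding: flag fraud below the default 0.5 cutoff
# to reduce costly false negatives.
flags = [p >= 0.3 for p in ens]
```

In a cost-aware setting the threshold would be chosen to minimize the expected misclassification cost rather than fixed at an arbitrary value.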
Authors - Ismail Suleiman, Dinesh Reddy Vemula, Abhaya Kumar Pradhan Abstract - This paper presents the evaluation and demonstration phases of a Design Science Research Methodology (DSRM) study that produced the Organisational Security Culture Framework (OSCF) for Namibian Public Enterprises. An empirical needs assessment established a three-tier security culture maturity deficit: a 40% policy awareness gap; a widespread misconception among non-IT staff that cybersecurity is solely an IT responsibility; and a training gap in which 25% of staff had received no formal security training in the preceding year. The OSCF comprises five interrelated components: Risk Assessment, Security Policy and Enforcement, Security Compliance, Training and Awareness, and Ethical Conduct. Demonstration was executed across four staged phases: baseline assessment, component testing, pilot integration, and full-scale deployment. Evaluation employed a dual approach: expert panel review against eight criteria and Key Performance Indicator (KPI) measurement across five strategic objectives. Results confirm that the OSCF closed the 40% policy awareness gap, achieving 95% staff awareness post-implementation, and significantly reduced phishing susceptibility. Seven evidence-based refinements evolved the OSCF from a static policy model into a continuous security culture maturity loop. The framework's modular, tiered architecture supports long-term sustainability of behavioural change and scalable deployment across organisations of varying cybersecurity maturity, including federated multi-institutional environments.