Authors - Md Mahmudul Hoque, Md Kawser Islam, Md. Mamunur Rahman Moon, Abdullah Rakib Akand, Md. Hadi Al-amin, H.M. Azrof Abstract - The automatic recognition of virus particles in transmission electron microscopy (TEM) images remains a demanding task, primarily owing to strong inter-class similarity, scale variability, and pronounced class imbalance. In this study, several convolutional neural networks and transformer-based architectures were comparatively evaluated for the classification of 22 virus categories using the TEM virus dataset. All models were trained under identical preprocessing and optimization conditions, and imbalance effects were mitigated through a weighted cross-entropy formulation. Performance was quantified using overall accuracy together with macro-averaged precision, recall, and F1-score. Among standalone models, the Swin Transformer achieved the highest accuracy (0.8831) and macro-F1 score (0.8444), followed by DeiT (accuracy 0.8669). Convolutional architectures exhibited comparatively lower balanced performance, with ResNet50 demonstrating substantial degradation (accuracy 0.5887) under imbalanced conditions. To exploit complementary representational properties, decision-level hybrid strategies were implemented. The performance-weighted hybrid attained an accuracy of 0.8831 and the highest macro-F1 score (0.8528), slightly surpassing the equal-weight hybrid configuration. These observations indicate that architectural heterogeneity contributes to improved inter-class balance without sacrificing overall predictive accuracy. Future work may explore scale-aware representations, feature-level fusion mechanisms, and expanded TEM datasets to further enhance robustness and generalization in virus identification tasks.
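A minimal sketch of the two ingredients named above, class-weighted cross-entropy and performance-weighted decision-level fusion, assuming PyTorch; the class counts, the DeiT macro-F1, and the batch values are illustrative assumptions rather than the paper's data:

```python
# Sketch (PyTorch): class-weighted cross-entropy against imbalance, then
# performance-weighted decision-level fusion of two models' outputs.
# Class counts and the DeiT macro-F1 below are assumed, not reported.
import torch
import torch.nn.functional as F

num_classes = 22
counts = torch.randint(20, 500, (num_classes,)).float()  # mock class sizes
weights = counts.sum() / (num_classes * counts)          # inverse frequency
criterion = torch.nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, num_classes)                 # mock training batch
labels = torch.randint(0, num_classes, (8,))
loss = criterion(logits, labels)

# Decision-level fusion: weight each model's softmax output by its
# validation macro-F1 (Swin's 0.8444 is reported; DeiT's is assumed).
probs_swin = F.softmax(torch.randn(8, num_classes), dim=1)
probs_deit = F.softmax(torch.randn(8, num_classes), dim=1)
f1 = torch.tensor([0.8444, 0.83])
w = f1 / f1.sum()
fused = w[0] * probs_swin + w[1] * probs_deit
predictions = fused.argmax(dim=1)
```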
Authors - SunilKumar Ketineni, Preethi Kandukuri, Hruthik Sreeramaneni, Vivek Bojjagani Abstract - Phishing continues to pose a serious threat to digital security by exploiting human vulnerabilities to steal confidential data through deceptive online interactions. Traditional detection methods often fall short in identifying advanced phishing strategies. This survey presents a comprehensive overview of phishing detection techniques, with a strong focus on modern, multi-layered machine learning and deep learning-based solutions. The proposed layered framework includes four key stages: data collection and preparation, model training, detection and prediction, and explainability. In the first layer, email, URL, and metadata are collected and preprocessed for feature extraction. The second layer involves model training using both machine learning classifiers, such as Random Forest, SVM, Naïve Bayes, and KNN, and deep learning architectures, such as CNN, RNN, and LSTM. These models feed into the third layer, where phishing is detected and classified. Finally, the fourth layer integrates Explainable AI (XAI) methods like LIME, SHAP, and Anchors to enhance model transparency and interpretability. This survey evaluates the effectiveness and limitations of each layer and highlights the need for explainable, scalable, and adaptive phishing detection systems.
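A minimal sketch of layers two through four of the surveyed framework, assuming scikit-learn and the shap package; the synthetic features, the toy labeling rule, and the model choice stand in for a real phishing dataset and pipeline:

```python
# Sketch of the surveyed framework's layers 2-4: train a classifier on
# extracted features (layer 2), predict (layer 3), explain with SHAP
# (layer 4). Features and labels are synthetic stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 6))                    # mock URL/email features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # toy rule: 1 = phishing

model = RandomForestClassifier(n_estimators=100).fit(X, y)   # layer 2
predictions = model.predict(X[:5])                           # layer 3

explainer = shap.TreeExplainer(model)                        # layer 4
shap_values = explainer.shap_values(X[:5])   # per-feature attributions
```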
Authors - K.Poorani, K Karan, R Seenivasan, V Ramkumar Abstract - Older email detection technologies have struggled to accurately identify malicious emails in the face of the latest techniques attackers use to compromise victims. While modern solutions perform well in detecting malicious emails, they are not completely foolproof. As a result, malicious emails can still reach a user’s mailbox, necessitating measures to reduce potential harm. This study suggests transforming the decision-making processes of recent algorithms into a white-box model, enabling transparency in decision-making through Explainable AI. This is achieved by having the proposed model compute confidence level scores for each email, which users can use to exercise caution if a malicious email slips into their inbox.
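A minimal sketch of the confidence-scoring idea, assuming scikit-learn; the classifier, features, and labeling rule are toy assumptions, not the proposed model:

```python
# Sketch: exposing a classifier's probability as a per-email confidence
# score, as the abstract proposes. Model, features, and labels are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((300, 4))                # mock email features
y = (X[:, 1] > 0.5).astype(int)         # toy rule: 1 = malicious
clf = LogisticRegression().fit(X, y)

def email_confidence(features):
    """Model's confidence, in [0, 1], that the email is malicious."""
    return float(clf.predict_proba(features.reshape(1, -1))[0, 1])

print(f"caution level: {email_confidence(X[0]):.2f}")
```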
Authors - Nazura Javed, Rida Javed Kutty, Muralidhara B L Abstract - The increasing availability of online information has made it easier to access diverse sources, but it has also introduced challenges in verifying the reliability and consistency of content. Conflicting statements across different sources often contribute to misinformation and make it difficult to establish factual accuracy. This study focuses on the problem of cross-document contradiction and inconsistency detection as a step toward improving fact verification in textual data. A two-stage pipeline is proposed in which semantically related sentence pairs are first retrieved from documents discussing the same event and then analyzed using Natural Language Inference (NLI) techniques to determine whether they express contradictory information. In contrast to conventional sentence-level contradiction detection, the proposed approach emphasizes document-level comparison to identify inconsistencies across independent sources. Two pre-trained transformer models, DistilBERT (DistilBERT-base-uncased) and RoBERTa (RoBERTa-base), are used for contradiction classification. The approach is evaluated on the SNLI dataset and the PHEME Rumor Dataset, which are widely used benchmarks for NLI and misinformation research. Experimental results show accuracies of 94.50% (F1-score 94.50%) on SNLI and 92.39% (F1-score 92.31%) on PHEME, indicating that the proposed framework is effective in identifying contradictions and supporting cross-document fact validation.
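A minimal sketch of the two-stage pipeline, assuming the sentence-transformers and transformers packages; the off-the-shelf MiniLM retriever and roberta-large-mnli classifier stand in for the paper's fine-tuned DistilBERT/RoBERTa models:

```python
# Sketch of the two-stage pipeline: (1) retrieve the most semantically
# similar cross-document sentence pair, (2) classify it with NLI.
# Off-the-shelf models are stand-ins for the paper's fine-tuned ones.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

doc_a = ["The fire started at around 9 pm.", "Two people were injured."]
doc_b = ["Officials said the fire began in the early morning."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")            # stage 1
sims = util.cos_sim(embedder.encode(doc_a), embedder.encode(doc_b))
i, j = divmod(int(sims.argmax()), sims.shape[1])

nli = pipeline("text-classification", model="roberta-large-mnli")  # stage 2
result = nli({"text": doc_a[i], "text_pair": doc_b[j]})
print(result)   # label CONTRADICTION flags a cross-document inconsistency
```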
Authors - B.Purnachandra Rao, Gaurang Jinka Abstract - Distributed systems rely on data replication across multiple nodes to ensure high availability, fault tolerance, and scalability. While replication improves system reliability, it also introduces temporary inconsistencies between primary and replica nodes during data propagation. This phenomenon, commonly referred to as consistency drift, occurs when distributed nodes maintain slightly different states before synchronization is completed. As distributed infrastructures grow in scale and complexity, consistency drift becomes increasingly significant due to network latency, workload variability, and communication overhead between nodes. Traditional synchronization mechanisms typically rely on static replication intervals or fixed update propagation strategies that do not adapt effectively to dynamic system conditions. Such approaches may allow drift to accumulate before synchronization occurs, resulting in delayed consistency and inefficient resource utilization. Managing consistency drift therefore becomes a critical challenge in distributed computing environments where maintaining accurate and synchronized data states is essential. This research addresses the problem of consistency drift in distributed systems by examining the factors that contribute to state divergence among nodes and exploring mechanisms for dynamic drift management. The proposed framework focuses on monitoring system behavior, including workload intensity, network latency, and node communication patterns, to regulate synchronization behavior more effectively. By enabling adaptive synchronization strategies that respond to real-time system conditions, the framework aims to reduce drift accumulation and improve overall data consistency across distributed clusters. Effective management of consistency drift ultimately enhances system reliability, operational stability, and performance in modern distributed computing platforms operating under dynamic workloads.
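A minimal sketch of the adaptive idea described above: a controller that shortens the synchronization interval as observed drift, latency, and workload rise. The pressure formula, thresholds, and mock measurements are illustrative assumptions:

```python
# Sketch: a controller that adapts the replication-sync interval to
# observed drift, latency, and workload. Coefficients, thresholds, and
# the mock measurements are illustrative assumptions.
import random

def next_sync_interval(interval, drift, latency_ms, writes_per_s,
                       lo=0.5, hi=30.0):
    """Return the adjusted synchronization interval in seconds."""
    pressure = 0.5 * drift + latency_ms / 200 + writes_per_s / 2000
    if pressure > 1.0:        # drift accumulating: synchronize sooner
        interval *= 0.5
    elif pressure < 0.2:      # quiet period: back off to save overhead
        interval *= 1.5
    return max(lo, min(hi, interval))

interval = 5.0
for _ in range(3):                         # mock monitoring loop
    drift = random.random()                # fraction of divergent keys
    latency = random.uniform(5, 200)       # node-to-node latency, ms
    writes = random.uniform(0, 2000)       # workload intensity, writes/s
    interval = next_sync_interval(interval, drift, latency, writes)
    print(f"next synchronization in {interval:.1f}s")
```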
Authors - Suganya Moorthy, Jayakumar Kaliappan Abstract - Internet of Things (IoT) networks have grown rapidly, substantially enlarging the attack surface available to cyber attacks. At the same time, severely limited computational resources, heterogeneous architectures, and incomplete or decentralized communications make IoT environments highly susceptible to intrusion attacks, including Distributed Denial of Service (DDoS), spoofing, botnets, and data exfiltration. Traditional signature-based intrusion detection systems (IDS) are ineffective against zero-day and dynamic threats. This paper presents a new machine learning-based intrusion detection system developed specifically for IoT networks. The proposed design combines feature selection, feature extraction, and ensemble classification to increase detection accuracy while reducing computational cost. Experimental evaluations on benchmark IoT intrusion datasets demonstrate improvements over traditional IDS frameworks in detection accuracy, false positive rate, and scalability. Practical constraints are also addressed, including the computational overhead of resource-constrained IoT devices, dataset imbalance, and model interpretability. Future research directions include lightweight federated learning, the incorporation of explainable AI, and real-time adaptive threat intelligence to strengthen the resilience of IoT security.
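A minimal sketch of the feature-selection-plus-ensemble pattern the abstract describes, assuming scikit-learn; the synthetic imbalanced data and the three base learners stand in for a benchmark IoT intrusion dataset and the paper's actual models:

```python
# Sketch: feature selection followed by an ensemble classifier, the
# pattern the abstract describes. Synthetic imbalanced data stands in
# for a real IoT intrusion benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)  # imbalanced
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
ensemble = VotingClassifier(
    [("rf", RandomForestClassifier(class_weight="balanced")),
     ("lr", LogisticRegression(max_iter=1000, class_weight="balanced")),
     ("dt", DecisionTreeClassifier(class_weight="balanced"))],
    voting="soft",
)
ensemble.fit(selector.transform(X_tr), y_tr)
print("test accuracy:", ensemble.score(selector.transform(X_te), y_te))
```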
Authors - Konstantina Karathanasopoulou, Ioannis Vondikakis, Dimitris Georgiadis, George Dimitrakopoulos Abstract - Digital signatures are fundamental public-key cryptographic primitives used for message authentication and integrity. A message’s recipient must be able to validate that it comes from the reported sender and has not been altered by anyone else. Pairing-based cryptography provides elegant and efficient mechanisms for constructing compact digital signature schemes. Inspired by isogeny structures on elliptic curves, we present a pairing-based digital signature system in this study. Our construction targets classical security settings and is analyzed under standard computational hardness assumptions related to bilinear groups and isogeny-based mappings. We demonstrate that the proposed approach attains “existential unforgeability under adaptive chosen-message attacks (UF-CMA)” within the random oracle model and address the construction’s soundness and security. Moreover, the scheme offers compact public key and signature sizes, making it suitable for lightweight cryptographic applications.
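For orientation, a worked sketch of a standard pairing-based (BLS-style) signature, not the paper's isogeny-inspired construction itself:

```latex
% BLS-style pairing signature (illustrative; not the paper's scheme).
% Setup: bilinear map e : G_1 x G_2 -> G_T, generator g_2 of G_2,
% hash H : {0,1}^* -> G_1, secret key x, public key pk = g_2^x.
\[
  \sigma = H(m)^{x},
  \qquad \text{verify: } \;
  e(\sigma, g_2) \stackrel{?}{=} e\bigl(H(m),\, pk\bigr).
\]
% Correctness: e(H(m)^x, g_2) = e(H(m), g_2)^x = e(H(m), g_2^x) = e(H(m), pk).
```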
Authors - Nirmaladevi J, Kanishka R, Kirthiga B, Lathikasri T R, Ranjani Shree R S Abstract - The widespread adoption of cloud computing has transformed modern IT practices by improving scalability, flexibility, and cost efficiency. At the same time, it has driven up energy consumption, and with it carbon emissions, owing to overconsumption, overprovisioning, unused capacity, and inefficient data center management. Data centers are now a significant contributor to global greenhouse gas (GHG) emissions; sustainable cloud operations are therefore essential in addressing this challenge. GreenOps, or green operations, refers to cloud deployment and operational practices that explicitly account for environmental impact; it encompasses energy-efficient infrastructure design, optimized resource usage, virtualization, and the integration of renewable energy sources. This survey presents a summary of green cloud computing, including current trends, challenges, energy-aware scheduling algorithms, and optimization techniques for achieving energy-efficient cloud deployment.
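A minimal sketch of one technique such surveys cover, energy-aware placement that assigns each task to the host with the smallest estimated power increase; the linear power model and host parameters are illustrative assumptions:

```python
# Sketch: energy-aware scheduling heuristic. Each task is placed on the
# host whose estimated power draw rises the least. The power model and
# host capacities are illustrative assumptions.
def power(util, p_idle=100.0, p_max=250.0):
    """Linear host power model (watts) as a function of CPU utilization."""
    return p_idle + (p_max - p_idle) * util

hosts = [{"cap": 32, "used": 8}, {"cap": 16, "used": 2}]   # CPU cores

def place(task_cores):
    def delta(h):
        before = power(h["used"] / h["cap"])
        after = power((h["used"] + task_cores) / h["cap"])
        return after - before
    best = min((h for h in hosts if h["used"] + task_cores <= h["cap"]),
               key=delta)
    best["used"] += task_cores
    return hosts.index(best)

for cores in [4, 8, 2]:
    print("task placed on host", place(cores))
```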
Authors - Pranaav Contractor, Sanika Ajgaonkar, Nishanth Ravichandran, Satishkumar Chavan Abstract - This paper examines the interplay between demographic factors and a newly developed behavioral construct, modern investment curiosity, and how these elements collectively shape financial behaviors among higher education faculty. Drawing from survey responses of 145 educators situated in Kollam District, Kerala, India, the study applies descriptive statistical techniques alongside chi-square tests to evaluate four research hypotheses. The data reveals a predominantly risk-averse financial posture among participants, with post-retirement security ranking as the foremost financial goal and bank deposits serving as the dominant investment channel. Statistical testing shows no meaningful relationships between saving patterns and either household size or disability status. A statistically significant positive association emerges between investment curiosity and ownership of equity or mutual fund products (χ² = 8.40, p < 0.01). Additionally, marital status demonstrates a significant relationship with investment curiosity (χ² = 5.28, p < 0.05), where unmarried faculty report higher curiosity levels. These observations are consistent with established frameworks including the Life-Cycle Hypothesis and the Theory of Planned Behavior, positioning investment curiosity as a relevant psychological factor in financial decision-making. The paper offers practical suggestions for institutional programming and identifies avenues for subsequent scholarly inquiry.
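A minimal sketch of the chi-square test of independence used in the study, assuming scipy; the 2x2 contingency table is invented for illustration, since the abstract reports only the test statistics:

```python
# Sketch: chi-square test of independence, as reported in the study,
# here on an invented 2x2 table of investment curiosity vs. equity or
# mutual fund ownership (the abstract gives only the statistics).
from scipy.stats import chi2_contingency

#                owns equity/MF   does not
table = [[34, 28],    # higher curiosity
         [21, 62]]    # lower curiosity
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```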
Authors - B.Purnachandra Rao, Gaurang Jinka Abstract - Distributed systems rely on data replication to ensure availability, fault tolerance, and scalability across multiple nodes in modern cloud environments. Replication enables systems to maintain continuity even when individual nodes fail or experience network disruptions. However, replication often introduces synchronization delays between primary and replica nodes, known as replication delay. These delays can cause temporary data inconsistency, stale reads, and increased response latency, degrading application performance and user experience. As infrastructures scale to larger clusters, communication overhead, network latency, and workload variability further amplify replication delays, making efficient synchronization increasingly challenging. Traditional replication mechanisms typically rely on static synchronization intervals or sequential update propagation strategies. These approaches fail to adapt to dynamic network conditions and fluctuating workloads, resulting in inefficient data propagation and delayed consistency across nodes. In large-scale systems, such limitations may cause bottlenecks, reduced reliability, and inconsistent states during high workload periods or network congestion. Addressing replication delay is critical for maintaining reliability and consistency in distributed environments. Recent research emphasizes intelligent synchronization mechanisms capable of adapting to changing conditions. Adaptive synchronization strategies that monitor network latency, workload intensity, and node communication patterns offer improvements in replication efficiency. By enabling replication decisions that respond dynamically to system behavior, such approaches reduce synchronization delays and improve data consistency across clusters. Enhanced replication efficiency ultimately strengthens reliability, scalability, and operational performance in modern distributed computing platforms operating under variable workload conditions.
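A minimal sketch of lag-aware propagation in the spirit of this abstract: replicas whose measured delay exceeds a limit are synchronized immediately, while healthy replicas receive batched updates. The threshold and lag values are mock assumptions:

```python
# Sketch: lag-aware update propagation. Badly lagging replicas are
# flushed synchronously; healthy ones batch updates to cut overhead.
# All thresholds and measurements below are mock assumptions.
LAG_LIMIT = 0.5                         # acceptable replication delay, s

replicas = {"r1": {"lag": 0.8, "pending": []},
            "r2": {"lag": 0.1, "pending": []}}

def propagate(update):
    for name, r in replicas.items():
        if r["lag"] > LAG_LIMIT:        # lagging: flush immediately
            r["pending"].clear()        # (mock: apply update + backlog)
            r["lag"] = 0.0
        else:                           # healthy: batch for a later flush
            r["pending"].append(update)

propagate({"key": "x", "value": 42})
print({n: (r["lag"], len(r["pending"])) for n, r in replicas.items()})
```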