Authors - Kunam Subba Reddy, Mangavarapu Jahnavi, Kotte Hima Teja, Shaik Kathamma Basheerun, Nama Adarsh Abstract - Accurate estimation of battery state of charge (SOC), state of health (SOH), and state of power (SOP) is vital to the safe and efficient operation of photovoltaic (PV)-battery energy storage systems, particularly under highly dynamic profiles in which both charging and discharging take place. This paper presents a comparative analysis of classical, model-based, and machine-learning-based estimation techniques on a common 24-h highly dynamic PV + load current profile that simulates realistic residential microgrid operation. The test profile features strong bidirectional current swings, partial charging, long constant-power discharge, and temperature variation. The following estimators are implemented and benchmarked: open-circuit-voltage (OCV)-based SOC estimation, the linear Kalman filter (KF), the extended Kalman filter (EKF), the unscented Kalman filter (UKF), ridge regression, and a support vector machine (SVM). Each method is evaluated against a high-fidelity electro-thermal battery model that captures capacity degradation and resistance growth with age and temperature. SOC tracking accuracy, SOH estimation error, and SOP tracking capability are discussed. The OCV-based method is found to fail under dynamic loading, yielding a nearly constant SOC estimate and a highly conservative SOP. The KF and EKF track SOC far more accurately but show larger deviations at high SOC and near the bottom of discharge. The ridge-regression and SVM estimators achieve high SOC accuracy over the entire profile, while the UKF offers the best overall trade-off between SOC accuracy, SOH tracking, and SOP estimation robustness when resistance varies with temperature and age.
The paper highlights the importance of considering SOC, SOH, and SOP jointly, and shows that advanced filters and ML models can significantly improve the performance of PV-battery applications under demanding operating conditions.
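To make the Kalman-filter approach above concrete, the following minimal sketch shows a single scalar KF step for SOC: coulomb counting as the prediction model and a linearized OCV curve for the measurement update. All parameter values (capacity, resistance, the linearized OCV coefficients, noise variances) are hypothetical illustrations, not the paper's.

```python
# Minimal scalar Kalman filter step for SOC estimation (illustrative sketch).
# Prediction: coulomb counting, SOC_k = SOC_{k-1} - I*dt/Q.
# Measurement: terminal voltage V = OCV(SOC) - I*R, with OCV linearized
# as a + b*SOC. All parameter values below are assumptions.

def kf_soc_step(soc, P, current_a, volt_meas, dt=1.0,
                Q_as=3600.0 * 2.5,   # capacity in ampere-seconds (2.5 Ah, assumed)
                R=0.05,              # internal resistance in ohms (assumed)
                a=3.0, b=1.2,        # linearized OCV(SOC) = a + b*SOC (assumed)
                q=1e-7, r=1e-3):     # process / measurement noise variances
    # Predict: coulomb counting (discharge current taken as positive)
    soc_pred = soc - current_a * dt / Q_as
    P_pred = P + q
    # Update: innovation between measured and predicted terminal voltage
    v_pred = a + b * soc_pred - current_a * R
    H = b                              # d(voltage)/d(SOC) of the linear OCV model
    S = H * P_pred * H + r             # innovation variance
    K = P_pred * H / S                 # Kalman gain
    soc_new = soc_pred + K * (volt_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new
```

In use, this step would be iterated over each sample of the current/voltage profile; the EKF and UKF generalize it to a nonlinear OCV(SOC) curve.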
Authors - Asmit Patil, Smita Shedbale, Sneha Kumbhar, Ashwini Athawale, Smita Arude, Rohan S. Sapkal, Priya Sharma, Dhanaraj Jadhav Abstract - The rapid growth of Internet of Things (IoT) deployments in 5G Enhanced Machine-Type Communication (eMTC) networks has significantly increased the network attack surface. A major challenge for Network Anomaly Detection Systems (NADS) in this environment is severe class imbalance, where dominant benign traffic obscures rare but high-impact attacks, leading to poor minority-class detection. This paper presents Conf-Gate XGBoost-RF, a two-stage hybrid anomaly detection architecture designed to address this limitation without compromising real-time performance. The framework employs a high-speed XGBoost classifier for initial screening and a confidence-gated mechanism that selectively routes low-confidence predictions to a specialist Random Forest trained on synthetically balanced data. Evaluation on the large-scale CICIoT2023 dataset shows that the proposed model achieves 99.32% accuracy and a Macro F1-score of 0.80, substantially outperforming single-stage baselines. Notably, recall for critical low-volume attacks, such as Command Injection, improves by over 34%. With an average inference latency of 0.87 ms, the proposed approach remains compatible with the stringent low-latency requirements of 5G eMTC control signaling, demonstrating a practical balance between computational efficiency and rare-attack sensitivity.
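The confidence-gating idea described above can be sketched in a few lines: accept the fast stage-1 prediction when its top-class probability clears a threshold, otherwise defer to the specialist. The stand-in models and the 0.9 threshold below are illustrative assumptions; the paper's actual stages are XGBoost and a Random Forest.

```python
# Sketch of confidence-gated two-stage routing. The two callables stand in
# for trained classifiers that return a class -> probability mapping.

def gated_predict(x, fast_model, specialist, threshold=0.9):
    """Route a sample to the specialist only when stage-1 confidence is low."""
    probs = fast_model(x)
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    if conf >= threshold:
        return label                  # high confidence: accept stage-1 verdict
    # low confidence: defer to the specialist trained on balanced data
    return max(specialist(x).items(), key=lambda kv: kv[1])[0]

# Toy stand-in models (hypothetical, for illustration only)
fast = lambda x: ({"benign": 0.95, "attack": 0.05} if x < 5
                  else {"benign": 0.55, "attack": 0.45})
spec = lambda x: {"benign": 0.2, "attack": 0.8}
```

Only ambiguous samples pay the cost of the second model, which is how the design keeps average latency low while recovering minority-class recall.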
Authors - Nguyen Thanh Minh Tam, Mai Nhu Yen, Nguyen Quang Huy, Nguyen Thi Nhung, Nguyen Thi Huyen Chau, Nguyen Hoang Phuong, Dong Van He, Bui Xuan Cuong, Vladik Kreinovich Abstract - The rapid emergence of smart applications in smart environments, industrial automation, and cyber-physical systems has exposed the intrinsic limitations of conventional information and communication technology (ICT) design. Existing ICT systems are rigid, centrally controlled, and dependent on fixed operational logic, which restricts their adaptability to changing environments and complex system dynamics. Intelligent systems urgently require architectural paradigms that support self-learning, decentralized intelligence, and autonomous decision-making at scale. This paper proposes a next-generation Edge-Cloud-AI integrated ICT architecture designed to serve self-learning, autonomous intelligent systems. The proposed architecture offers a layered intelligence design that combines edge-level real-time learning, cloud-level global optimization, and an autonomy orchestration layer that balances adaptive behavior across distributed elements. By embedding continuous learning feedback and autonomous control directly within the ICT infrastructure, the architecture enables systems to form operational policies autonomously while remaining scalable and reliable. The significant contributions of this work are the definition of a unified Edge-Cloud intelligence system, the incorporation of self-learning mechanisms across the architectural layers, and an autonomy-based orchestration model applicable to diverse intelligent-system domains. The proposed architecture is not platform-specific and can be applied to a wide range of future intelligent applications that require resilience, versatility, and autonomy.
Authors - Nethika Alagarathnam, Dhanushka Jayasinghe, WU Wickramaarachchi Abstract - Social media platforms, especially Twitter, have become prominent sources for public health monitoring, as individuals often share personal experiences related to symptoms, diagnoses, and health concerns. However, detecting personal health mentions (PHMs) in such noisy, short-text environments remains challenging. This study evaluates and compares three neural architectures: a Long Short-Term Memory (LSTM) network with word embeddings, a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model, and a compact TinyBERT model distilled from BERT. Using a labeled corpus of health-related tweets, all models were trained under identical preprocessing, optimization, and evaluation conditions, with accuracy, precision, recall, and F1-score assessed on a held-out test set. The results reveal clear performance differences across the three architectural paradigms. The LSTM baseline learned the training set well but overfit significantly and failed to generalize to unseen data. In contrast, the transformer models BERT and TinyBERT delivered balanced performance, reflecting their ability to capture contextual semantics in noisy tweets. BERT achieved the highest overall performance, while TinyBERT provided a competitive alternative for deployment in constrained environments. These findings highlight the effectiveness of transformer architectures for personal health mention detection and offer practical insights for building efficient and accurate public health monitoring systems from social media data.
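The evaluation metrics named above are standard; as a quick reference, the following stdlib sketch computes macro-averaged F1, which weights each class equally and so penalizes models that ignore the rare "health mention" class. The helper name and toy labels are illustrative, not from the study.

```python
# Macro F1 averages per-class F1 equally, so a rare positive class counts
# as much as the dominant negative class. Pure-stdlib illustrative sketch.

def macro_f1(y_true, y_pred):
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # harmonic mean of precision and recall for this class
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

A model that labels everything as the majority class can still score high accuracy, but its macro F1 collapses, which is why such studies report both.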
Authors - Michael David, Chekwas Ifeanyi Chikezie, Abraham Usman Usman, Sulieman Zubair, Henry Ohiani Ohize, Joseph Ojeniyi Abstract - With the rapid growth of digital communication and multimedia applications, secure transmission and storage of digital images have become critical challenges. Conventional text-based encryption algorithms are often inadequate for image data due to its high redundancy, strong pixel correlation, and large data volume. These characteristics necessitate specialized encryption mechanisms that can provide strong security while maintaining computational efficiency. This paper proposes a robust image encryption framework designed to ensure confidentiality, resistance to cryptographic attacks, and suitability for real-time applications. The proposed approach integrates advanced permutation–diffusion operations with chaos-based key generation to effectively disrupt the statistical characteristics inherent in digital images. Chaotic maps with high sensitivity to initial conditions are employed to generate dynamic encryption keys, enhancing key space complexity and resistance against brute-force and differential attacks. Pixel-level scrambling is combined with nonlinear diffusion operations to eliminate spatial correlations and achieve uniform ciphertext distribution. The encryption process is further optimized to support grayscale and color images while preserving computational feasibility. Extensive experimental evaluation is conducted using standard benchmark images to assess security and performance. Statistical analyses, including histogram uniformity, correlation coefficients, information entropy, NPCR, and UACI metrics, demonstrate strong resistance against statistical and differential attacks. Key sensitivity and key space analyses confirm robustness against brute-force attempts.
Performance results indicate that the proposed scheme achieves a favorable balance between security strength and execution efficiency, making it suitable for real-time image protection. The experimental findings validate that the proposed image encryption framework provides enhanced security, scalability, and practicality, offering an effective solution for secure image communication in modern digital environments.
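The chaos-plus-diffusion principle described above can be illustrated in miniature: a logistic map seeded by the key generates a keystream, and chained XOR diffusion spreads any single-pixel change through the rest of the ciphertext. This is a didactic sketch with hypothetical parameters, not the paper's scheme, and it omits the permutation (scrambling) stage entirely.

```python
# Illustrative chaos-driven diffusion: a logistic map x <- r*x*(1-x) generates
# a keystream whose bytes are XORed into the pixel stream with chaining.

def logistic_keystream(x0, n, r=3.99):
    """Generate n keystream bytes from the logistic map (r in chaotic regime)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(pixels, x0=0.3141592653):
    ks = logistic_keystream(x0, len(pixels))
    cipher, prev = [], 0
    for p, k in zip(pixels, ks):
        c = p ^ k ^ prev          # chained XOR: each byte depends on the last
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, x0=0.3141592653):
    ks = logistic_keystream(x0, len(cipher))
    plain, prev = [], 0
    for c, k in zip(cipher, ks):
        plain.append(c ^ k ^ prev)
        prev = c
    return plain
```

The sensitivity to x0 is what makes key sensitivity analysis meaningful: a key differing in the tenth decimal place produces an entirely different keystream after a few iterations.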
Authors - Vinod B. Maniyat, Arun Kumar B. R, Shreyas A Abstract - Stealthy rogue components pose as legitimate nodes and progressively degrade services, hijack flows, or taint network topology, posing serious scalability and security threats to modern SDN networks. High false positive rates, poor interpretability, and static threat assumptions make it difficult for current rule-based and signature-driven detection systems to detect such contextual threats. This work offers a comprehensive behavioural defence paradigm based on the Dynamic Containment Score (DCS), a mathematically modelled, context-sensitive metric that measures each network node's disruptive potential. The framework integrates graph-theoretic features, protocol-specific entropy, and temporal volatility to compute real-time DCS rankings, refined through SHAP-based explainability and confidence-bounded feature attribution for adaptive detection under concept drift. A multi-strategy containment engine, including deception-based mitigation, redirects attackers toward synthetic vulnerabilities. Validation on hybrid real-world and adversarial traffic demonstrates superior early detection, explainable risk attribution, and efficient mitigation with minimal service disruption.
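A score of this kind can be pictured as a weighted blend of the three feature families named above. The sketch below is an illustrative assumption, not the paper's actual DCS formulation: the feature names, normalization, and weights are all hypothetical.

```python
# Illustrative composite risk score in the spirit of a Dynamic Containment
# Score: weighted blend of a graph-centrality feature, protocol-usage
# entropy, and temporal volatility (all inputs assumed pre-scaled to [0, 1]
# except the raw protocol histogram).
import math

def shannon_entropy(counts):
    """Entropy in bits of a protocol-usage histogram (list of counts)."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def containment_score(degree_centrality, proto_counts, volatility,
                      w=(0.4, 0.3, 0.3), max_entropy_bits=8.0):
    # Normalize entropy against an assumed ceiling, clamp volatility to [0, 1]
    ent = min(shannon_entropy(proto_counts) / max_entropy_bits, 1.0)
    vol = min(max(volatility, 0.0), 1.0)
    return w[0] * degree_centrality + w[1] * ent + w[2] * vol
```

Ranking nodes by such a score each observation window yields the kind of real-time DCS ranking the abstract describes; explainability tooling like SHAP then attributes each node's score back to its features.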
Authors - Saurabh Nimje, Anup Bhitre, Sudhir Agarmore, Utkarsha Pacharaney Abstract - Insider threats pose a grave danger to business security: because insiders hold trusted access rights, their malicious or careless actions are not easy to detect with conventional security programs. This paper examines how Natural Language Processing (NLP) can be used to detect insider threats proactively by analyzing employee communications such as emails, chat messages, and internal reports. Using the CERT Insider Threat Dataset and simulated logs, a multi-level system was created comprising text preprocessing, sentiment- and semantics-based feature extraction, and classification with machine learning models (Random Forest, SVM, and LSTM). Among these, the LSTM model performed best (92.6% accuracy and strongest overall performance), owing to its ability to capture contextual and sequential patterns in communication. The most notable behavioral indicators were negative sentiment, passive-aggressive language, and changes in communication frequency, all of which correlated strongly with insider threat. SHAP (SHapley Additive exPlanations) was also used in this research to provide enhanced insight into model decisions. The results demonstrate the viability of NLP-based solutions as scalable, context-sensitive, and explainable insider threat detection systems, extending organizations' ability to perceive behavioral anomalies and reduce risk to a minimum.
Authors - Maria Jihan Sangil, John Raymond Barajas, Ramnick Lim Abstract - Government identity documents have become the groundwork for citizen verification, financial inclusion, and public services. However, unauthorized disclosure, fraudulent access, and frequent misuse of individuals' personal information expose gaps in routine verification and lead to wrongful disbursement of welfare benefits, underscoring the immediate need for a more secure, privacy-preserving approach. Existing infrastructure such as DigiLocker secures transmission, but privacy exposure remains a concern. The proposed model introduces TrustChain, a blockchain-based framework for decentralized identity verification and secure access. The approach shifts the focus from submission of identity information to authentication by means of Decentralized Identifiers (DIDs), ensuring that personal information remains hidden from the document requestor. By integrating self-sovereign identity principles, distributed storage, and cryptographic operations, the model enables users to authenticate without revealing personal attributes, minimizing the risk of identity compromise. Pilot findings indicate that identity exposure is mitigated by this representation and that the model scales toward integration with existing infrastructures such as Aadhaar-connected services and DigiLocker. The result is a safe identity space in which individuals retain ownership of personal information while organizations and governments achieve acceptable validation.
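The "authenticate without revealing" idea can be illustrated with the simplest possible cryptographic building block: a salted hash commitment. The holder publishes only the commitment; a verifier checks a selectively revealed attribute against it. This is a didactic simplification of DID-style selective disclosure, not the TrustChain protocol, and the attribute strings below are invented examples.

```python
# Minimal sketch of attribute verification without upfront disclosure:
# the holder commits to an attribute with a salted SHA-256 hash; a verifier
# later checks a revealed (attribute, salt) pair against that commitment.
import hashlib

def commit(attribute: str, salt: bytes) -> str:
    """Salted hash commitment to a single identity attribute."""
    return hashlib.sha256(salt + attribute.encode()).hexdigest()

def verify(attribute: str, salt: bytes, commitment: str) -> bool:
    """Check a revealed attribute/salt pair against a stored commitment."""
    return commit(attribute, salt) == commitment
```

The salt prevents a requestor from brute-forcing low-entropy attributes (such as birth dates) from the commitment alone; production DID systems layer signatures and revocation on top of this primitive.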
Authors - K. Subba Reddy, Pasulammagari Jahnavi, Devasam Hema Keerthana, Katreddy Jaswanth Reddy, Dudhekula Abdul Gaffar Abstract - The investigation of past events is the foundation of modern law enforcement, which relies extensively on surveillance video to identify suspects and criminal motives. However, manual monitoring of video data is slow, tiresome, and error-prone given the enormous volumes of footage involved. Coping with this requires a transition toward automated technologies driven by AI and deep learning. Such systems can systematically detect faces, masks, gaits, and abnormal behavior even in adverse environments. This survey reviews methods for video-based face recognition, gait analysis, and anomaly detection. In addition, it works toward a standardized AI-based surveillance framework for lawful multimodal, multitask biometric and behavioral identification. The proposed system focuses on radically reducing false alarms and offering adequate, prompt intelligence to law enforcement agencies to speed up investigations.
Authors - Keerin Nopanitaya, Luo Xiaoyu, Zhu Chunping, Pratya Nuankaew Abstract - This study examines Generative AI use and ethical guidelines in graduate education at Payap University, Thailand. As large language models increasingly support learning, research, and academic writing, they boost efficiency but raise concerns about accuracy, transparency, and integrity. Using mixed methods, the study gathered questionnaire data and conducted interviews and focus groups with master’s and doctoral students. Results show broad AI use for literature reviews, writing, idea generation, and research, with more advanced use expected to grow. While students report moderate to high skills, many lack strong critical evaluation of AI outputs and practical understanding of ethics. Consistent with international research, key risks stem from limited AI literacy, unclear disclosure, and lack of oversight rather than the technology itself. The study recommends developing an AI literacy framework, clear disclosure standards, and process evaluation for ethical, responsible AI integration while protecting academic quality and integrity.