Authors - Chaitra Sai Chakravarthi Ganapaneni, Rishik Reddy Cheruku, Venkata Karthik Chamarthi, Venkata Sasidhar Kommu, Malathi P Abstract - Academic websites function as institutional interfaces connecting universities with multiple stakeholder groups. Many institutions face challenges in developing web presences that address usability, accessibility, and stakeholder needs simultaneously. Existing frameworks address isolated dimensions without providing integrated guidance. This research proposes a conceptual design framework for academic websites that integrates Web Content Accessibility Guidelines (WCAG) 2.1 Level AA standards with Norman's design principles. The framework consists of four core segments (Interface Design, Content Accessibility, Technical Performance, User Experience) and four modular add-on categories (Career and Job Opportunities, Student Projects Showcase, Alumni Community, Industry Collaboration). Framework validation employed dual evaluation methods to ensure both conceptual soundness and stakeholder relevance. Expert judgment assessment (n=5) achieved complete agreement on conceptual soundness. Quantitative user assessment (n=450) across six stakeholder groups showed that framework components achieved good performance levels (mean scores 3.58 to 3.70) and add-on features received high priority classifications (mean scores 3.62 to 3.80). The framework contributes systematic integration of accessibility standards with design principles and provides guidance for institutions developing academic websites.
Authors - Amulya Saxena, Pratibha Joshi, Adwitiya Sinha Abstract - Global food security and hunger mitigation is one of the major challenges ahead of us. Populations in underdeveloped countries in particular are highly vulnerable to climate change, whose impact through abnormal weather and poor harvests leads to food shortages. In today’s globalised world, a disruption in the food supply chain potentially affects everyone on the planet, making this a mounting challenge. The advent of Artificial Intelligence, and specifically Computer Vision techniques, proves extremely helpful in identifying patterns and anomalies in images of cultivated land and offers insight into farming challenges such as the effects of bad weather, poor crop prediction, and crop distribution. The availability of high-quality geospatial data from satellites such as Sentinel-1/2 and Landsat enables advanced ML techniques to provide timely predictions so that corrective action can be taken in time. This study focuses on an AI-driven approach that distinguishes land where rice will be produced from no-crop land using satellite optical data and its variates, radar logs, weather data, and location information.
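As a rough illustration of the crop/no-crop prediction task described above (not the authors' actual pipeline), the following sketch trains a binary classifier on synthetic tabular features standing in for Sentinel-2 optical indices, Sentinel-1 radar backscatter, weather, and location variables; all feature names and data here are hypothetical.

```python
# Minimal illustrative sketch (not the authors' pipeline): a binary classifier
# separating rice-producing land from no-crop land using tabular features
# derived from satellite and weather sources. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-parcel features: Sentinel-2 NDVI, Sentinel-1 VV/VH backscatter,
# cumulative rainfall, mean temperature, latitude, longitude.
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = rice

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["no-crop", "rice"]))
```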
Authors - Arin Bansal, Pranshu CBS Negi Abstract - This research describes WaveTrust, a trust-conscious and energy-efficient routing protocol for Underwater Wireless Sensor Networks (UWSNs) based on Q-learning and trust assessment. The protocol is initialized with neutral trust values at network deployment. During real-time data routing, node behavior is monitored with respect to four metrics: Packet Forwarding Ratio, Energy Behavior Consistency, Latency Observance, and Link Quality Indicator. Trust is computed from direct and indirect observations, making it possible to identify malicious nodes. The Q-learning routing strategy uses rewards weighted by energy, trust, and latency to update paths, favoring nodes with high trust and high Q-values. Nodes dynamically revise trust and Q-values based on feedback received during data transmission, and the sink node periodically broadcasts global updates of trust thresholds and routing information. Simulation results indicate that WaveTrust outperforms T-AODV and FuzzyTrust in packet delivery ratio, detection accuracy, energy consumption, and routing overhead, and shows clear strength in dynamic, resource-limited underwater settings. These results suggest that WaveTrust is a flexible protocol capable of providing secure and energy-efficient routing in UWSNs.
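To make the routing mechanism concrete, the sketch below shows a generic trust- and energy-weighted Q-learning update of the kind the abstract describes; the reward weights, metric scaling, and function names are assumptions for illustration and are not WaveTrust's published equations.

```python
# Minimal sketch of a trust- and energy-weighted Q-learning routing update;
# the weights and reward form are assumed, not taken from the WaveTrust paper.
def reward(trust, residual_energy, latency, w_trust=0.5, w_energy=0.3, w_latency=0.2):
    # Higher trust and residual energy increase the reward; higher latency decreases it.
    return w_trust * trust + w_energy * residual_energy - w_latency * latency

def q_update(q, node, next_hop, r, q_next_max, alpha=0.1, gamma=0.9):
    # Standard Q-learning rule applied to (current node, candidate next hop) pairs.
    old = q.get((node, next_hop), 0.0)
    q[(node, next_hop)] = old + alpha * (r + gamma * q_next_max - old)
    return q[(node, next_hop)]

# Example: node A evaluating neighbor B as the next hop toward the sink.
q_table = {}
r = reward(trust=0.8, residual_energy=0.6, latency=0.2)
q_update(q_table, "A", "B", r, q_next_max=0.0)
# Forwarding then favors the neighbor with the highest Q-value among trusted nodes.
best = max(q_table, key=q_table.get)
print(best, q_table[best])
```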
Authors - Harita Venkatesan Abstract - Fusion-based multimodal models typically assume full modality availability at inference, an assumption that often fails in real-world settings. When a modality is missing, common strategies such as zero-vector masking or unimodal fallback can lead to unstable predictions. We propose CORE, an embedding-level framework that completes multimodal representations by integrating original and cross-modally reconstructed embeddings in a fusion-consistent manner prior to fusion. CORE employs lightweight bidirectional cross-modal imagination networks with a cycle-consistency constraint to preserve shared semantic structure across modalities. The model is trained with stochastic modality dropout, enabling unified inference under complete and incomplete modality configurations. Experiments on a multimodal MRI–text classification task for lumbar spine analysis demonstrate that CORE yields more stable predictions than zero-vector masking under severe modality absence, while maintaining comparable performance when all modalities are present.
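The following minimal PyTorch sketch illustrates the general idea of embedding-level completion with bidirectional cross-modal reconstruction and a cycle-consistency loss; the layer sizes, fusion rule, and class names are illustrative assumptions rather than CORE's actual architecture.

```python
# Illustrative sketch of embedding-level completion: when the text modality is
# missing, a lightweight cross-modal network reconstructs its embedding from the
# image embedding before fusion. Dimensions and fusion rule are assumed.
import torch
import torch.nn as nn

class CrossModalImagination(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.img_to_txt = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.txt_to_img = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def complete(self, img_emb=None, txt_emb=None):
        # Reconstruct whichever modality is missing from the one that is present.
        if txt_emb is None:
            txt_emb = self.img_to_txt(img_emb)
        if img_emb is None:
            img_emb = self.txt_to_img(txt_emb)
        return torch.cat([img_emb, txt_emb], dim=-1)  # fused representation

    def cycle_loss(self, img_emb, txt_emb):
        # Cycle-consistency: mapping across modalities and back should recover the input.
        return (nn.functional.mse_loss(self.txt_to_img(self.img_to_txt(img_emb)), img_emb)
                + nn.functional.mse_loss(self.img_to_txt(self.txt_to_img(txt_emb)), txt_emb))

model = CrossModalImagination()
img = torch.randn(4, 128)
fused_missing_text = model.complete(img_emb=img, txt_emb=None)  # text modality absent
print(fused_missing_text.shape)  # torch.Size([4, 256])
```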
Authors - Latha N. R., Pallavi G B, Shyamala G., Abubakar Mohammedshafee Matte, Aditya Dinesh Netrakar, Akshara Singa, Akshata Hosmani Abstract - Tourism has become a strategic pillar in China’s transition toward a service-oriented economy, and world cultural heritage sites play an important role in promoting cultural–tourism integration both in China and globally. The Dazu Rock Carvings, located in Chongqing, are well known for their unique synthesis of Buddhist and Taoist ideas and their remarkable stone-carving artistry. In recent years, the Dazu site has seen growing tourist arrivals and tourism-related revenue owing to rapid regional development and strategic support; however, compared with other outstanding heritage destinations such as the Mogao Grottoes, the reception capacity, product diversity, brand influence, and market performance of Dazu remain relatively weak. This study adopts a mixed qualitative–quantitative case study design. Data are collected from official tourism statistics and cultural heritage management reports published by national and local authorities between 2018 and 2024. Descriptive analysis is used to explore trends in tourist arrivals, tourism revenue, and related industrial effects. Based on the findings, the study identifies key dimensions of sustainable development and proposes a marketing path centered on cultural IP empowerment, industrial ecosystem construction, and digital technology-driven innovation, offering practical guidance for similar heritage destinations.
Authors - Deepa V, Atul Anilkumar, Sheena Susan Andrews Abstract - Organizations are rapidly embedding artificial intelligence (AI), including generative AI, into core business functions, but making AI sustainable across environmental, social, and economic dimensions is still challenging, especially when data governance is weak. Public estimates suggest data centres consumed roughly 415 TWh of electricity in 2024 and may rise toward ~945 TWh by 2030 under a base-case trajectory, while reported AI-related incidents reached a new high in 2024. In parallel, industry signals point to fast enterprise adoption of GenAI and ongoing leakage of sensitive information through tools that are not properly governed. Taken together, these patterns increase sustainability risks that are often data-mediated in practice, shaped by data quality and representativeness, provenance and documentation, access control, privacy protections, and end-to-end lifecycle management. Although data governance is widely seen as “foundational” to responsible AI, the concrete mechanisms linking governance capabilities to sustainable AI outcomes, and the ways to measure them, remain dispersed across data management, AI governance, and sustainability research. This paper consolidates peer-reviewed research, public standards, and open industry evidence to position data governance as an operational, measurable capability for Sustainable AI, one that converts sustainability goals into decision rights, lifecycle controls, and auditable outcomes. It contributes: (i) a capability-based taxonomy of data governance tailored to AI lifecycles; (ii) six evidence-grounded impact pathways showing how governance mechanisms influence outcomes (quality and fairness; documentation and auditability; privacy and security; interoperability and reuse; lifecycle stewardship; and sustainability instrumentation); and (iii) the Sustainable AI Data Governance Impact Model (SAI-DGIM), accompanied by testable hypotheses (H1–H8) and a KPI-oriented measurement framework that can be validated using survey constructs, system telemetry, and governance artifacts. For practitioners, the model offers a practical roadmap to embed governance controls directly into AI delivery workflows and treat sustainability metrics as release criteria, not just retrospective reporting. For researchers, it provides aligned constructs, hypotheses, and measurement guidance to rigorously assess how organizational data governance shapes Sustainable AI outcomes at scale.
Authors - Nhat Ho Minh, Long Le Pham Tien, Kien Nguyen Trung, An Pham Nam, Trong Nhan Phan Abstract - The rapid increase in the number of unstructured digital documents in academic, industrial, and personal settings has created an urgent need for intelligent systems that can automatically read, organize, and structure documents. Traditional document organization methods have relied heavily on manual intervention or rule-based approaches, which are neither scalable, efficient, nor error-free. This paper presents a multimodal AI architecture for document understanding and structuring that uses large language models (LLMs) and vision-language models to handle heterogeneous document types. The proposed framework performs semantic metadata extraction, document classification, and structural organization of textual and visual documents. It uses a modular three-layer design comprising an AI processing layer, a service-oriented backend, and cross-platform user interfaces. The system also supports secure offline operation, ensuring data privacy and low-latency processing. The effectiveness of the proposed framework is demonstrated through experimental assessment, which shows high precision in document classification and image categorization. The findings show that multimodal AI substantially outperforms traditional systems in document understanding and automation.
Authors - S M Mazharul Hoque Chowdhury, Ruth West, Stephanie Ludi Abstract - The prediction of liver disease through clinical data analysis faces difficulties because current machine learning methods fail to handle class imbalance and produce poorly calibrated probability estimates. Existing supervised and ensemble methods use fixed decision thresholds together with heuristic weighting, which results in biased predictions that compromise their ability to achieve balanced performance. This research introduces CAL-WE++, a Calibration-Weighted Ensemble system with an MCC-Optimized Threshold for liver disease prediction. The system employs five-fold stratified cross-validation without data leakage to produce out-of-fold probability estimates. Model weights are determined by evaluating both a model's ability to distinguish between outcomes (measured through ROC-AUC) and its accuracy in predicting probabilities (assessed through Expected Calibration Error, ECE). The Matthews Correlation Coefficient (MCC) serves as the optimization criterion for the final classification threshold, which helps address class imbalance. Experiments on the Indian Liver Patient Dataset (583 records; 416 diseased, 167 non-diseased) show that CAL-WE++ achieves a mean cross-validation MCC of 0.3474 and a test MCC of 0.4487, exceeding the performance of baseline classifiers. The model achieves a ROC-AUC of 0.8140 and a PR-AUC of 0.9272 while maintaining a low ECE of 0.0774, demonstrating strong discrimination and accurate probability estimates. The CAL-WE++ framework offers medical professionals a decision-support system that balances multiple criteria while delivering dependable outcomes for medical datasets with unequal class distributions.
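The sketch below illustrates, on synthetic data, the general recipe the abstract outlines: weight ensemble members by discrimination (ROC-AUC) and calibration (ECE) computed on out-of-fold probabilities, then select the decision threshold that maximizes MCC; the specific weighting formula and helper names are assumptions, not the published CAL-WE++ procedure.

```python
# Illustrative sketch (not the authors' exact CAL-WE++ code): weight each model by
# ROC-AUC scaled by inverse ECE on out-of-fold predictions, then pick the decision
# threshold that maximizes MCC. The weighting formula is an assumed example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, matthews_corrcoef

def expected_calibration_error(y, p, bins=10):
    edges = np.linspace(0, 1, bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (p >= lo) & (p < hi)
        if m.any():
            ece += m.mean() * abs(y[m].mean() - p[m].mean())
    return ece

X, y = make_classification(n_samples=583, weights=[0.3, 0.7], random_state=0)
models = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
oof = [cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1] for m in models]

# Weight = discrimination (ROC-AUC) scaled by calibration quality (1 - ECE); assumed form.
w = np.array([roc_auc_score(y, p) * (1 - expected_calibration_error(y, p)) for p in oof])
w /= w.sum()
p_ens = np.average(np.vstack(oof), axis=0, weights=w)

# MCC-optimized threshold over a grid of candidate cut-offs.
grid = np.linspace(0.05, 0.95, 91)
best_t = max(grid, key=lambda t: matthews_corrcoef(y, (p_ens >= t).astype(int)))
print(best_t, matthews_corrcoef(y, (p_ens >= best_t).astype(int)))
```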
Authors - Nidhi Pruthi, Rajiv Singh, Swati Nigam Abstract - Automatic Speech Recognition (ASR) systems have achieved remarkable progress through deep learning and Transformer-based architectures, demonstrating near-human accuracy on clean audio. However, their performance degrades significantly under challenging conditions and specialized domains. This comprehensive study evaluates leading commercial ASR APIs—Google Cloud Speech-to-Text, Microsoft Azure Speech Service, AssemblyAI, Deepgram, OpenAI Whisper, Speechmatics, and others—across multiple dimensions: general speech recognition, low-quality forensic-like audio, domain-specific mathematical notation, and personalized speaker adaptation. Results demonstrate 100% accuracy on clean audio for leading systems (Deepgram, Speechmatics, WebKit SpeechRecognition), but dramatic performance degradation to 10–81% word error rates on forensic-like audio. Analysis of domain-specific challenges reveals that none of the tested commercial ASR systems natively support direct transcription of mathematical symbols and Greek letters into structured symbolic output (e.g., LaTeX). The study identifies critical limitations in robustness, modularity, and domain adaptation, while highlighting promising customization mechanisms including custom vocabularies, language models, and post-processing integration. Performance improvements through speaker personalization ranged from 3% for natural voices to 10% for synthetic voices. Despite notable advances in end-to-end and Transformer-based approaches, ASR systems remain unsuitable for forensic applications and specialized domains without substantial customization and post-processing. Future research must address low-resource performance, linguistic diversity, robustness in extreme noise, and the integration of Large Language Models for semantic understanding. This paper synthesizes recent advances and critical gaps, providing a roadmap for advancing ASR technology in specialized and challenging acoustic environments.
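For readers unfamiliar with the metric used throughout the comparison, the short sketch below computes word error rate (WER) from a reference and a hypothesis transcript via word-level edit distance; the example sentences are invented and not drawn from the study's test material.

```python
# Hypothetical illustration of computing word error rate (WER) between a reference
# transcript and an ASR hypothesis; sentences below are invented examples.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Levenshtein distance over words: substitutions + insertions + deletions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the integral of x squared", "the integral of x square"))  # 0.2 -> 20% WER
```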
Authors - G Naga sree suma, A. Kamala kumari Abstract - The growth of social media has created complex cyber systems in which vast quantities of interactions raise substantial issues of misinformation, privacy invasion, identity deception, and destructive behavioural tendencies. Regular involvement in such large systems requires sophisticated mechanisms able to judge user motive, content validity, and suspicious activity in real time. The overall aim is to develop a universal trust-calculation system that is more secure and effective in preserving privacy and in improving the accuracy of identifying suspicious or malicious users on social sites. The proposed Multi-Layer Federated Trust Framework algorithm combines peer-based user reputation scoring, feature-based content authenticity detection, federated aggregation of trust indicators, and detection of behavioural anomalies. These approaches work together with secure aggregation and decentralized learning to remove raw data exposure and enable trust computation at scale. The proposed algorithm is experimentally validated, and the obtained results are 95.2, 94.1, 93.5, and 93.8, together with a minimum latency of 65 ms and a privacy preservation score of 0.98. The overall results indicate a viable and holistic solution that contributes to secure interactions, blocks malicious acts, and fosters trust in real social media settings.
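As a loose illustration of federated aggregation of trust indicators (omitting the secure-aggregation cryptography and decentralized learning the framework relies on), the sketch below combines per-client trust scores into a global score; the indicator names, weights, and threshold are assumed example values.

```python
# Minimal sketch of federated aggregation of per-client trust indicators; the
# indicator names, weights, and anomaly rule are illustrative assumptions and
# omit the secure-aggregation machinery the framework depends on.
import numpy as np

def local_trust(reputation, content_authenticity, anomaly_score, w=(0.4, 0.4, 0.2)):
    # Higher reputation/authenticity raise trust; a higher anomaly score lowers it.
    return w[0] * reputation + w[1] * content_authenticity + w[2] * (1 - anomaly_score)

def federated_aggregate(local_scores, client_sizes):
    # Server-side weighted average, analogous to FedAvg over scalar trust indicators.
    sizes = np.asarray(client_sizes, dtype=float)
    return float(np.average(local_scores, weights=sizes / sizes.sum()))

# Example: three clients report trust indicators for the same user account.
scores = [local_trust(0.9, 0.8, 0.1), local_trust(0.7, 0.9, 0.2), local_trust(0.2, 0.3, 0.9)]
global_trust = federated_aggregate(scores, client_sizes=[1200, 800, 300])
flagged = global_trust < 0.5  # threshold is an assumed example value
print(round(global_trust, 3), "suspicious" if flagged else "trusted")
```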