Authors - Anil Kumar Bandani, Anupama Bollampally, Ramesh Deshpande, B Saritha, P Rajesh

Abstract - Transformer-based models struggle with continual learning because of catastrophic forgetting. This paper presents Lapis Whale, a framework that incorporates a Selective Replay Utilization Mechanism (SERUM) to help a model retain previously learned knowledge while adapting to new tasks. The approach maintains a memory buffer of representative samples from earlier tasks and replays them during training. Experiments on the CIFAR-100 dataset show improved accuracy retention and reduced forgetting compared to standard fine-tuning. The framework is computationally efficient and well-suited to real-world adaptive AI systems.
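To illustrate the general idea of a replay memory buffer described above, the following is a minimal Python sketch. The abstract does not specify how SERUM selects its representative samples, so reservoir sampling is used here purely as a stand-in; the class and method names are hypothetical and not from the paper.

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of samples from earlier tasks.

    Hypothetical sketch: SERUM's actual selection criterion is not
    given in the abstract, so reservoir sampling stands in for it.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total samples offered so far

    def add(self, sample):
        # Reservoir sampling keeps a uniform random subset of all
        # samples ever offered, within the fixed capacity.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = sample

    def mix(self, new_batch, replay_fraction=0.5):
        # Interleave replayed old-task samples with a new-task batch,
        # so each training step sees both old and new data.
        k = min(len(self.buffer), int(len(new_batch) * replay_fraction))
        return list(new_batch) + random.sample(self.buffer, k)
```

In use, `add` would be called on samples from each finished task, and `mix` would augment every new-task mini-batch before the forward pass, which is the basic mechanism by which replay mitigates forgetting.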