Authors - C Ashik Poojary, Chirag B Jogi, Sanath Shetty, Sandhya P, Mahitha G

Abstract - Image inpainting plays an important role in restoring and reconstructing degraded or damaged images by filling in missing regions. This work proposes a gated convolutional neural network based on a U-Net architecture to achieve perceptually accurate, high-resolution restoration. The model was trained on a large-scale dataset of over 20,000 images derived from the CelebA dataset and augmented with synthetic degradations such as scratches, cracks, random patches, blurring, sepia toning, and grayscale conversion. The proposed method restores images in two phases: context-aware inpainting followed by resolution enhancement, preserving both global structure and local texture. Quantitative evaluation using PSNR and SSIM, together with qualitative comparisons, demonstrates faithful texture synthesis and tone-consistent fills across color, grayscale, and sepia domains.
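The gating mechanism underlying a gated convolution (as popularized for free-form image inpainting) can be sketched in NumPy as follows. This is a minimal single-channel illustration, not the paper's actual architecture: the kernel shapes, the tanh feature activation, and the hand-rolled `conv2d` helper are all illustrative assumptions. The key idea is that a second, parallel convolution produces a learned soft mask in (0, 1) that modulates the feature response, letting the network down-weight invalid (damaged) pixels.

```python
import numpy as np

def conv2d(x, w):
    # 'valid' 2D cross-correlation of a single-channel image x with kernel w.
    # Illustrative helper; a real model would use an optimized library conv.
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def gated_conv(x, w_feat, w_gate):
    # Gated convolution: output = phi(conv(x, w_feat)) * sigmoid(conv(x, w_gate)).
    # The sigmoid branch acts as a learned per-pixel soft mask.
    feat = np.tanh(conv2d(x, w_feat))                 # feature branch, in (-1, 1)
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate)))   # gating branch, in (0, 1)
    return feat * gate

# Example: one gated-convolution pass over a random 8x8 patch.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w_feat = rng.standard_normal((3, 3)) * 0.1
w_gate = rng.standard_normal((3, 3)) * 0.1
y = gated_conv(x, w_feat, w_gate)   # shape (6, 6), values bounded in (-1, 1)
```

In a full network, both kernels are learned jointly, and stacking such layers inside a U-Net lets the gate propagate information about which regions are missing through the encoder-decoder hierarchy.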