Gated Fusion Network with Reprogramming Transformer Refiners for Adaptive Underwater Image Dehazing

Lyes Saad Saoud and Irfan Hussain

Khalifa University of Science and Technology, UAE | Preprint 2025

GFN system illustration

Architecture of the proposed underwater image dehazing model. The Gated Fusion Network (GFN) integrates a Swin Transformer for multi-scale feature extraction and confidence map generation. Reprogramming Adaptive Transformation Units (RATUs) apply targeted enhancements, including white balancing, gamma correction, and histogram equalization, based on the confidence maps. The gated fusion mechanism selectively combines these enhanced features to produce a refined output with improved contrast, color balance, and visibility across diverse underwater conditions.

Before and After Dehazing

Before Dehazing After Dehazing

Abstract

Underwater image quality degradation due to light absorption, scattering, and low illumination significantly hinders visual clarity and usability in critical applications such as marine research, robotics, and environmental monitoring. Standard enhancement techniques often struggle to generalize across varying underwater conditions, limiting their effectiveness in real-world applications. To overcome these challenges, we propose the Gated Fusion Network (GFN), a novel deep learning framework that integrates a Swin Transformer backbone with Reprogramming Adaptive Transformation Units (RATUs) to perform adaptive underwater image enhancement. GFN utilizes the Swin Transformer to extract multi-scale contextual features and generate confidence maps that guide RATUs in applying targeted image corrections, including white balancing, gamma correction, and histogram equalization. This adaptive processing ensures that enhancements are applied selectively based on local scene characteristics, effectively restoring color balance, contrast, and fine details. A gated fusion mechanism then selectively integrates these enhanced outputs, optimizing the overall visual quality while reducing noise and artifacts. Extensive evaluations on real-world underwater datasets demonstrate that GFN consistently outperforms existing approaches. On the EUVP dataset, GFN achieves an average PSNR improvement of 3.1 dB over WaterNet, with similar gains observed on ocean_ex and LSUI400. These results establish a new benchmark for underwater image enhancement, offering a robust and adaptable solution for applications that require high-quality underwater imagery. For interactive visualizations, animations, source code, and access to the preprint, visit: https://lyessaadsaoud.github.io/GFN/.
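The gated fusion idea described above can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: in GFN the per-pixel gate logits come from Swin Transformer confidence maps, whereas here random logits stand in, and the three enhancement branches are simple classical approximations (gray-world white balance, fixed gamma, CDF-based histogram equalization).

```python
import numpy as np

def white_balance(img):
    # Gray-world white balance: scale each channel toward the global mean.
    means = img.mean(axis=(0, 1), keepdims=True)
    return np.clip(img * (means.mean() / (means + 1e-6)), 0.0, 1.0)

def gamma_correct(img, gamma=0.7):
    # Gamma < 1 brightens the dark regions typical of underwater scenes.
    return np.clip(img, 0.0, 1.0) ** gamma

def hist_equalize(img):
    # Per-channel histogram equalization via the empirical CDF.
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        chan = img[..., c]
        hist, bins = np.histogram(chan, bins=256, range=(0.0, 1.0))
        cdf = hist.cumsum().astype(np.float64)
        cdf /= cdf[-1]
        out[..., c] = np.interp(chan, bins[:-1], cdf)
    return out

def gated_fusion(img, logits):
    # img: (H, W, 3) in [0, 1]; logits: (H, W, K) per-pixel gate scores,
    # one per enhancement branch (K = 3 here).
    branches = np.stack(
        [white_balance(img), gamma_correct(img), hist_equalize(img)],
        axis=-1,                                  # (H, W, 3, K)
    )
    gates = np.exp(logits - logits.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)    # softmax over branches
    # Convex combination of branches at every pixel -> (H, W, 3).
    return (branches * gates[:, :, None, :]).sum(axis=-1)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
logits = rng.standard_normal((64, 64, 3))  # stand-in for learned confidence maps
fused = gated_fusion(img, logits)
print(fused.shape)  # (64, 64, 3)
```

Because the gates form a softmax, the fused image is a per-pixel convex combination of the branch outputs, so it stays in [0, 1] and degrades gracefully when one branch is unhelpful for a given region.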

RATU illustration

The proposed Reprogramming Adaptive Transformation Units (RATUs) for context-aware enhancement. The yellow blocks indicate the new components introduced in our model.

BibTeX


@article{GFN2024,
  author    = {Saad Saoud, Lyes and Hussain, Irfan},
  title     = {GFN: Gated Fusion Network for Underwater Image Enhancement},
  year      = {2024},
  publisher = {Preprint},
  doi       = {......},
  url       = {https://arxiv.org/...}
}