FreqPatchNet: A Dual-Domain Patch-Wise Fusion Network for Robust Phase Correction in Underwater Image Reconstruction
Received: 27 June 2025 | Revised: 15 July 2025 | Accepted: 20 July 2025 | Online: 2 August 2025
Corresponding author: C. Lohith
Abstract
This paper presents FreqPatchNet, a novel patch-wise dual-domain Convolutional Neural Network (CNN) designed to correct phase distortions in underwater images. The model combines bispectral frequency-domain features with local CNN regression to reconstruct clean images from distorted inputs. Evaluated using Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE), FreqPatchNet achieves a peak PSNR of 35.6 dB and a minimum MSE of 0.28 at a 10% distortion level. A comparative analysis against state-of-the-art methods shows that the proposed model achieves superior structural similarity. Real-world tests confirm its potential for underwater robotics and vision applications. A minimal illustrative sketch of the patch-wise dual-domain fusion idea is given below.
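To make the dual-domain idea concrete, the following is a minimal PyTorch sketch of a patch-wise fusion block. The layer sizes, the residual correction head, and the use of FFT magnitude and phase as a simple stand-in for the paper's bispectral features are illustrative assumptions, not the authors' published architecture; PSNR is computed from the MSE, as in the reported evaluation.

    # Minimal sketch of a patch-wise dual-domain fusion block (PyTorch).
    # All layer names, sizes, and the FFT magnitude/phase proxy for bispectral
    # features are illustrative assumptions, not the authors' exact design.
    import torch
    import torch.nn as nn

    class DualDomainPatchBlock(nn.Module):
        def __init__(self, channels: int = 3, feat: int = 32):
            super().__init__()
            # Spatial branch: local CNN regression over the distorted patch.
            self.spatial = nn.Sequential(
                nn.Conv2d(channels, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
            # Frequency branch: operates on FFT magnitude and phase of the patch.
            self.freq = nn.Sequential(
                nn.Conv2d(2 * channels, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
            # Fusion head regresses a correction for the patch.
            self.head = nn.Conv2d(2 * feat, channels, 3, padding=1)

        def forward(self, patch: torch.Tensor) -> torch.Tensor:
            # patch: (B, C, H, W) distorted input patch with values in [0, 1]
            spec = torch.fft.fft2(patch)
            freq_in = torch.cat([spec.abs(), spec.angle()], dim=1)
            fused = torch.cat([self.spatial(patch), self.freq(freq_in)], dim=1)
            # Residual correction keeps the regression anchored to the input patch.
            return torch.clamp(patch + self.head(fused), 0.0, 1.0)

    def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
        """Peak Signal-to-Noise Ratio in dB, computed from the MSE."""
        mse = torch.mean((pred - target) ** 2)
        return (10.0 * torch.log10(max_val ** 2 / mse)).item()

    # Usage: correct a 64x64 patch and score it against a clean reference.
    if __name__ == "__main__":
        model = DualDomainPatchBlock()
        distorted = torch.rand(1, 3, 64, 64)
        clean = torch.rand(1, 3, 64, 64)
        restored = model(distorted)
        mse = torch.mean((restored - clean) ** 2).item()
        print(f"PSNR: {psnr(restored, clean):.2f} dB, MSE: {mse:.4f}")

In this sketch the residual connection means the network only has to learn the phase-distortion correction rather than the full image, which is one common design choice for restoration regressors; the actual FreqPatchNet fusion strategy may differ.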
Keywords:
underwater image enhancement, phase distortion correction, bispectrum features, patch-wise convolutional neural network, FreqPatchNet, frequency domain analysis, deep learning, SSIM, PSNR, MSE, image reconstruction, sinusoidal attack modeling, underwater robotics, UCIQE, UIQM
License
Copyright (c) 2025 Chamkur V. Deepthi, Lohit Bingi, R. S. Shoma, K. Arpitha, C. Lohith, A. K. Vasumathi, A. Gnanasundari

This work is licensed under a Creative Commons Attribution 4.0 International License.