FreqPatchNet: A Dual-Domain Patch-Wise Fusion Network for Robust Phase Correction in Underwater Image Reconstruction

Authors

  • Deepthi Chamkur V., Electronics and Communication Engineering, School of Engineering, Dayananda Sagar University, Harohalli, Bangalore South, Karnataka, India
  • Lohit Bingi, Innovative Business Concepts Inc., Somerset, New Jersey, USA
  • R. S. Shoma, Information Science and Engineering Department, Cambridge Institute of Technology, K R Puram, Bangalore, Karnataka, India
  • K. Arpitha, Computer Science and Engineering Department, BGS Institute of Technology, Adichunchanagiri University, BG Nagara, Mandya, Karnataka, India
  • C. Lohith, Department of CSE (IoT & CSBT), East Point College of Engineering and Technology, Bangalore, Karnataka, India
  • A. K. Vasumathi, Computer Science and Engineering Department, Cambridge Institute of Technology, K R Puram, Bangalore, Karnataka, India
  • A. Gnanasundari, Dr. APJ Abdul Kalam School of Engineering, Garden City University, Bangalore, Karnataka, India
Volume: 15 | Issue: 5 | Pages: 26771-26776 | October 2025 | https://doi.org/10.48084/etasr.12990

Abstract

This paper presents FreqPatchNet, a novel patch-wise dual-domain Convolutional Neural Network (CNN) designed to correct phase distortions in underwater images. The model combines bispectral frequency features with local patch-wise CNN regression to reconstruct clean images from distorted inputs. Evaluated using Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE), FreqPatchNet achieves a maximum PSNR of 35.6 dB and a minimum MSE of 0.28 at 10% distortion. A comparative analysis with state-of-the-art methods shows that the proposed model achieves superior structural similarity. Real-world tests confirm its potential for underwater robotics and vision applications.
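As a rough illustration of the patch-wise dual-domain idea and the evaluation metrics mentioned above, the following Python sketch splits an image into non-overlapping patches, pairs each patch's spatial intensities with frequency-domain (FFT magnitude and phase) features, and computes PSNR and MSE against a reference image. The patch size, the toy sinusoidal distortion, and the use of plain FFT features in place of the paper's bispectral features are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): patch-wise dual-domain feature
# extraction plus the PSNR/MSE metrics, using only NumPy.
import numpy as np

def extract_patches(img: np.ndarray, patch: int = 32) -> np.ndarray:
    """Split a grayscale image (H, W) into non-overlapping patches."""
    h, w = img.shape
    h, w = h - h % patch, w - w % patch          # crop to a multiple of the patch size
    img = img[:h, :w]
    return (img.reshape(h // patch, patch, w // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch, patch))

def dual_domain_features(patch: np.ndarray) -> np.ndarray:
    """Stack spatial intensities with FFT magnitude and phase of one patch.
    Plain FFT features stand in for the paper's bispectral features."""
    spec = np.fft.fft2(patch)
    return np.stack([patch, np.abs(spec), np.angle(spec)], axis=0)

def mse(ref: np.ndarray, out: np.ndarray) -> float:
    return float(np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, out: np.ndarray, peak: float = 255.0) -> float:
    e = mse(ref, out)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(0, 255, (256, 256))
    # Toy sinusoidal perturbation standing in for the paper's distortion model.
    x = np.arange(256)
    distorted = np.clip(clean + 10.0 * np.sin(2 * np.pi * x / 32), 0, 255)
    feats = np.stack([dual_domain_features(p) for p in extract_patches(distorted)])
    print("patch feature tensor:", feats.shape)   # (num_patches, 3, 32, 32)
    print("PSNR (dB):", round(psnr(clean, distorted), 2), "MSE:", round(mse(clean, distorted), 2))
```

In a full pipeline, the per-patch feature tensor would be fed to a small CNN that regresses the clean patch, and the corrected patches would be reassembled before computing PSNR, MSE, and SSIM.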

Keywords:

underwater image enhancement, phase distortion correction, bispectrum features, patch-wise convolutional neural network, FreqPatchNet, frequency domain analysis, deep learning, SSIM, PSNR, MSE, image reconstruction, sinusoidal attack modeling, underwater robotics, UCIQE, UIQM




How to Cite

[1] D. Chamkur V., “FreqPatchNet: A Dual-Domain Patch-Wise Fusion Network for Robust Phase Correction in Underwater Image Reconstruction”, Eng. Technol. Appl. Sci. Res., vol. 15, no. 5, pp. 26771–26776, Oct. 2025.
