Efficient Hybrid Network with Prompt Learning for Multi-Degradation Image Restoration
Abstract
Image restoration aims to recover clean images from degraded observations. Traditional restoration methods generalize poorly because they struggle to handle different types and levels of degradation. More recent research has therefore focused on multi-degradation image restoration, developing unified networks capable of handling various degradation types. One promising approach uses prompts to supply additional information about the degradation type and severity of the input image. Nonetheless, all-in-one image restoration incurs a high computational cost, making it difficult to deploy on resource-constrained devices. This research proposes a multi-degradation image restoration model based on PromptIR with lower computational cost and complexity. The proposed model is trained and tested on multiple datasets and remains effective for deraining, dehazing, and denoising. By unifying convolution, transformer, and dynamic prompt operations, the proposed model reduces FLOPs by 32.07% and the number of parameters by 27.87% while delivering comparable restoration quality, achieving a PSNR of 34.15 dB compared to 34.33 dB for the original architecture on the denoising task.
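To illustrate the prompt-learning idea summarized above, the sketch below shows, under assumption, how a dynamic prompt block in the spirit of PromptIR could be wired into a convolutional/transformer backbone: a small bank of learnable prompt components is weighted by a global descriptor of the incoming features and fused back through a lightweight convolution. The class name DynamicPromptBlock and all dimensions are illustrative and are not taken from the paper.

# Minimal PyTorch sketch (an assumption for illustration, not the authors'
# implementation) of a PromptIR-style dynamic prompt block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPromptBlock(nn.Module):
    def __init__(self, channels=48, num_prompts=5, prompt_dim=64, prompt_size=16):
        super().__init__()
        # Learnable prompt components encoding degradation-specific "hints".
        self.prompts = nn.Parameter(
            torch.randn(num_prompts, prompt_dim, prompt_size, prompt_size)
        )
        # Maps a pooled feature descriptor to per-component weights.
        self.to_weights = nn.Linear(channels, num_prompts)
        # Fuses the generated prompt with the incoming feature map.
        self.fuse = nn.Conv2d(channels + prompt_dim, channels, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Global average pooling -> softmax weights over the prompt bank.
        weights = F.softmax(self.to_weights(x.mean(dim=(2, 3))), dim=1)      # (B, N)
        # Input-conditioned prompt: weighted sum of components, resized to (H, W).
        prompt = torch.einsum("bn,ndhw->bdhw", weights, self.prompts)
        prompt = F.interpolate(prompt, size=(h, w), mode="bilinear", align_corners=False)
        # Concatenate along channels and fuse with a 3x3 convolution.
        return self.fuse(torch.cat([x, prompt], dim=1))

if __name__ == "__main__":
    feats = torch.randn(1, 48, 64, 64)            # decoder-stage feature map
    out = DynamicPromptBlock(48)(feats)
    print(out.shape)                              # torch.Size([1, 48, 64, 64])

In a U-shaped hybrid backbone, such a block would typically sit between decoder stages so the prompt can inject degradation-specific context at multiple scales.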