NSDNet: Non-aligned Supervision for Real Image Dehazing

Junkai Fan, Fei Guo, Xiang Li, Jianjun Qian, Jun Li*, and Jian Yang*
(* indicates the corresponding author)
PCA Lab, Nanjing University of Science and Technology

Paper Website

🔥 Updates

  • [18-08-2024] We have released the Phone-Hazy dataset (real-world hazy scenarios).

Video Demo (real-world hazy video)

Network Architecture

Overall pipeline of our non-aligned supervision framework with physical priors for real-world image dehazing. It consists of the mvSA module and the non-aligned supervision module. mvSA effectively estimates the infinite airlight A∞ in real scenes. Unlike supervised dehazing models, our framework does not require aligned ground truth.
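As a rough illustration of the physical prior, the standard atmospheric scattering model writes a hazy image as I = J · t + A∞ · (1 − t), where J is the clear scene, t the transmission map, and A∞ the infinite airlight. The sketch below shows this reconstruction step in PyTorch; the function and tensor names are illustrative placeholders, not the repository's actual interface.

```python
# Minimal sketch of the atmospheric scattering prior, assuming the network
# predicts a clear image J, a transmission map t, and an airlight A_inf.
import torch

def reconstruct_hazy(clear, transmission, airlight):
    """Re-synthesize a hazy image via I = J * t + A_inf * (1 - t)."""
    return clear * transmission + airlight * (1.0 - transmission)

# Random tensors stand in for network outputs in this illustration.
clear = torch.rand(1, 3, 256, 256)         # predicted haze-free image J
transmission = torch.rand(1, 1, 256, 256)  # predicted transmission map t
airlight = torch.rand(1, 3, 1, 1)          # estimated infinite airlight A_inf
hazy_rec = reconstruct_hazy(clear, transmission, airlight)  # broadcasts to (1, 3, 256, 256)
```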

Phone-Hazy Dataset

Our Phone-Hazy dataset contains 415 non-aligned image pairs covering four primary scenes: buildings, urban highways, rural cement roads, and outdoor landscapes. Haze levels mainly correspond to a visibility range of 0 to 50 meters.

The Phone-Hazy dataset can be downloaded here (extraction code: quf8).

Results on Smoke Dataset

Results on Phone-Hazy Dataset

Results on RTTS Dataset

🛠️ Setup

  • Ubuntu 18.04
  • Python == 3.9
  • PyTorch == 1.11 with CUDA 11.3
  • torchvision == 0.12.0
  • numpy == 1.22.3
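
You can quickly confirm that your environment matches the versions listed above with the snippet below (a convenience check, not part of the released code).

```python
# Print the installed versions and CUDA availability for comparison with the
# recommended setup above.
import numpy as np
import torch
import torchvision

print("torch:", torch.__version__)                    # expected 1.11.x
print("torchvision:", torchvision.__version__)        # expected 0.12.0
print("numpy:", np.__version__)                       # expected 1.22.3
print("CUDA available:", torch.cuda.is_available())   # True if CUDA 11.3 is set up
```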

🎓 Citation

If you are interested in this work, please consider citing:

@article{fan2023non,
  title={Non-aligned Supervision for Real Image Dehazing},
  author={Fan, Junkai and Guo, Fei and Qian, Jianjun and Li, Xiang and Li, Jun and Yang, Jian},
  journal={arXiv preprint arXiv:2303.04940},
  year={2023}
}

@inproceedings{fan2024driving,
  title={Driving-Video Dehazing with Non-Aligned Regularization for Safety Assistance},
  author={Fan, Junkai and Weng, Jiangwei and Wang, Kun and Yang, Yijun and Qian, Jianjun and Li, Jun and Yang, Jian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={26109--26119},
  year={2024}
}

Acknowledgment

This code is based on CycleGAN. We thank the authors for their outstanding work.

Contact

If you have any questions or suggestions, please contact [email protected].