Official PyTorch implementation of DeepLIR. The code is adapted from this repo.
Repository structure:

- `helper`: helper functions.
- `models`: files defining the ADMM model and the U-Net, a U-Net architecture with several modifications, including transformer attention and anti-aliasing down/up-sampling (see the conceptual sketch after this list).
- `samples_images`: sample images for testing the code.
- `train.py`: training code.
- `result_test.py`: evaluation script.
- `Reconstruction_demo.ipynb`: Jupyter notebook for the reconstruction demo.
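For orientation, below is a minimal plug-and-play ADMM sketch of the general scheme the `models` directory implements: a quadratic data term solved in closed form in the Fourier domain, with the proximal/denoising step handled by a learned network (in DeepLIR's setting, the modified U-Net plays that role). This is a conceptual illustration only, not the repo's actual code; `admm_deconv`, its arguments, and the penalty weight `rho` are placeholders.

```python
# Conceptual sketch of plug-and-play ADMM deconvolution -- NOT the repo's
# actual implementation. Names and defaults here are placeholders.
import torch
import torch.fft as fft

def admm_deconv(b, psf, denoiser, rho=1e-2, iters=5):
    """Reconstruct an image from measurement `b` given the system PSF.

    b, psf   : real (H, W) tensors (psf assumed already padded to b's size)
    denoiser : callable mapping an (H, W) image to a denoised image
    """
    H = fft.fft2(fft.ifftshift(psf))   # transfer function of the forward model
    B = fft.fft2(b)
    x = b.clone()                      # primal variable (the image estimate)
    z = b.clone()                      # auxiliary (denoised) variable
    u = torch.zeros_like(b)            # scaled dual variable
    for _ in range(iters):
        # x-update: least-squares data term, closed form in the Fourier domain
        rhs = H.conj() * B + rho * fft.fft2(z - u)
        x = fft.ifft2(rhs / (H.abs() ** 2 + rho)).real
        # z-update: the proximal step is replaced by the learned denoiser
        z = denoiser(x + u)
        # dual ascent on the x = z consensus constraint
        u = u + x - z
    return x
```

Unrolling a small, fixed number of such iterations and training the denoiser end to end on measurement/ground-truth pairs is the general idea behind learned ADMM reconstruction.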
- Lensless Learning Dataset: the full training dataset.
- In the Wild Images: in addition, 'in the wild' images taken without a computer monitor can be found here.
You can download the model weights here and place them in the root directory.
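As a quick sanity check after downloading, the checkpoint can be inspected with `torch.load`; the filename `saved_model.pt` below is a placeholder for the actual downloaded file.

```python
import torch

# Placeholder filename -- substitute the name of the downloaded checkpoint.
state_dict = torch.load("saved_model.pt", map_location="cpu")
print(f"Loaded {len(state_dict)} entries")
# The weights are then loaded into the model built from models/, e.g.:
# model.load_state_dict(state_dict)
```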
If you use our work in your research or wish to refer to the benchmarks provided, please cite our paper as follows:
```bibtex
@InProceedings{Poudel_2024_WACV,
    author    = {Poudel, Arpan and Nakarmi, Ukash},
    title     = {DeepLIR: Attention-Based Approach for Mask-Based Lensless Image Reconstruction},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2024},
    pages     = {431-439}
}
```