This is an official PyTorch implementation of E2NeRF. Click here to see the video and supplementary materials on our project website.
The code is based on nerf-pytorch. Please refer to nerf-pytorch for the environment installation.
Download the dataset here. It contains the "data" folder for training and the "original data" folder.
In each scene's folder, the training images are in the "train" folder along with the corresponding event data "events.pt". The ground-truth images are in the "test" folder.
As in the original NeRF, the training and testing poses are in the "transform_train.json" and "transform_test.json" files. Note that at test time, we use the first pose of each view in "transform_test.json" to render the test images, and the ground-truth images are also rendered at this pose.
The structure follows the original NeRF's llff data, and the event data is in "events.pt".
For easy reading, we transform the event stream into event bins stored as an "events.pt" file, which can be loaded with PyTorch. The tensor has shape (view_number, bin_number, H, W), and each element is the number of events at that pixel (positive and negative values indicate polarity).
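A minimal sketch of how the binned tensor can be inspected; the sizes below (2 views, 4 bins, DAVIS346's 260x346 resolution) are example values only, and in practice you would load the provided file with `torch.load("events.pt")`:

```python
import torch

# Example-sized tensor standing in for the real events.pt contents.
views, bins, H, W = 2, 4, 260, 346
events = torch.zeros(views, bins, H, W)
events[0, 0, 10, 20] = 3.0   # three positive events at pixel (10, 20)
events[0, 0, 15, 25] = -2.0  # two negative events at pixel (15, 25)

bin_slice = events[0, 0]                       # one (H, W) event bin
pos = bin_slice.clamp(min=0).sum().item()      # total positive events
neg = (-bin_slice).clamp(min=0).sum().item()   # total negative events
```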
There are original images, and the code, for synthesizing the blurry images. Besides, we supply the original event data generated from v2e. We also provide the code to convert the ".txt" events to "events.pt" for E2NeRF training.
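The ".txt" → binned-tensor conversion can be sketched as below. The "t x y p" column order and the {0, 1} polarity encoding are assumptions about the v2e output, and `events_txt_to_bins` is a hypothetical helper, not the repository's actual script:

```python
import torch

def events_txt_to_bins(lines, H, W, n_bins, t_start, t_end):
    """Accumulate (t, x, y, p) events into signed per-bin count maps.

    `lines` is an iterable of "t x y p" strings; returns a
    (n_bins, H, W) tensor of signed event counts, where positive
    entries count ON events and negative entries count OFF events.
    """
    bins = torch.zeros(n_bins, H, W)
    for line in lines:
        t, x, y, p = line.split()
        t, x, y, p = float(t), int(x), int(y), int(p)
        if not (t_start <= t < t_end):
            continue  # drop events outside the exposure window
        # Map the timestamp to a bin index in [0, n_bins - 1].
        b = min(int((t - t_start) / (t_end - t_start) * n_bins), n_bins - 1)
        bins[b, y, x] += 1.0 if p == 1 else -1.0
    return bins
```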
We supply the original ".aedat4" data captured by a DAVIS346 camera, together with the processing code, in the folder. We also convert the event data into "events.pt" for training.
We have updated the EDI code in the repository. You can use it to deblur the images in the "train" folder with the corresponding "events.pt" data; the deblurred images are saved in the "images_for_colmap" folder. Then, you can use COLMAP to generate the poses, as in NeRF.
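The EDI (Event-based Double Integral) step can be sketched as follows: a blurry image is the temporal average of latent images over the exposure, and each latent image relates to the first one through the exponential of the accumulated events. This is a minimal discrete approximation; the threshold value `c` and the bin-wise discretization are illustrative assumptions, not the repository's exact implementation:

```python
import torch

def edi_deblur(blurry, event_bins, c=0.3):
    """Recover a latent image from a blurry one via the EDI model.

    blurry: (H, W) blurry intensity image.
    event_bins: (n_bins, H, W) signed event counts within the exposure.
    c: event contrast threshold (example value, scene-dependent).

    With E(t) the cumulative event count from the exposure start,
    L(t) = L(0) * exp(c * E(t)) and B = mean_t L(t), so
    L(0) = B / mean_t exp(c * E(t)).
    """
    cum = torch.cumsum(event_bins, dim=0)      # E(t) per bin
    integral = torch.exp(c * cum).mean(dim=0)  # mean_t exp(c * E(t))
    return blurry / integral
```

With no events inside the exposure, the correction factor is 1 and the image is returned unchanged, as expected.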
If you find this useful, please consider citing our paper:
@inproceedings{qi2023e2nerf,
  title={{E2NeRF}: Event Enhanced Neural Radiance Fields from Blurry Images},
  author={Qi, Yunshan and Zhu, Lin and Zhang, Yu and Li, Jia},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13254--13264},
  year={2023}
}
The overall framework is derived from nerf-pytorch. We appreciate the efforts of the contributors to this repository.