- PyTorch 1.13.0, torchvision 0.15.0. The code is tested with Python 3.8 and CUDA 11.0.
- A GPU with sufficient memory for training.
- We use the HCI 4D LF benchmark for training and evaluation. Please refer to the benchmark website for details.
.
├── dataset
│   ├── training
│   └── validation
├── Figure
│   ├── paper_picture
│   └── hardware_picture
├── Hardware
│   ├── L3FNet
│   │   ├── bit_files
│   │   ├── hwh_files
│   │   └── project_code
│   ├── Net_prune
│   │   ├── bit_files
│   │   └── hwh_files
│   ├── Net_w2bit
│   │   ├── bit_files
│   │   └── hwh_files
│   └── Net_w8bit
│       ├── bit_files
│       └── hwh_files
├── implement
│   ├── L3FNet_implementation
│   └── data_preprocessing
├── jupyter
│   ├── network_execution_scripts
│   └── algorithm_implementation_scripts
├── model
│   ├── network_functions
│   └── regular_functions
├── param
│   └── checkpoints
└── Results
    ├── our_network
    │   ├── Net_Full
    │   └── Net_Quant
    ├── Necessity_analysis
    │   ├── Net_3D
    │   ├── Net_99
    │   └── Net_Undpp
    └── Performance_improvement_analysis
        ├── Net_Unprune
        ├── Net_8bit
        ├── Net_w2bit
        ├── Net_w8bit
        └── Net_prune
- Set the hyper-parameters in parse_args() if needed. We have provided our default settings in the released code (a minimal sketch of the argument setup is shown after this list).
- You can train the network by calling implement.py and setting the --mode argument to train, for example:
python ../implement/implement.py --net Net_Full --n_epochs 3000 --mode train --device cuda:1
- Checkpoints will be saved to ./param/'NetName'.
- After loading the weight file for the network you want to evaluate, you can call implement.py with the --mode argument set to valid or test.
- The result files (e.g., scene_name.pfm) will be saved to ./Results/'NetName' (a small helper for reading these files is sketched after this list).
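As a reference for the hyper-parameter setup mentioned above, here is a minimal sketch of what parse_args() could look like. It assumes only the flags that appear in the training command (--net, --n_epochs, --mode, --device); the defaults and help strings are illustrative, and the parse_args() in the released code is the authoritative version.

```python
import argparse

def parse_args():
    # Hypothetical sketch of the hyper-parameter setup; the actual
    # parse_args() in the released code may define more options.
    parser = argparse.ArgumentParser(description="L3FNet training / evaluation")
    parser.add_argument('--net', type=str, default='Net_Full',
                        help="network variant, e.g. Net_Full or Net_Quant")
    parser.add_argument('--mode', type=str, default='train',
                        choices=['train', 'valid', 'test'],
                        help="run training, validation, or testing")
    parser.add_argument('--n_epochs', type=int, default=3000,
                        help="number of training epochs")
    parser.add_argument('--device', type=str, default='cuda:0',
                        help="compute device, e.g. cuda:1 or cpu")
    return parser.parse_args()

if __name__ == '__main__':
    print(parse_args())
```

For validation or testing, the same script is invoked with --mode valid or --mode test.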
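The .pfm result files are standard Portable Float Map images. The helper below is a small sketch written from the PFM specification (it is not part of the repository) for loading a predicted disparity map into NumPy; the example path is illustrative.

```python
import numpy as np

def read_pfm(path):
    """Load a PFM image (e.g., a predicted disparity map) as a NumPy array."""
    with open(path, 'rb') as f:
        header = f.readline().decode('ascii').strip()
        if header not in ('PF', 'Pf'):
            raise ValueError('Not a PFM file: %s' % path)
        channels = 3 if header == 'PF' else 1          # 'PF' = color, 'Pf' = grayscale
        width, height = map(int, f.readline().decode('ascii').split())
        scale = float(f.readline().decode('ascii').strip())
        endian = '<' if scale < 0 else '>'             # negative scale means little-endian
        data = np.frombuffer(f.read(), dtype=endian + 'f4')
        shape = (height, width, channels) if channels == 3 else (height, width)
        return np.flipud(data.reshape(shape))          # PFM stores rows bottom-to-top

# Example (illustrative path):
# disparity = read_pfm('./Results/Net_Full/scene_name.pfm')
```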
- ZCU104 platform
- A memory card with PYNQ installed. For details on initializing PYNQ on the ZCU104, please refer to the Chinese version of the "PYNQ" blog. (A sketch of loading a bitstream with the PYNQ Overlay API is shown below.)
- Vivado tool kit (Vivado, HLS, etc.)
- An Ubuntu machine with more than 16 GB of memory (the Vivado tools run faster on Ubuntu)
See './Figure/hardware_picture/top.pdf'
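On the ZCU104 side, the generated .bit files and their matching .hwh files (from ./Hardware/<NetName>/bit_files and hwh_files) are loaded through the standard PYNQ Overlay API. The snippet below is a minimal sketch; the on-board directory and file name are assumptions.

```python
from pynq import Overlay

# Assumption: the chosen .bit file and the .hwh file with the same base name
# have been copied into one directory on the board; the path is an example.
overlay = Overlay('/home/xilinx/L3FNet/L3FNet.bit')

# List the IP cores exposed by the block design to confirm the overlay loaded.
print(list(overlay.ip_dict.keys()))
```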
If you find this work helpful, please consider citing:
Our paper is currently under submission.
You are welcome to raise issues or email Chuanlun Zhang ([email protected] or [email protected]) with any questions regarding this work.