This is a lane detection model. It builds on the project by Zhihu author 梦里寻梦 and improves the dataset creation, dataset checking, and model testing workflows.
Zhihu article: https://zhuanlan.zhihu.com/p/608319948
Original author's GitHub: https://github.com/cfzd/Ultra-Fast-Lane-Detection
PyTorch implementation of the paper "Ultra Fast Structure-aware Deep Lane Detection".
[July 18, 2022] Updates: The new version of our method has been accepted by TPAMI 2022. Code is available here.
[June 28, 2021] Updates: we will release an extended version, which improves the F1 score on CULane by 6.3 points with the ResNet-18 backbone compared with the ECCV version.
Updates: Our paper has been accepted by ECCV 2020.
The evaluation code is modified from SCNN and Tusimple Benchmark.
Caffe model and prototxt can be found here.
Please see INSTALL.md
First of all, please modify `data_root` and `log_path` in `configs/culane.py` or `configs/tusimple.py` according to your environment. `data_root` is the path of your CULane or Tusimple dataset. `log_path` is where tensorboard logs, trained models, and code backups are stored. It should be placed outside of this project.
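As a minimal sketch (assuming the configs are plain Python assignment files, as in the upstream repo; the paths below are placeholders, not real defaults), the two entries you need to edit in `configs/culane.py` might look like this:

```python
# configs/culane.py (excerpt) -- placeholder paths, replace with your own
data_root = '/path/to/CULane'           # root directory of the CULane dataset
log_path = '/path/to/logs/ufld_culane'  # keep this outside of the project directory
```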
For single gpu training, run
python train.py configs/path_to_your_config
For multi-gpu training, run
sh launch_training.sh
or
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py configs/path_to_your_config
If the pretrained torchvision model has not been downloaded yet, multi-gpu training may trigger multiple simultaneous downloads. You can first download the corresponding model manually, and then restart the multi-gpu training.
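One way to avoid this is to warm the local torchvision cache once before launching multi-gpu training. The following is a minimal sketch, assuming your config uses the ResNet-18 backbone as in the provided models:

```python
# Download the ImageNet-pretrained backbone once, so that every GPU worker later
# finds it in the local torchvision cache instead of downloading it again.
import torchvision

try:
    # newer torchvision (>= 0.13) uses the `weights` argument
    torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1)
except AttributeError:
    # older torchvision versions use the `pretrained` flag
    torchvision.models.resnet18(pretrained=True)
```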
Since our code has an auto-backup function that copies all code to `log_path` according to the .gitignore, additional temp files might also be copied if they are not filtered by .gitignore, which may block the execution if the temp files are large. So you should keep the working directory clean.
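For intuition, here is an illustrative sketch of a gitignore-aware backup (not the repo's actual implementation); it shows why untracked temp files that are not listed in .gitignore end up in the copy as well:

```python
# Illustrative sketch of a gitignore-aware code backup (not the repo's actual code).
import os
import shutil
import subprocess

def backup_code(work_dir, log_path):
    # `git ls-files --cached --others --exclude-standard` lists tracked files plus
    # untracked files that are NOT ignored, so un-ignored temp files get copied too.
    files = subprocess.check_output(
        ["git", "ls-files", "--cached", "--others", "--exclude-standard"],
        cwd=work_dir, text=True,
    ).splitlines()
    for rel in files:
        dst = os.path.join(log_path, "code_backup", rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(os.path.join(work_dir, rel), dst)
```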
Besides config-file settings, we also support command-line overrides. You can override a setting like
python train.py configs/path_to_your_config --batch_size 8
The `batch_size` will then be set to 8 during training.
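As an illustration of the precedence (a sketch of the general idea, not the repo's actual argument handling), a command-line value simply overwrites the value loaded from the config file:

```python
# Sketch of loading a Python config file with command-line overrides (illustrative only).
import argparse
import importlib.util

def load_config(path):
    # Load a Python config file such as configs/culane.py as a module.
    spec = importlib.util.spec_from_file_location("config", path)
    cfg = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(cfg)
    return cfg

parser = argparse.ArgumentParser()
parser.add_argument("config")
parser.add_argument("--batch_size", type=int, default=None)
args = parser.parse_args()

cfg = load_config(args.config)
if args.batch_size is not None:
    cfg.batch_size = args.batch_size  # the command line wins over the config file
```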
To visualize the log with tensorboard, run
tensorboard --logdir log_path --bind_all
We provide two trained ResNet-18 models on CULane and Tusimple.
| Dataset | Metric (paper) | Metric (this repo) | Avg FPS on GTX 1080Ti | Model |
|---|---|---|---|---|
| Tusimple | 95.87 | 95.82 | 306 | GoogleDrive/BaiduDrive (code: bghd) |
| CULane | 68.4 | 69.7 | 324 | GoogleDrive/BaiduDrive (code: w9tw) |
For evaluation, run
mkdir tmp
# This is a bad example; you should put the temp files outside the project.
python test.py configs/culane.py --test_model path_to_culane_18.pth --test_work_dir ./tmp
python test.py configs/tusimple.py --test_model path_to_tusimple_18.pth --test_work_dir ./tmp
As with training, multi-gpu evaluation is also supported.
We provide a script to visualize the detection results. Run the following commands to visualize on the testing set of CULane and Tusimple.
python demo.py configs/culane.py --test_model path_to_culane_18.pth
# or
python demo.py configs/tusimple.py --test_model path_to_tusimple_18.pth
Since the testing set of Tusimple is not ordered, the visualized video might look bad and we do not recommend doing this.
To test the runtime, please run
python speed_simple.py
# this will test the speed with a simple protocol and requires no additional dependencies
python speed_real.py
# this will test the speed with real video or camera input
It will loop 100 times and calculate the average runtime and fps in your environment.
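For reference, a minimal sketch of such a timing protocol (illustrative only, not the repo's speed_simple.py; the 288x800 input size and the stand-in backbone are assumptions) looks like this:

```python
# Minimal sketch of a 100-iteration timing loop (illustrative, not speed_simple.py).
import time
import torch
import torchvision

net = torchvision.models.resnet18().cuda().eval()  # stand-in for the lane detection model
x = torch.randn(1, 3, 288, 800).cuda()             # assumed input resolution

with torch.no_grad():
    for _ in range(10):                 # warm-up iterations
        net(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):                # loop 100 times, as described above
        net(x)
    torch.cuda.synchronize()
    avg = (time.time() - start) / 100

print(f"avg runtime: {avg * 1000:.2f} ms, fps: {1.0 / avg:.1f}")
```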
@InProceedings{qin2020ultra,
author = {Qin, Zequn and Wang, Huanyu and Li, Xi},
title = {Ultra Fast Structure-aware Deep Lane Detection},
booktitle = {The European Conference on Computer Vision (ECCV)},
year = {2020}
}
@ARTICLE{qin2022ultrav2,
author={Qin, Zequn and Zhang, Pengyi and Li, Xi},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Ultra Fast Deep Lane Detection With Hybrid Anchor Driven Ordinal Classification},
year={2022},
volume={},
number={},
pages={1-14},
doi={10.1109/TPAMI.2022.3182097}
}
Thanks to zchrissirhcz for contributing the compile tool for CULane, to KopiSoftware for contributing the speed test, and to ustclbh for testing on the Windows platform.