Here are my steps trying to reproduce CTRL; I want to know if anything is wrong #161

Open
20210726 opened this issue Sep 21, 2023 · 12 comments

@20210726

Here are my steps trying to reproduce CTRL. I want to know: is anything wrong?

Especially step 2. And is the config file ‘fsd_base_vehicle.yaml’ correct?

1. Prepare Waymo data (I only use part of the Waymo dataset).
1.1 Use my Python script to generate train.txt, val.txt, test.txt and idx2timestamp.pkl, idx2contextname.pkl.
Then cp train.txt val.txt test.txt to ./data/waymo/kitti_format/ImageSets/
and cp idx2timestamp.pkl idx2contextname.pkl to ./data/waymo/kitti_format/
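For anyone curious, a minimal sketch of how such files could be generated (not my exact script; the 7-digit index convention of split prefix + 3-digit sequence + 3-digit frame and the pickle layouts are assumptions, verify them against mmdet3d's kitti_format converter):

# Hedged sketch: build the ImageSets txt files and the idx->timestamp /
# idx->context-name pickles from raw Waymo tfrecords.
import glob
import os
import pickle

import tensorflow as tf
from waymo_open_dataset import dataset_pb2

def build_split(tfrecord_dir, prefix):
    idx2timestamp, idx2contextname, indices = {}, {}, []
    paths = sorted(glob.glob(os.path.join(os.path.expanduser(tfrecord_dir), '*.tfrecord')))
    for seq_id, path in enumerate(paths):
        for frame_id, data in enumerate(tf.data.TFRecordDataset(path, compression_type='')):
            frame = dataset_pb2.Frame()
            frame.ParseFromString(data.numpy())
            idx = f'{prefix}{seq_id:03d}{frame_id:03d}'  # assumed index format
            idx2timestamp[idx] = frame.timestamp_micros
            idx2contextname[idx] = frame.context.name
            indices.append(idx)
    return indices, idx2timestamp, idx2contextname

train_idx, idx2ts, idx2ctx = build_split('~/dataset/waymo/waymo_format/training', prefix='0')
with open('train.txt', 'w') as f:
    f.write('\n'.join(train_idx))
with open('idx2timestamp.pkl', 'wb') as f:
    pickle.dump(idx2ts, f)
with open('idx2contextname.pkl', 'wb') as f:
    pickle.dump(idx2ctx, f)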
1.2 python tools/create_data.py --dataset waymo --root-path ./data/waymo/ --out-dir ./data/waymo/ --workers 128 --extra-tag waymo
[screenshot]
Step 1: Generate train_gt.bin once and for all (Waymo bin format).
python ./tools/ctrl/generate_train_gt_bin.py
This generates the file 'train_gt.bin':
[screenshot]
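For a quick sanity check, the bin file is a serialized metrics_pb2.Objects proto from waymo_open_dataset, so you can count the ground-truth objects like this (a sketch; adjust the path to wherever your train_gt.bin landed):

# Sketch: count the objects stored in a Waymo-format bin file.
from waymo_open_dataset.protos import metrics_pb2

objects = metrics_pb2.Objects()
with open('train_gt.bin', 'rb') as f:  # adjust path
    objects.ParseFromString(f.read())
print('num objects:', len(objects.objects))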
python ./tools/ctrl/extract_poses.py
This generates the files context2timestamp.pkl and pose.pkl:
[screenshot]
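If you want to cross-check pose.pkl, the per-frame ego pose in the raw data is frame.pose.transform, a row-major 4x4 world-from-vehicle matrix (a sketch; the actual pickle layout is whatever extract_poses.py writes):

# Sketch: read the ego pose of every frame in one tfrecord.
import numpy as np
import tensorflow as tf
from waymo_open_dataset import dataset_pb2

for data in tf.data.TFRecordDataset('segment-xxx.tfrecord', compression_type=''):
    frame = dataset_pb2.Frame()
    frame.ParseFromString(data.numpy())
    pose = np.array(frame.pose.transform).reshape(4, 4)  # world <- vehicle
    print(frame.timestamp_micros, pose[:3, 3])  # per-frame ego translation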

Step 2: Use ImmortalTracker to generate tracking results on the training split (bin file format).
Modify the files ego_info.py and time_stamp.py like this:
[screenshot]
Modify the file waymo_convert_detection.sh like this:
[screenshot]
Then:
bash preparedata/waymo/waymo_preparedata.sh ~/dataset/waymo/waymo_format/
This generates files like this:
[screenshot]

bash preparedata/waymo/waymo_convert_detection.sh ~/dataset/waymo/waymo_format/train_gt.bin CTRL_FSD_TTA
This generates files like this in data/waymo/training/detection/CTRL_FSD_TTA/dets:
[screenshot]
Modify the file run_mot.sh like this:
[screenshot]

Then:
bash run_mot.sh
This generates a file like this:
[screenshot]
Step 3: Generate track input for training
Modify the file ‘fsd_base_vehicle.yaml’ like this (pred.bin was generated in Step 2):
[screenshot]
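Before running the next command, it is worth checking that the path in the yaml really points at the pred.bin from Step 2 (a sketch; 'bin_path' is an assumed key name, check your copy of fsd_base_vehicle.yaml for the real one):

# Sketch: verify the tracking-result path configured in the yaml exists.
import os
import yaml

with open('./tools/ctrl/data_configs/fsd_base_vehicle.yaml') as f:
    cfg = yaml.safe_load(f)
bin_path = cfg.get('bin_path')  # assumed key holding the pred.bin path
print('bin file:', bin_path, 'exists:', os.path.isfile(str(bin_path)))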
python ./tools/ctrl/generate_track_input.py ./tools/ctrl/data_configs/fsd_base_vehicle.yaml --process 1
This generates files like this:
[screenshot]

Step 4: Assign candidate GT tracks
python ./tools/ctrl/generate_candidates.py ./tools/ctrl/data_configs/fsd_base_vehicle.yaml --process 1

[screenshot]
Step 5: Begin training
bash tools/dist_train.sh configs/ctrl/ctrl_veh_24e.py 1 --no-validate

Originally posted by @20210726 in #132 (comment)

@Abyssaledge
Collaborator

Thanks for your interesting and detailed post. I will check once I am free.

@rockywind

@Abyssaledge @20210726
Hi, I followed the guide from the link, and I met this error when training the model.

ries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
  File "/opt/conda/envs/sst/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/sst/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/SHFP12/xiaoquan.wang/01_bev/SST/tools/train_track.py", line 231, in <module>
    main()
  File "/SHFP12/xiaoquan.wang/01_bev/SST/tools/train_track.py", line 221, in main
    train_model(
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/apis/train.py", line 41, in train_model
    train_detector(
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmdet/datasets/dataset_wrappers.py", line 151, in __getitem__
    return self.dataset[idx % self._ori_len]
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/datasets/waymo_tracklet_dataset.py", line 284, in __getitem__
    data = self.prepare_train_data(idx)
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/datasets/waymo_tracklet_dataset.py", line 218, in prepare_train_data
    example = transform(example)
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/datasets/pipelines/tracklet_pipelines.py", line 156, in __call__
    assert len(points_list) == len(tracklet) == len(pose_list)
AssertionError

This is my script.

DIR=ctrl
CONFIG=ctrl_veh_24e_demo
WORK=work_dirs
bash tools/dist_train.sh configs/$DIR/$CONFIG.py 4 --work-dir ./$WORK/$CONFIG/ --no-validate
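For reference, the failing check is the assert in tracklet_pipelines.py; printing the three lengths right above it shows which input is short for the offending tracklet (a debugging sketch using the names from the traceback):

# Debugging aid: add just above the assert in tracklet_pipelines.py.
print('points:', len(points_list),
      'tracklet:', len(tracklet),
      'poses:', len(pose_list))
assert len(points_list) == len(tracklet) == len(pose_list)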

@SakuraRiven

@20210726 Great introduction! The only thing I'm concerned about is the train_gt.bin in step 2, which should be replaced with the xxx.bin from our own detector when converting the format for MOT.

@Abyssaledge
Collaborator

@20210726 Truly sorry for the late reply. I quickly went through your introduction. The pipeline is basically right, but one point needs to be modified:
If you are generating the training data, you do not need max_time_since_update: 10 in the tracking config, which will retain many boxes in the generated xx.bin file and is likely to lead to an oversize error of the bin file.
max_time_since_update: 10 should only be adopted in generating training data.
Many thanks to all of you for the discussions!!! @rockywind @SakuraRiven @20210726

@SakuraRiven

@Abyssaledge Are there some mistakes? "If you are generating the training data, you do not need the xxx" vs. "xxx should only be adopted in generating training data"

@Abyssaledge
Collaborator

Abyssaledge commented Oct 22, 2023

@SakuraRiven The introduction above uses the tracking config immortal_for_ctrl_keep_10.yaml to generate training data. This config enables max_time_since_update: 10 by default, which means we will keep adding virtual boxes (at most 10) if the tracker loses an object. This is not wrong, but it may lead to too many boxes in the training set, greatly slowing the processing.
Thus I recommend disabling max_time_since_update: 10 (setting it to 0) when generating the training data.
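For example, something like this before re-running run_mot.sh (a sketch; the exact location of the key inside the yaml may differ, check immortal_for_ctrl_keep_10.yaml):

# Sketch: set max_time_since_update to 0 before generating training data.
# The key is assumed to sit at the top level of the yaml; adjust if nested.
import yaml

path = 'immortal_for_ctrl_keep_10.yaml'  # adjust to your config path
with open(path) as f:
    cfg = yaml.safe_load(f)
cfg['max_time_since_update'] = 0  # 10 -> 0 for the training split
with open(path, 'w') as f:
    yaml.safe_dump(cfg, f)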

@Abyssaledge Abyssaledge added the good first issue Good for newcomers label Oct 22, 2023
@SakuraRiven

SakuraRiven commented Oct 23, 2023

If you are generating the training data, you do not need max_time_since_update: 10 in the tracking config, which will retain many boxes in the generated xx.bin file and is likely to lead to an oversize error of the bin file.
max_time_since_update: 10 should only be adopted in generating training data.

@Abyssaledge I see. So this should read "max_time_since_update: 10 should only be adopted in generating val and test data"?

@SakuraRiven

@Abyssaledge Another question: do we have to run extend_tracks.py for the training data generation? Considering that the tracklet annotations in the training set do not contain the first 10 frames, maybe it is not necessary?

@Abyssaledge
Collaborator

No, you do not need extend_tracks.py for training. @SakuraRiven
Btw, if you use max_time_since_update > 0 or extend_tracks.py, please remember to remove the empty predictions. Otherwise, there might be some false positives, especially for pedestrians and cyclists.
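A sketch of what removing the empty predictions could look like on a Waymo-format bin (assuming "empty" means a degenerate box with zero extent; adapt the criterion to how the virtual boxes are written in your case):

# Sketch: drop zero-extent boxes from a Waymo bin file.
from waymo_open_dataset.protos import metrics_pb2

objects = metrics_pb2.Objects()
with open('pred.bin', 'rb') as f:
    objects.ParseFromString(f.read())

kept = metrics_pb2.Objects()
for o in objects.objects:
    box = o.object.box
    if box.length > 0 and box.width > 0 and box.height > 0:
        kept.objects.append(o)  # repeated-field append copies the message

with open('pred_filtered.bin', 'wb') as f:
    f.write(kept.SerializeToString())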

@SimonSongg

Thanks for your interesting and detailed post. I will check once I am free.

Hi, thanks for your great work!

I have a question about reproducing the result. I followed the steps to generate the predicted track input in Step 3. Then, in Step 4, I used train_gt.bin to assign boxes to the predicted tracks. But it seems the predicted tracks from the previous steps already have ego motion applied, while the object positions in train_gt.bin do not. So the assignment results were weird (a very low average candidates per trk and a very high tracklet FP rate). I am wondering how to add ego motion to train_gt.bin so that the assignment is correct, or did I do something wrong?
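For reference, what I mean by "with ego motion": moving a box from the vehicle frame to the global frame is a rigid transform with the 4x4 pose from pose.pkl (a sketch, treating the pose rotation as planar for the heading):

# Sketch: apply an ego pose (world <- vehicle, 4x4 row-major) to one box.
import numpy as np

def vehicle_to_global(center_xyz, heading, pose):
    center = pose @ np.append(center_xyz, 1.0)  # homogeneous point
    ego_yaw = np.arctan2(pose[1, 0], pose[0, 0])  # yaw of ego rotation
    return center[:3], heading + ego_yaw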

Thanks in advance!

@shrek-fu

shrek-fu commented Apr 17, 2024

@20210726 Hi, which version and split of the Waymo data did you use? I will reproduce the CTRL results with your pipeline.

@xhjsdx

xhjsdx commented Dec 17, 2024

@Abyssaledge @20210726 Thanks for your discussion!
In Step 2, how can I get the CTRL_FSD_TTA?
[screenshot]
