diff --git a/EXPLORATION.md b/EXPLORATION.md
index e8698ab75..91fd54b2f 100644
--- a/EXPLORATION.md
+++ b/EXPLORATION.md
@@ -26,7 +26,7 @@
 | `--absgrad --grow_grad2d 2e-4` | 8m30s | 0.018s/im | 2.21 GB | 0.6251 | 20.68 | 0.587 | 0.89M |
 | `--absgrad --grow_grad2d 2e-4` (30k) | -- | 0.030s/im | 5.25 GB | 0.7442 | 24.12 | 0.291 | 2.62M |
 
-Note: default args means running `CUDA_VISIBLE_DEVICES=0 python simple_trainer.py --data_dir ` with:
+Note: default args means running `CUDA_VISIBLE_DEVICES=0 python simple_trainer.py default --data_dir ` with:
 
 - Garden ([Source](https://jonbarron.info/mipnerf360/)): `--result_dir results/garden`
 - U1 (a.k.a University 1 from [Source](https://localrf.github.io/)): `--result_dir results/u1 --data_factor 1 --grow_scale3d 0.001`
diff --git a/docs/source/examples/colmap.rst b/docs/source/examples/colmap.rst
index 13eb344f1..b6ae6eb3a 100644
--- a/docs/source/examples/colmap.rst
+++ b/docs/source/examples/colmap.rst
@@ -3,7 +3,7 @@ Fit a COLMAP Capture
 
 .. currentmodule:: gsplat
 
-The :code:`examples/simple_trainer.py` script allows you train a
+The :code:`examples/simple_trainer.py default` script allows you train a
 `3D Gaussian Splatting `_ model for novel view synthesis,
 on a COLMAP processed capture. This script follows the exact same logic with the
 `official implementation
@@ -15,7 +15,7 @@ Simply run the script under `examples/`:
 
 .. code-block:: bash
 
-    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py \
+    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py default \
         --data_dir data/360_v2/garden/ --data_factor 4 \
         --result_dir ./results/garden
diff --git a/docs/source/examples/large_scale.rst b/docs/source/examples/large_scale.rst
index 46db0bc49..3288eb512 100644
--- a/docs/source/examples/large_scale.rst
+++ b/docs/source/examples/large_scale.rst
@@ -35,7 +35,7 @@ The code for this example can be found under `examples/`:
 
 .. code-block:: bash
 
     # First train a 3DGS model
-    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py \
+    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py default \
         --data_dir data/360_v2/garden/ --data_factor 4 \
         --result_dir ./results/garden
diff --git a/docs/source/tests/eval.rst b/docs/source/tests/eval.rst
index 486cb39b3..9f2a2fdc0 100644
--- a/docs/source/tests/eval.rst
+++ b/docs/source/tests/eval.rst
@@ -17,7 +17,7 @@ Evaluation
 | gsplat-30k (4 GPUs) | 28.91 | 0.871 | 0.135 | **2.0 GB** | **11m28s** |
 +---------------------+-------+-------+-------+------------------+------------+
 
-This repo comes with a standalone script (:code:`examples/simple_trainer.py`) that reproduces
+This repo comes with a standalone script (:code:`examples/simple_trainer.py default`) that reproduces
 the `Gaussian Splatting `_ with exactly the same
 performance on PSNR, SSIM, LPIPS, and converged number of Gaussians.
 Powered by `gsplat`'s efficient CUDA implementation, the training takes up to
diff --git a/examples/benchmarks/basic.sh b/examples/benchmarks/basic.sh
index e804285dc..0c72c0c07 100644
--- a/examples/benchmarks/basic.sh
+++ b/examples/benchmarks/basic.sh
@@ -11,14 +11,14 @@ do
     echo "Running $SCENE"
 
     # train without eval
-    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py --eval_steps -1 --disable_viewer --data_factor $DATA_FACTOR \
+    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py default --eval_steps -1 --disable_viewer --data_factor $DATA_FACTOR \
         --data_dir data/360_v2/$SCENE/ \
         --result_dir $RESULT_DIR/$SCENE/
 
     # run eval and render
     for CKPT in $RESULT_DIR/$SCENE/ckpts/*;
     do
-        CUDA_VISIBLE_DEVICES=0 python simple_trainer.py --disable_viewer --data_factor $DATA_FACTOR \
+        CUDA_VISIBLE_DEVICES=0 python simple_trainer.py default --disable_viewer --data_factor $DATA_FACTOR \
             --data_dir data/360_v2/$SCENE/ \
             --result_dir $RESULT_DIR/$SCENE/ \
             --ckpt $CKPT
diff --git a/examples/benchmarks/basic_4gpus.sh b/examples/benchmarks/basic_4gpus.sh
index 3c3ad334e..523421609 100644
--- a/examples/benchmarks/basic_4gpus.sh
+++ b/examples/benchmarks/basic_4gpus.sh
@@ -11,7 +11,7 @@ do
     echo "Running $SCENE"
 
     # train and eval at the last step
-    CUDA_VISIBLE_DEVICES=0,1,2,3 python simple_trainer.py --eval_steps -1 --disable_viewer --data_factor $DATA_FACTOR \
+    CUDA_VISIBLE_DEVICES=0,1,2,3 python simple_trainer.py default --eval_steps -1 --disable_viewer --data_factor $DATA_FACTOR \
         # 4 GPUs is effectively 4x batch size so we scale down the steps by 4x as well.
         # "--packed" reduces the data transfer between GPUs, which leads to faster training.
         --steps_scaler 0.25 --packed \
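
For quick reference, every call site above changes in the same way: the positional config name `default` is now passed to `simple_trainer.py` before any flags. A minimal before/after sketch, reusing the garden example from colmap.rst (the data and result paths are illustrative; substitute your own capture and flags):

    # old invocation: flags follow the script directly
    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py \
        --data_dir data/360_v2/garden/ --data_factor 4 \
        --result_dir ./results/garden

    # new invocation: the `default` config name comes first, then the same flags
    CUDA_VISIBLE_DEVICES=0 python simple_trainer.py default \
        --data_dir data/360_v2/garden/ --data_factor 4 \
        --result_dir ./results/garden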