# Customizable configs

Our dataclass configs allow you to easily plug in different permutations of models, dataloaders, modules, etc., and modify all parameters from a typed CLI supported by [tyro](https://pypi.org/project/tyro/).

### Base components

All basic, reusable config components can be found in `nerfstudio/configs/base_config.py`. The `Config` class at the bottom of the file is the top-level config and stores all of the sub-configs needed to get started with training.

You can browse this file and read the attribute annotations to see what configs are available and what each one specifies.
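To make the pattern concrete, here is a minimal, self-contained sketch of the dataclass-config idea used throughout the codebase. The class names (`Model`, `InstantiateConfig`, `ToyModelConfig`, `ToyModel`) are illustrative stand-ins, not the actual nerfstudio classes; see `base_config.py` for the real definitions.

```python
# A minimal sketch of the dataclass-config pattern: each config stores
# a _target class and can instantiate it via setup(). Names here are
# illustrative, not nerfstudio's actual classes.
from dataclasses import dataclass, field
from typing import Any, Type


class Model:
    """Stand-in for a configurable component."""

    def __init__(self, config: "InstantiateConfig") -> None:
        self.config = config


@dataclass
class InstantiateConfig:
    """A config that knows which class it instantiates."""

    _target: Type = Model

    def setup(self) -> Any:
        # Instantiate the class this config points to, passing the config in.
        return self._target(self)


class ToyModel(Model):
    config: "ToyModelConfig"


@dataclass
class ToyModelConfig(InstantiateConfig):
    # default_factory defers the lookup, so the config can point at a class
    # defined alongside (or after) it.
    _target: Type = field(default_factory=lambda: ToyModel)
    num_layers: int = 2


config = ToyModelConfig(num_layers=4)
model = config.setup()
```

With this pattern, swapping the model only requires swapping the config object; everything downstream calls `setup()` the same way.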
### Creating new configs

If you are interested in creating a brand new model or data format, you will need to create a corresponding config with the associated parameters you want to expose as configurable.

Let's say you want to create a new model called Nerfacto. You can create a new `Model` class that extends the base class as described [here](pipelines/models.ipynb). Before the model definition, you define the actual `NerfactoModelConfig`, which points to the `NerfactoModel` class (make sure to wrap the `_target` class in a `field` as shown below).

:::{admonition} Tip
:class: info

You can enable type checking/autocomplete on the config passed into `NerfactoModel` by annotating the config type below the class definition.
:::

```python
"""nerfstudio/models/nerfacto.py"""

@dataclass
class NerfactoModelConfig(ModelConfig):
    """Nerfacto Model Config"""

    _target: Type = field(default_factory=lambda: NerfactoModel)
    ...

class NerfactoModel(Model):
    """Nerfacto model

    Args:
        config: Nerfacto configuration to instantiate model
    """

    config: NerfactoModelConfig
    ...
```
The same logic applies to all other custom configs you want to create. For more examples, see `nerfstudio/data/dataparsers/nerfstudio_dataparsers.py` and `nerfstudio/data/datamanagers.py`.

:::{admonition} See Also
:class: seealso

For how to create the actual data and model classes that follow these configs, please refer to the [pipeline overview](pipelines/index.rst).
:::

### Updating method configs

If you are interested in creating a new model config, you will have to modify `nerfstudio/configs/method_configs.py`. This is where the configs for all implemented models are housed. You can browse this file to see how we construct various existing models by modifying the `Config` class and specifying new or modified default components.

For instance, say we created a brand new model called Nerfacto with an associated `NerfactoModelConfig`. We can register the following new `Config` by overriding the pipeline and optimizers attributes appropriately.

```python
"""nerfstudio/configs/method_configs.py"""

method_configs["nerfacto"] = Config(
    method_name="nerfacto",
    pipeline=VanillaPipelineConfig(
        model=NerfactoModelConfig(eval_num_rays_per_chunk=1 << 14),
    ),
    optimizers={
        "proposal_networks": {
            "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
            "scheduler": None,
        },
        "fields": {
            "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
            "scheduler": None,
        },
    },
)
```

After placing your new `Config` into the `method_configs` dictionary, you can provide a description for the model by updating the `descriptions` dictionary at the top of the file.
### Modifying from CLI

Oftentimes, you just want to play with the parameters of an existing model without specifying a new one. You can easily do so via the CLI. Below, we showcase some useful commands.

- List all existing models:

```bash
ns-train --help
```

- List all configurable parameters for `{METHOD_NAME}`:

```bash
ns-train {METHOD_NAME} --help
```

- Change the train/eval dataset:

```bash
ns-train {METHOD_NAME} --data DATA_PATH
```

- Enable the viewer:

```bash
ns-train {METHOD_NAME} --vis viewer
```

- See what options are available for the specified dataparser (e.g. blender-data):

```bash
ns-train {METHOD_NAME} {DATA_PARSER} --help
```

- Run with changed dataparser attributes and the viewer enabled:

```bash
# NOTE: the dataparser and its arguments go at the end of the command
ns-train {METHOD_NAME} --vis viewer {DATA_PARSER} --scale-factor 0.5
```
# Benchmarking workflow

We make it easy to benchmark your new NeRF against the standard Blender dataset.

## Launching training on the Blender dataset

To start, you will need to train your NeRF on each of the Blender objects.
To launch training jobs automatically on each of these items, you can call:

```bash
./nerfstudio/scripts/benchmarking/launch_train_blender.sh -m {METHOD_NAME} [-s] [-v {VIS}] [{GPU_LIST}]
```

Simply replace the arguments in brackets with the correct values.

- `-m {METHOD_NAME}`: Name of the method you want to benchmark (e.g. `nerfacto`, `mipnerf`).
- `-s`: Launch a single job per GPU.
- `-v {VIS}`: Use a visualizer other than wandb, which is the default. Other options are comet and tensorboard.
- `{GPU_LIST}`: (optional) Space-separated list of the GPUs you want to use on your machine. For instance, to use GPUs 0-3, pass in `0 1 2 3`. If left empty, the script will automatically find available GPUs and distribute the training jobs across them.

:::{admonition} Tip
:class: info

To view all the arguments and annotations, you can run `./nerfstudio/scripts/benchmarking/launch_train_blender.sh --help`.
:::

A full example would be:

- Specifying GPUs:

```bash
./nerfstudio/scripts/benchmarking/launch_train_blender.sh -m nerfacto 0 1 2 3
```

- Automatically finding available GPUs:

```bash
./nerfstudio/scripts/benchmarking/launch_train_blender.sh -m nerfacto
```
The script will automatically launch training on all of the items and save the checkpoints in an output directory with the experiment name and current timestamp.

## Evaluating trained Blender models

Once training converges, you can test your method with `nerfstudio/scripts/benchmarking/launch_eval_blender.sh`.

Say we ran a benchmark on 08-10-2022 for `instant-ngp`. By default, the train script will save the benchmarks in the following format:

```
outputs
└───blender_chair_2022-08-10
|   └───instant-ngp
|       └───2022-08-10_172517
|           └───config.yml
|               ...
└───blender_drums_2022-08-10
|   └───instant-ngp
|       └───2022-08-10_172517
|           └───config.yml
|               ...
...
```

If we wanted to run the benchmark on all the Blender data for the above example, we would run:

```bash
./nerfstudio/scripts/benchmarking/launch_eval_blender.sh -m instant-ngp -o outputs/ -t 2022-08-10_172517 [{GPU_LIST}]
```

The flags used in the benchmarking script are defined as follows:

- `-m`: Config name (e.g. `instant-ngp`). This should be the same as what was passed in for `-m` in the train script.
- `-o`: Base output directory where all of the benchmarks are stored (e.g. `outputs/`). Corresponds to `--output-dir` in the base `Config` for training.
- `-t`: Timestamp of the benchmark; also the identifier (e.g. `2022-08-10_172517`).
- `-s`: Launch a single job per GPU.
- `{GPU_LIST}`: (optional) Space-separated list of the GPUs you want to use on your machine. For instance, to use GPUs 0-3, pass in `0 1 2 3`. If left empty, the script will automatically find available GPUs and distribute the evaluation jobs across them.

The script will run the benchmarking simultaneously across all the objects in the Blender dataset and calculate PSNR/FPS/other stats. The results are saved as `.json` files in the `-o` directory with the following format:

```
outputs
└───instant-ngp
|   └───blender_chair_2022-08-10_172517.json
|   |   blender_ficus_2022-08-10_172517.json
|   |   ...
```
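If you want to aggregate the per-scene results yourself, a small helper like the following can average a stat across the JSON files. Note that the key path (`"results"` → `"psnr"`) is an assumption about the file schema; adjust it to match the files your eval script actually produces.

```python
# Hedged sketch: average the PSNR recorded in every benchmark JSON file
# under a directory. The "results" -> "psnr" key path is an assumed schema.
import json
from pathlib import Path


def mean_psnr(results_dir: str) -> float:
    """Average the PSNR found in each *.json file under results_dir."""
    psnrs = []
    for path in sorted(Path(results_dir).glob("*.json")):
        data = json.loads(path.read_text())
        psnrs.append(data["results"]["psnr"])  # assumed schema
    if not psnrs:
        raise FileNotFoundError(f"no benchmark JSON files in {results_dir}")
    return sum(psnrs) / len(psnrs)
```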
:::{admonition} Warning
:class: warning

Since this script runs multiple backgrounded processes concurrently, please note that the terminal logs may be messy.
:::
Debugging tools
====================

We document a few of the tooling systems and pipelines we support for debugging our models (e.g. profiling to debug speed).
As we grow, we hope to provide more extensive and up-to-date tooling support.

.. toctree::
    :maxdepth: 1

    local_logger
    profiling
    benchmarking
# Local writer

The `LocalWriter` simply outputs numerical stats to the terminal.
You can specify additional parameters to customize your logging experience.
A skeleton of the local writer config is defined below.

```python
"""nerfstudio/configs/base_config.py"""

@dataclass
class LocalWriterConfig(InstantiateConfig):
    """Local Writer config"""

    _target: Type = writer.LocalWriter
    enable: bool = False
    stats_to_track: Tuple[writer.EventName, ...] = (
        writer.EventName.ITER_TRAIN_TIME,
        ...
    )
    max_log_size: int = 10
```

You can customize the local writer by editing the following attributes:

- `enable`: Enable/disable the logger.
- `stats_to_track`: All the stats that you want to print to the terminal (see the list under `EventName` in `utils/writer.py`). You can add or remove any of the defined enums.
- `max_log_size`: How much content to print onto the screen (by default, only 10 lines are printed at a time). If 0, everything is printed without deleting any previous lines.
:::{admonition} Tip
:class: info

If you want to create a new stat to track, simply add the stat name to the `EventName` enum.
- Remember to call the appropriate put event (e.g. `put_scalar` from `utils/writer.py`) to place the value in the `EVENT_STORAGE`.
- Remember to add the new enum to the `stats_to_track` list.
:::
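The two steps in the tip above can be sketched as follows. The enum values, the `EVENT_STORAGE` shape, and `put_scalar`'s signature here are simplified stand-ins for the real definitions in `nerfstudio/utils/writer.py`, shown only to illustrate the pattern.

```python
# Illustrative sketch of the event-storage pattern: add a value to the
# EventName enum, then record values for it with a put_* call. This is a
# simplified stand-in, not the actual nerfstudio writer module.
from enum import Enum
from typing import List


class EventName(Enum):
    ITER_TRAIN_TIME = "Train Iter (time)"
    MY_NEW_STAT = "My New Stat"  # hypothetical new stat added to the enum


EVENT_STORAGE: List[dict] = []


def put_scalar(name: EventName, scalar: float, step: int) -> None:
    """Record a scalar event so the writer can print it later."""
    EVENT_STORAGE.append({"name": name.value, "event": scalar, "step": step})


put_scalar(EventName.MY_NEW_STAT, 0.42, step=100)
```

Once the new enum member is also listed in `stats_to_track`, the local writer will pick the recorded values up and print them.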
The local writer is easily configurable via CLI.
A few common commands:

- Disable the local writer:

```bash
ns-train {METHOD_NAME} --logging.local-writer.no-enable
```

- Disable line wrapping:

```bash
ns-train {METHOD_NAME} --logging.local-writer.max-log-size=0
```
# Code profiling support

We provide built-in performance profiling capabilities to make it easier for you to debug and assess the performance of your code.

#### In-house profiler

You can use our built-in profiler. It is enabled by default and prints its results when the program terminates. You can disable it via CLI with the flag `--logging.no-enable-profiler`.

The profiler computes the average total execution time for any function decorated with `@profiler.time_function`.
For instance, if you wanted to profile the total time it takes to generate rays given pixel and camera indices via the `RayGenerator` class, you might want to time its `forward()` function. In that case, you would add the decorator to the function.

```python
"""nerfstudio/model_components/ray_generators.py"""

class RayGenerator(nn.Module):

    ...

    @profiler.time_function  # <-- add the profiler decorator before the function
    def forward(self, ray_indices: TensorType["num_rays", 3]) -> RayBundle:
        # implementation here
        ...
```
Alternatively, you can also time parts of the code:

```python
...

def forward(self, ray_indices: TensorType["num_rays", 3]) -> RayBundle:
    # implementation here
    with profiler.time_function("code1"):
        # some code here
        ...

    with profiler.time_function("code2"):
        # some code here
        ...
    ...
```

At the end of the training run, the profiler prints the average execution time for every function and code block tagged with the profiler.
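Conceptually, such a profiler only needs to accumulate a running total and call count per tag. The sketch below shows one minimal way to implement the decorator form; it is illustrative only, not nerfstudio's actual profiler.

```python
# Minimal sketch of a @time_function-style profiler that accumulates
# per-function averages. Illustrative only, not nerfstudio's implementation.
import time
from collections import defaultdict
from functools import wraps

# function name -> [total_seconds, num_calls]
_records = defaultdict(lambda: [0.0, 0])


def time_function(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            record = _records[func.__name__]
            record[0] += time.monotonic() - start
            record[1] += 1
    return wrapper


@time_function
def generate_rays(num_rays: int) -> list:
    # stand-in workload for something like RayGenerator.forward()
    return list(range(num_rays))


for _ in range(3):
    generate_rays(1_000)

total, calls = _records["generate_rays"]
average = total / calls  # average seconds per call
```

At program exit, printing `_records` (totals divided by call counts) yields exactly the kind of per-tag average summary described above.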
:::{admonition} Tip
:class: info

Use this profiler if there are *specific/individual functions* whose timing you want to measure.
:::

#### Profiling with the PyTorch profiler

If you want to profile the training or evaluation code and track memory usage and CUDA kernel launches, consider using the [PyTorch profiler](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html).
It runs for a few selected step numbers, once with `CUDA_LAUNCH_BLOCKING=1` and once with `CUDA_LAUNCH_BLOCKING=0`.
The PyTorch profiler can be enabled with the `--logging.profiler=pytorch` flag.
The profiler outputs trace files to `{PATH_TO_MODEL_OUTPUT}/profiler_traces`, which can be loaded in Google Chrome by navigating to `chrome://tracing`.
#### Profiling with py-spy

If you want to profile the entire codebase, consider using [py-spy](https://github.com/benfred/py-spy).

Install py-spy:

```bash
pip install py-spy
```

To perform the profiling, you can either generate a flame graph or run a live top-down view of the profiler.

- Flame graph, with wandb logging and our in-house profiler disabled:

```bash
program="ns-train nerfacto -- --vis=wandb --logging.no-enable-profiler blender-data"
py-spy record -o {PATH_TO_OUTPUT_SVG} $program
```

- Top-down stats, running the same program configuration as above:

```bash
py-spy top $program
```

:::{admonition} Attention
:class: attention

When defining `program`, you will need to add an extra `--` before you specify your program's arguments.
:::