Merge branch 'main' into ruilong/buildtest
vye16 authored Oct 13, 2023
2 parents 62e83dc + ee9f1b6 commit 0c22860
Showing 5 changed files with 159 additions and 131 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/doc.yml
@@ -32,5 +32,5 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: _build/
force_orphan: true
cname: docs.gsplat.studio
if: github.event_name != 'pull_request'
# cname: ...
140 changes: 18 additions & 122 deletions README.md
@@ -1,143 +1,39 @@
# gsplat

Our version of differentiable gaussian rasterizer
[![Core Tests.](https://github.com/nerfstudio-project/gsplat/actions/workflows/core_tests.yml/badge.svg?branch=main)](https://github.com/nerfstudio-project/gsplat/actions/workflows/core_tests.yml)
[![Docs](https://github.com/nerfstudio-project/gsplat/actions/workflows/doc.yml/badge.svg?branch=main)](https://github.com/nerfstudio-project/gsplat/actions/workflows/doc.yml)

## Installation

Clone the repository and submodules with

```bash
git clone --recurse-submodules URL
```

For CUDA development, it is recommended to install with `BUILD_NO_CUDA=1`, which
disables compiling during pip install and instead uses JIT compiling on your
first run. The benefit of JIT compiling is that it compiles incrementally as
you modify your CUDA code, so it is much faster than recompiling through pip. Note
that the JIT-compiled library can be found under `~/.cache/torch_extensions/py*-cu*/`.
[http://www.gsplat.studio/](http://www.gsplat.studio/)

```bash
BUILD_NO_CUDA=1 pip install -e .[dev]
```

If you won't touch the underlying CUDA code, you can simply install with compilation enabled:

```bash
pip install -e .[dev]
```
gsplat is an open-source library for CUDA-accelerated rasterization of gaussians with Python bindings. It is inspired by the SIGGRAPH paper [3D Gaussian Splatting for Real-Time Rendering of Radiance Fields](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). This library contains the necessary components for efficient 3D-to-2D projection, sorting, and alpha compositing of gaussians, along with their associated backward passes for inverse rendering.
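
As intuition for the compositing step, here is a minimal sketch of front-to-back alpha blending for one pixel in plain PyTorch (illustrative only; `composite` is a made-up helper, and gsplat implements this as a fused CUDA kernel):

```python
import torch

# Sketch of front-to-back alpha compositing for a single pixel, assuming
# the gaussians are already projected to 2D and sorted by depth.
def composite(colors: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    # colors: (N, 3) RGB per gaussian; alphas: (N,) effective opacity
    # after applying the 2D gaussian falloff at this pixel.
    out = torch.zeros(3)
    transmittance = 1.0
    for color, alpha in zip(colors, alphas):
        out += transmittance * alpha * color
        transmittance *= 1.0 - float(alpha)
        if transmittance < 1e-4:  # early termination once nearly opaque
            break
    return out

print(composite(torch.rand(8, 3), torch.rand(8) * 0.5))
```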

## Development
![Teaser](/docs/source/imgs/training.gif?raw=true)

## Protect Main Branch over Pull Request

It is recommended to land code on the main branch through a PR rather than a direct push, since PR checks protect the main branch if the code breaks tests, while a direct push won't. Also, squash the commits before merging the PR so they don't clutter the git history.

The current tests triggered by a PR:
## Installation

- `.github/workflows/core_tests.yml`: Black formatting and pytest.
- `.github/workflows/doc.yml`: Doc build.
**Dependency**: Please install [PyTorch](https://pytorch.org/get-started/locally/) first.

Because we check for black formatting, it is recommended to run black before committing:

```bash
black . gsplat/ tests/ examples/
```

The easiest way is to install from PyPI; this will build the CUDA code **on the first run** (JIT):

```bash
pip install gsplat
```

Since the GitHub workflow containers have no GPU, the CUDA unit tests under `tests/` are not run on PRs, so it is recommended to check that the tests pass locally before committing:

```bash
pytest tests/  # run all tests
pytest tests/test_cov2d_bounds.py  # run a single test file
```

Or install from source; this will build the CUDA code during installation:

```bash
pip install git+https://github.com/nerfstudio-project/gsplat.git
```

Note that `pytest` discovers and runs all functions named `test_*`, so test functions should follow this naming pattern. See `test_cov2d_bounds.py` for an example.


## Build the Doc Locally
If you want to contribute to the docs, here is how to build them locally. The doc source lives in `docs/source` and the built docs land in `_build/`. For reference, here are some example documentation setups: [viser](https://github.com/nerfstudio-project/viser/tree/main/docs/source), [nerfstudio](https://github.com/nerfstudio-project/nerfstudio/tree/main/docs), [nerfacc](https://github.com/KAIR-BAIR/nerfacc/tree/master/docs/source).

```bash
pip install -e .[dev]
pip install -r docs/requirements.txt
sphinx-build docs/source _build
```



# Brief walkthrough
## Examples

The main Python bindings for rasterization are found by importing gsplat:
Fit a 2D image with 3D Gaussians.

```python
import gsplat
help(gsplat)
```

# clangd setup (for Neovim)

[clangd](https://clangd.llvm.org/) is a nice tool for providing completions,
type checking, and other helpful features in C++. It requires some extra effort
to get set up for CUDA development, but there are fortunately only three steps
here.

**First,** we should install a `clangd` extension for our IDE/editor.

For Neovim+lspconfig users this is easy: install `clangd` via Mason and add a
few setup lines in Lua:

```lua
require("lspconfig").clangd.setup{
capabilities = capabilities
}
```

**Second,** we need to generate a `.clangd` configuration file with the current
CUDA path argument.

Make sure you're in the right environment (with CUDA installed), and then from
the root of the repository, you can run:

```sh
echo "# Autogenerated, see .clangd_template\!" > .clangd && sed -e "/^#/d" -e "s|YOUR_CUDA_PATH|$(dirname $(dirname $(which nvcc)))|" .clangd_template >> .clangd
```

**Third,** we'll need a
[`compile_commands.json`](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
file.

If we're working on PyTorch bindings, one option is to generate this using
[`bear`](https://github.com/rizsotto/Bear):

```sh
sudo apt update
sudo apt install bear

# From the repository root, 3dgs-exercise/.
#
# This will save a file at 3dgs-exercise/compile_commands.json, which clangd
# should be able to detect.
bear -- pip install -e gsplat/

# Make sure the file is not empty!
cat compile_commands.json
```

Alternatively: if we're working directly in C (and don't need any PyTorch
binding stuff), we can generate via CMake:

```sh
# From 3dgs-exercise/csrc/build.
#
# This will save a file at 3dgs-exercise/csrc/build/compile_commands.json, which
# clangd should be able to detect.
cmake .. -DCMAKE_EXPORT_COMPILE_COMMANDS=on

# Make sure the file is not empty!
cat compile_commands.json
```

```bash
pip install -r examples/requirements.txt
python examples/simple_trainer.py
```
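
For intuition about what the trainer does, here is a self-contained toy version of the same idea: fitting a target image with isotropic 2D gaussians in plain PyTorch under additive blending. This is a sketch, not the actual `simple_trainer.py` code:

```python
import torch

# Toy version of the fitting loop: optimize a handful of isotropic 2D
# gaussians to reproduce a target image. Pure PyTorch with additive
# blending, not gsplat's CUDA rasterizer.
H, W, N = 32, 32, 16
ys = torch.linspace(0.0, 1.0, H)
xs = torch.linspace(0.0, 1.0, W)
yx = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
target = (yx[..., 0] > 0.5).float()[..., None].repeat(1, 1, 3)   # (H, W, 3)

means = torch.rand(N, 2, requires_grad=True)            # gaussian centers
log_stds = torch.full((N,), -3.0, requires_grad=True)   # log std devs
colors = torch.rand(N, 3, requires_grad=True)           # RGB per gaussian
opt = torch.optim.Adam([means, log_stds, colors], lr=1e-2)

for step in range(200):
    d2 = ((yx[:, :, None, :] - means) ** 2).sum(-1)      # (H, W, N)
    w = torch.exp(-0.5 * d2 / log_stds.exp() ** 2)       # gaussian falloff
    img = (w[..., None] * colors).sum(dim=2).clamp(0, 1) # (H, W, 3)
    loss = ((img - target) ** 2).mean()                  # photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final L2 loss: {loss.item():.4f}")
```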

**Known issues**
## Development and Contribution

- Including the torch extension headers currently raises an error:
  `In included file: use of undeclared identifier 'noinline'; did you mean 'inline'?`
This repository was born from the curiosity of people on the Nerfstudio team trying to understand a new rendering technique. We welcome contributions of any kind and are open to feedback, bug reports, and improvements to help expand the capabilities of this software. Please check [docs/DEV.md](docs/DEV.md) for more info about development.
129 changes: 129 additions & 0 deletions docs/DEV.md
@@ -0,0 +1,129 @@
# Development

## Installation

Clone the repository and submodules with

```bash
git clone --recurse-submodules URL
```

For CUDA development, it is recommended to install with `BUILD_NO_CUDA=1`, which
disables compiling during pip install and instead uses JIT compiling on your
first run. The benefit of JIT compiling is that it compiles incrementally as
you modify your CUDA code, so it is much faster than recompiling through pip. Note
that the JIT-compiled library can be found under `~/.cache/torch_extensions/py*-cu*/`.

```bash
BUILD_NO_CUDA=1 pip install -e .[dev]
```
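
For reference, the JIT path goes through PyTorch's `torch.utils.cpp_extension.load`; a rough sketch of the mechanism (hypothetical extension name and source file, not gsplat's actual build code):

```python
from torch.utils.cpp_extension import load

# Hypothetical example of PyTorch's JIT extension loading; gsplat's real
# sources and flags differ. The built artifact is cached under
# ~/.cache/torch_extensions/ and only recompiled when the sources change.
ext = load(name="my_ext", sources=["csrc/my_op.cu"], verbose=True)
```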

If you won't touch the underlying CUDA code, you can simply install with compilation enabled:

```bash
pip install -e .[dev]
```

## Protect Main Branch over Pull Request

It is recommended to land code on the main branch through a PR rather than a direct push, since PR checks protect the main branch if the code breaks tests, while a direct push won't. Also, squash the commits before merging the PR so they don't clutter the git history.

The current tests triggered by a PR:

- `.github/workflows/core_tests.yml`: Black formatting and pytest.
- `.github/workflows/doc.yml`: Doc build.

Because we check for black formatting, it is recommended to run black before committing:

```bash
black . gsplat/ tests/ examples/
```

Since the GitHub workflow containers have no GPU, the CUDA unit tests under `tests/` are not run on PRs, so it is recommended to check that the tests pass locally before committing:

```bash
pytest tests/  # run all tests
pytest tests/test_cov2d_bounds.py  # run a single test file
```

Note that `pytest` discovers and runs all functions named `test_*`, so test functions should follow this naming pattern. See `test_cov2d_bounds.py` for an example.
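
A minimal illustration of the pattern (a made-up test, not one from `tests/`; the skip guard matters because these tests need a GPU):

```python
import pytest
import torch

@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires CUDA")
def test_example_shapes():
    # pytest picks this up automatically because the name starts with `test_`.
    x = torch.rand(4, 3, device="cuda")
    assert x.shape == (4, 3)
```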

## Build the Doc Locally

If you want to contribute to the docs, here is how to build them locally. The doc source lives in `docs/source` and the built docs land in `_build/`. For reference, here are some example documentation setups: [viser](https://github.com/nerfstudio-project/viser/tree/main/docs/source), [nerfstudio](https://github.com/nerfstudio-project/nerfstudio/tree/main/docs), [nerfacc](https://github.com/KAIR-BAIR/nerfacc/tree/master/docs/source).

```bash
pip install -e .[dev]
pip install -r docs/requirements.txt
sphinx-build docs/source _build
```
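
To preview the result, you can serve the output folder locally, e.g. with `python -m http.server --directory _build`, and open `http://localhost:8000` in a browser.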

## Clangd setup (for Neovim)

[clangd](https://clangd.llvm.org/) is a nice tool for providing completions,
type checking, and other helpful features in C++. It requires some extra effort
to get set up for CUDA development, but there are fortunately only three steps
here.

**First,** we should install a `clangd` extension for our IDE/editor.

For Neovim+lspconfig users this is easy: install `clangd` via Mason and add a
few setup lines in Lua:

```lua
require("lspconfig").clangd.setup{
capabilities = capabilities
}
```

**Second,** we need to generate a `.clangd` configuration file with the current
CUDA path argument.

Make sure you're in the right environment (with CUDA installed), and then from
the root of the repository, you can run:

```sh
echo "# Autogenerated, see .clangd_template\!" > .clangd && sed -e "/^#/d" -e "s|YOUR_CUDA_PATH|$(dirname $(dirname $(which nvcc)))|" .clangd_template >> .clangd
```
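
This one-liner drops the comment lines from `.clangd_template` and substitutes your CUDA root (derived from the location of `nvcc`, e.g. `/usr/local/cuda`) for the `YOUR_CUDA_PATH` placeholder.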

**Third,** we'll need a
[`compile_commands.json`](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
file.

If we're working on PyTorch bindings, one option is to generate this using
[`bear`](https://github.com/rizsotto/Bear):

```sh
sudo apt update
sudo apt install bear

# From the repository root, 3dgs-exercise/.
#
# This will save a file at 3dgs-exercise/compile_commands.json, which clangd
# should be able to detect.
bear -- pip install -e gsplat/

# Make sure the file is not empty!
cat compile_commands.json
```

Alternatively: if we're working directly in C (and don't need any PyTorch
binding stuff), we can generate via CMake:

```sh
# From 3dgs-exercise/csrc/build.
#
# This will save a file at 3dgs-exercise/csrc/build/compile_commands.json, which
# clangd should be able to detect.
cmake .. -DCMAKE_EXPORT_COMPILE_COMMANDS=on

# Make sure the file is not empty!
cat compile_commands.json
```

<!--
**Known issues**
- The torch extensions include currently raises an error:
`In included file: use of undeclared identifier 'noinline'; did you mean 'inline'?` -->
3 changes: 3 additions & 0 deletions examples/requirements.txt
@@ -0,0 +1,3 @@
numpy
tyro
Pillow
16 changes: 8 additions & 8 deletions gsplat/cuda/csrc/sh.cuh
@@ -3,23 +3,23 @@

namespace cg = cooperative_groups;

__host__ __device__ const float SH_C0 = 0.28209479177387814f;
__host__ __device__ const float SH_C1 = 0.4886025119029199f;
__host__ __device__ const float SH_C2[] = {
__device__ const float SH_C0 = 0.28209479177387814f;
__device__ const float SH_C1 = 0.4886025119029199f;
__device__ const float SH_C2[] = {
1.0925484305920792f,
-1.0925484305920792f,
0.31539156525252005f,
-1.0925484305920792f,
0.5462742152960396f};
__host__ __device__ const float SH_C3[] = {
__device__ const float SH_C3[] = {
-0.5900435899266435f,
2.890611442640554f,
-0.4570457994644658f,
0.3731763325901154f,
-0.4570457994644658f,
1.445305721320277f,
-0.5900435899266435f};
__host__ __device__ const float SH_C4[] = {
__device__ const float SH_C4[] = {
2.5033429417967046f,
-1.7701307697799304f,
0.9461746957575601f,
@@ -30,7 +30,7 @@ __host__ __device__ const float SH_C4[] = {
-1.7701307697799304f,
0.6258357354491761f};

__host__ __device__ unsigned num_sh_bases(const unsigned degree) {
__device__ unsigned num_sh_bases(const unsigned degree) {
if (degree == 0)
return 1;
if (degree == 1)
@@ -43,7 +43,7 @@ __host__ __device__ unsigned num_sh_bases(const unsigned degree) {
}

template <int CHANNELS>
__host__ __device__ void sh_coeffs_to_color(
__device__ void sh_coeffs_to_color(
const unsigned degree,
const float3 &viewdir,
const float *coeffs,
@@ -118,7 +118,7 @@ __host__ __device__ void sh_coeffs_to_color(
}

template <int CHANNELS>
__host__ __device__ void sh_coeffs_to_color_vjp(
__device__ void sh_coeffs_to_color_vjp(
const unsigned degree,
const float3 &viewdir,
const float *v_colors,
