Cleanup README, Move some instructions to docs/DEV.md #33

Merged · 3 commits · Oct 13, 2023
33 changes: 33 additions & 0 deletions .github/workflows/publish.yml
@@ -0,0 +1,33 @@
# This workflow will upload a Python Package using twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

name: Upload Python Package

on:
  release:
    types: [created]
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.7'
      - name: Install dependencies
        run: |
          python -m pip install build twine
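      # Drops README content between the <!-- pypi-strip --> and <!-- /pypi-strip -->
      # markers so the PyPI long description only keeps tags PyPI can render.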
      - name: Strip unsupported tags in README
        run: |
          sed -i '/<!-- pypi-strip -->/,/<!-- \/pypi-strip -->/d' README.md
      - name: Build and publish
        env:
          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
        run: |
          BUILD_NO_CUDA=1 python -m build
          twine upload --username __token__ --password $PYPI_TOKEN dist/*
140 changes: 18 additions & 122 deletions README.md
@@ -1,143 +1,39 @@
# gsplat

Our version of differentiable gaussian rasterizer
[![Core Tests.](https://github.com/nerfstudio-project/gsplat/actions/workflows/core_tests.yml/badge.svg?branch=main)](https://github.com/nerfstudio-project/gsplat/actions/workflows/core_tests.yml)
[![Docs](https://github.com/nerfstudio-project/gsplat/actions/workflows/doc.yml/badge.svg?branch=main)](https://github.com/nerfstudio-project/gsplat/actions/workflows/doc.yml)

## Installation

Clone the repository and submodules with

```bash
git clone --recurse-submodules URL
```

For CUDA development, it is recommended to install with `BUILD_NO_CUDA=1`, which
disables compilation during `pip install` and instead JIT-compiles the CUDA code on
your first run. The benefit of JIT compilation is that it recompiles incrementally as
you modify your CUDA code, which is much faster than re-installing through pip. Note
that the JIT-compiled library can be found under `~/.cache/torch_extensions/py*-cu*/`.
[https://nerfstudio-project.github.io/gsplat/](https://nerfstudio-project.github.io/gsplat/)

```bash
BUILD_NO_CUDA=1 pip install -e .[dev]
```

If you won't touch the underlying CUDA code, you can simply install with compilation enabled:

```bash
pip install -e .[dev]
```
gsplat is an open-source library for CUDA-accelerated rasterization of gaussians with python bindings. It is inspired by the SIGGRAPH paper [3D Gaussian Splatting for Real-Time Rendering of Radiance Fields](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). This library contains the necessary components for efficient 3D-to-2D projection, sorting, and alpha compositing of gaussians, and their associated backward passes for inverse rendering.

## Development
![Teaser](/docs/source/imgs/training.gif?raw=true)

## Protect the Main Branch via Pull Requests

It is recommended to merge code into the main branch through a PR rather than a direct push, since the PR checks protect the main branch when the code breaks tests, while a direct push does not. Also squash the commits before merging the PR so they don't clutter the git history.

The current tests that will be triggered by a PR:
## Installation

- `.github/workflows/core_tests.yml`: Black formatting and pytest.
- `.github/workflows/doc.yml`: Doc build.
**Dependency**: Please install [PyTorch](https://pytorch.org/get-started/locally/) first.

Because we check for black formatting, it is recommended to run black before committing:
The easiest way is to install from PyPI. This will build the CUDA code **on the first run** (JIT).

```bash
black . gsplat/ tests/ examples/
pip install gsplat
```

Since the GitHub workflow containers have no GPU support, the CUDA unit tests under `tests/` are not run on PRs. It is therefore recommended to check that the tests pass locally before committing:
Or install from source. This will build the CUDA code during installation.

```bash
pytest tests/ # check for all tests
pytest tests/test_cov2d_bounds.py # check for a single test file.
pip install git+https://github.com/nerfstudio-project/gsplat.git
```

Note that `pytest` discovers and runs all functions named `test_*`, so you should name your
test functions in this pattern. See `test_cov2d_bounds.py` as an example.


## Build the Doc Locally
If you want to contribute to the docs, here is how to build them locally. The doc source can be found in `docs/source` and the built docs will be in `_build/`. For reference, here are some examples of documentation setups: [viser](https://github.com/nerfstudio-project/viser/tree/main/docs/source), [nerfstudio](https://github.com/nerfstudio-project/nerfstudio/tree/main/docs), [nerfacc](https://github.com/KAIR-BAIR/nerfacc/tree/master/docs/source).

```
pip install -e .[dev]
pip install -r docs/requirements.txt
sphinx-build docs/source _build
```



# Brief walkthrough
## Examples

The main python bindings for rasterization are found by importing gsplat
Fit a 2D image with 3D Gaussians.

```
import gsplat
help(gsplat)
```

# clangd setup (for Neovim)

[clangd](https://clangd.llvm.org/) is a nice tool for providing completions,
type checking, and other helpful features in C++. It requires some extra effort
to get set up for CUDA development, but there are fortunately only three steps
here.

**First,** we should install a `clangd` extension for our IDE/editor.

For Neovim+lspconfig users, this is very easy: we can simply install `clangd`
via Mason and add a few setup lines in Lua:

```lua
require("lspconfig").clangd.setup{
capabilities = capabilities
}
```

**Second,** we need to generate a `.clangd` configuration file with the current
CUDA path argument.

Make sure you're in the right environment (with CUDA installed), and then from
the root of the repository, you can run:

```sh
echo "# Autogenerated, see .clangd_template\!" > .clangd && sed -e "/^#/d" -e "s|YOUR_CUDA_PATH|$(dirname $(dirname $(which nvcc)))|" .clangd_template >> .clangd
```

**Third,** we'll need a
[`compile_commands.json`](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
file.

If we're working on PyTorch bindings, one option is to generate this using
[`bear`](https://github.com/rizsotto/Bear):

```sh
sudo apt update
sudo apt install bear

# From the repository root, 3dgs-exercise/.
#
# This will save a file at 3dgs-exercise/compile_commands.json, which clangd
# should be able to detect.
bear -- pip install -e gsplat/

# Make sure the file is not empty!
cat compile_commands.json
```

Alternatively: if we're working directly in C (and don't need any PyTorch
binding stuff), we can generate via CMake:

```sh
# From 3dgs-exercise/csrc/build.
#
# This will save a file at 3dgs-exercise/csrc/build/compile_commands.json, which
# clangd should be able to detect.
cmake .. -DCMAKE_EXPORT_COMPILE_COMMANDS=on

# Make sure the file is not empty!
cat compile_commands.json
```bash
pip install -r examples/requirements.txt
python examples/simple_trainer.py
```
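
For a sense of the setup, the sketch below loads a target image the way such a fitting script typically would (illustrative only; `target.png` is a placeholder path, and the actual training loop lives in `examples/simple_trainer.py`):

```python
# Illustrative ground-truth setup for fitting gaussians to a 2D image:
# load the target with Pillow and convert it to a float tensor in [0, 1].
# "target.png" is a placeholder path, not a file shipped with the repo.
import numpy as np
import torch
from PIL import Image

img = Image.open("target.png").convert("RGB")
gt = torch.from_numpy(np.array(img)).float() / 255.0  # shape (H, W, 3)
print(gt.shape)
```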

**Known issues**
## Development and Contribution

- The Torch extension include currently raises an error:
`In included file: use of undeclared identifier 'noinline'; did you mean 'inline'?`
This repository was born from the curiosity of people on the Nerfstudio team trying to understand a new rendering technique. We welcome contributions of any kind and are open to feedback, bug reports, and improvements to help expand the capabilities of this software. Please check [docs/DEV.md](docs/DEV.md) for more info about development.
129 changes: 129 additions & 0 deletions docs/DEV.md
@@ -0,0 +1,129 @@
# Development

## Installation

Clone the repository and submodules with

```bash
git clone --recurse-submodules URL
```

For CUDA development, it is recommended to install with `BUILD_NO_CUDA=1`, which
disables compilation during `pip install` and instead JIT-compiles the CUDA code on
your first run. The benefit of JIT compilation is that it recompiles incrementally as
you modify your CUDA code, which is much faster than re-installing through pip. Note
that the JIT-compiled library can be found under `~/.cache/torch_extensions/py*-cu*/`.

```bash
BUILD_NO_CUDA=1 pip install -e .[dev]
```

If you won't touch the underlying CUDA code, you can simply install with compilation enabled:

```bash
pip install -e .[dev]
```
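
As a quick check after either install mode, the rasterization bindings can be inspected from Python; with `BUILD_NO_CUDA=1`, the CUDA code is JIT-compiled on your first run rather than at install time:

```python
# The main python bindings for rasterization are found by importing gsplat.
import gsplat

help(gsplat)
```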

## Protect the Main Branch via Pull Requests

It is recommended to merge code into the main branch through a PR rather than a direct push, since the PR checks protect the main branch when the code breaks tests, while a direct push does not. Also squash the commits before merging the PR so they don't clutter the git history.

The current tests that will be triggered by a PR:

- `.github/workflows/core_tests.yml`: Black formatting and pytest.
- `.github/workflows/doc.yml`: Doc build.

Because we check for black formatting, it is recommended to run black before committing:

```bash
black . gsplat/ tests/ examples/
```

Since the GitHub workflow containers have no GPU support, the CUDA unit tests under `tests/` are not run on PRs. It is therefore recommended to check that the tests pass locally before committing:

```bash
pytest tests/ # check for all tests
pytest tests/test_cov2d_bounds.py # check for a single test file.
```

Note that `pytest` discovers and runs all functions named `test_*`, so you should name your
test functions in this pattern. See `test_cov2d_bounds.py` as an example.
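
For illustration, a minimal test following this naming convention could look like the sketch below (a hypothetical check, not one of the repository's actual tests):

```python
import pytest
import torch


# pytest collects this function automatically because its name starts with `test_`.
@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires a CUDA GPU")
def test_gpu_round_trip():
    x = torch.rand(4, 3)
    # Hypothetical check: moving a tensor to the GPU and back is lossless.
    torch.testing.assert_close(x, x.cuda().cpu())
```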

## Build the Doc Locally

If you want to contribute to the docs, here is how to build them locally. The doc source can be found in `docs/source` and the built docs will be in `_build/`. For reference, here are some examples of documentation setups: [viser](https://github.com/nerfstudio-project/viser/tree/main/docs/source), [nerfstudio](https://github.com/nerfstudio-project/nerfstudio/tree/main/docs), [nerfacc](https://github.com/KAIR-BAIR/nerfacc/tree/master/docs/source).

```bash
pip install -e .[dev]
pip install -r docs/requirements.txt
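# Builds the HTML docs into _build/ (open _build/index.html to view the result).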
sphinx-build docs/source _build
```

## Clangd setup (for Neovim)

[clangd](https://clangd.llvm.org/) is a nice tool for providing completions,
type checking, and other helpful features in C++. It requires some extra effort
to get set up for CUDA development, but there are fortunately only three steps
here.

**First,** we should install a `clangd` extension for our IDE/editor.

For Neovim+lspconfig users, this is very easy: we can simply install `clangd`
via Mason and add a few setup lines in Lua:

```lua
require("lspconfig").clangd.setup{
  capabilities = capabilities
}
```

**Second,** we need to generate a `.clangd` configuration file with the current
CUDA path argument.

Make sure you're in the right environment (with CUDA installed), and then from
the root of the repository, you can run:

```sh
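# Derives the CUDA toolkit root from the location of nvcc, substitutes it for
# YOUR_CUDA_PATH in .clangd_template, and writes the result to .clangd.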
echo "# Autogenerated, see .clangd_template\!" > .clangd && sed -e "/^#/d" -e "s|YOUR_CUDA_PATH|$(dirname $(dirname $(which nvcc)))|" .clangd_template >> .clangd
```

**Third,** we'll need a
[`compile_commands.json`](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
file.

If we're working on PyTorch bindings, one option is to generate this using
[`bear`](https://github.com/rizsotto/Bear):

```sh
sudo apt update
sudo apt install bear

# From the repository root, 3dgs-exercise/.
#
# This will save a file at 3dgs-exercise/compile_commands.json, which clangd
# should be able to detect.
bear -- pip install -e gsplat/

# Make sure the file is not empty!
cat compile_commands.json
```

Alternatively: if we're working directly in C (and don't need any PyTorch
binding stuff), we can generate via CMake:

```sh
# From 3dgs-exercise/csrc/build.
#
# This will save a file at 3dgs-exercise/csrc/build/compile_commands.json, which
# clangd should be able to detect.
cmake .. -DCMAKE_EXPORT_COMPILE_COMMANDS=on

# Make sure the file is not empty!
cat compile_commands.json
```

<!--
**Known issues**
- The Torch extension include currently raises an error:
`In included file: use of undeclared identifier 'noinline'; did you mean 'inline'?` -->
3 changes: 3 additions & 0 deletions examples/requirements.txt
@@ -0,0 +1,3 @@
numpy
tyro
Pillow