From c9b4a44c7db3921c1216d5ad4e417599d82e8bcd Mon Sep 17 00:00:00 2001
From: Ruilong Li <397653553@qq.com>
Date: Thu, 12 Oct 2023 18:29:26 -0700
Subject: [PATCH 1/2] workflow for pypi publish

---
 .github/workflows/publish.yml | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 .github/workflows/publish.yml

diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml
new file mode 100644
index 000000000..f8070aed6
--- /dev/null
+++ b/.github/workflows/publish.yml
@@ -0,0 +1,33 @@
+# This workflow will upload a Python Package using twine when a release is created
+# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
+
+name: Upload Python Package
+
+on:
+  release:
+    types: [created]
+    branches: [main]
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    environment: production
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.7'
+      - name: Install dependencies
+        run: |
+          python -m pip install build twine
+      - name: Strip unsupported tags in README
+        run: |
+          sed -i '//,//d' README.md
+      - name: Build and publish
+        env:
+          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
+        run: |
+          BUILD_NO_CUDA=1 python -m build
+          twine upload --username __token__ --password $PYPI_TOKEN dist/*
\ No newline at end of file

From 89b0d422bc257676241cc456e5b01f7f640c7f08 Mon Sep 17 00:00:00 2001
From: Ruilong Li <397653553@qq.com>
Date: Thu, 12 Oct 2023 18:56:37 -0700
Subject: [PATCH 2/2] update readme and DEV.md

---
 README.md                 | 140 +++++--------------------------
 docs/DEV.md               | 129 +++++++++++++++++++++++++++++++++++
 examples/requirements.txt |   3 +
 3 files changed, 150 insertions(+), 122 deletions(-)
 create mode 100644 docs/DEV.md
 create mode 100644 examples/requirements.txt

diff --git a/README.md b/README.md
index 86c3e5753..f23c0d466 100644
--- a/README.md
+++ b/README.md
@@ -1,143 +1,39 @@
 # gsplat
-Our version of differentiable gaussian rasterizer
+[![Core Tests.](https://github.com/nerfstudio-project/gsplat/actions/workflows/core_tests.yml/badge.svg?branch=main)](https://github.com/nerfstudio-project/gsplat/actions/workflows/core_tests.yml)
+[![Docs](https://github.com/nerfstudio-project/gsplat/actions/workflows/doc.yml/badge.svg?branch=main)](https://github.com/nerfstudio-project/gsplat/actions/workflows/doc.yml)
-## Installation
-
-Clone the repository and submodules with
-
-```bash
-git clone --recurse-submodules URL
-```
-
-For CUDA development, it is recommend to install with `BUILD_NO_CUDA=1`, which
-will disable compiling during pip install, and instead use JIT compiling on your
-first run. The benefit of JIT compiling is that it does incremental compiling as
-you modify your cuda code so it is much faster than re-compile through pip. Note
-the JIT compiled library can be found under `~/.cache/torch_extensions/py*-cu*/`.
+[https://nerfstudio-project.github.io/gsplat/](https://nerfstudio-project.github.io/gsplat/)
-```bash
-BUILD_NO_CUDA=1 pip install -e .[dev]
-```
-
-If you won't touch the underlying CUDA code, you can just install with compiling:
-
-```bash
-pip install -e .[dev]
-```
+gsplat is an open-source library for CUDA-accelerated rasterization of gaussians with Python bindings. It is inspired by the SIGGRAPH paper [3D Gaussian Splatting for Real-Time Rendering of Radiance Fields](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). This library contains the necessary components for efficient 3D to 2D projection, sorting, and alpha compositing of gaussians and their associated backward passes for inverse rendering.
-## Development
+![Teaser](/docs/source/imgs/training.gif?raw=true)
-## Protect Main Branch over Pull Request.
-
-It is recommended to commit the code into the main branch as a PR over a hard push, as the PR would protect the main branch if the code break tests but a hard push won't. Also squash the commits before merging the PR so it won't span the git history.
-
-The curret tests that will be triggered by PR:
+## Installation
-- `.github/workflows/core_tests.yml`: Black formating. Pytests.
-- `.github/workflows/doc.yml`: Doc build.
+**Dependencies**: Please install [PyTorch](https://pytorch.org/get-started/locally/) first.
-Because we check for black formatting, it is recommend to run black before commit in the code:
+The easiest way is to install from PyPI. This will build the CUDA code **on the first run** (JIT).
 ```bash
-black . gsplat/ tests/ examples/
+pip install gsplat
 ```
-Since there is no GPU supported on github workflow container, we don't test against those cuda unit tests under `tests/` in PR. So it is recommended to check test pass locally before committing:
+Or install from source. This will build the CUDA code during installation.
 ```bash
-pytest tests/ # check for all tests
-pytest tests/test_cov2d_bounds.py # check for a single test file.
+pip install git+https://github.com/nerfstudio-project/gsplat.git
 ```
-Note that `pytest` recognizes and runs all functions named as `test_*`, so you should name the
-test functions in this pattern. See `test_cov2d_bounds.py` as an example.
-
-
-## Build the Doc Locally
-If you want to contribute to the doc, here is the way to build it locally. The source code of the doc can be found in `docs/source` and the built doc will be in `_build/`. If you are interested in contributing with the doc, here are some examples on documentation: [viser](https://github.com/nerfstudio-project/viser/tree/main/docs/source), [nerfstudio](https://github.com/nerfstudio-project/nerfstudio/tree/main/docs), [nerfacc](https://github.com/KAIR-BAIR/nerfacc/tree/master/docs/source).
-
-```
-pip install -e .[dev]
-pip install -r docs/requirements.txt
-sphinx-build docs/source _build
-```
-
-
-
-# Brief walkthrough
+## Examples
-The main python bindings for rasterization are found by importing gsplat
+Fit a 2D image with 3D Gaussians.
-```
-import gsplat
-help(gsplat)
-```
-
-# clangd setup (for Neovim)
-
-[clangd](https://clangd.llvm.org/) is a nice tool for providing completions,
-type checking, and other helpful features in C++. It requires some extra effort
-to get set up for CUDA development, but there are fortunately only three steps
-here.
-
-**First,** we should install a `clangd` extension for our IDE/editor.
-
-For Neovim+lspconfig users, this is very easy, we can simply install `clangd`
-via Mason and add a few setup lines in Lua:
-
-```lua
-require("lspconfig").clangd.setup{
-  capabilities = capabilities
-}
-```
-
-**Second,** we need to generate a `.clangd` configuration file with the current
-CUDA path argument.
-
-Make sure you're in the right environment (with CUDA installed), and then from
-the root of the repository, you can run:
-
-```sh
-echo "# Autogenerated, see .clangd_template\!" > .clangd && sed -e "/^#/d" -e "s|YOUR_CUDA_PATH|$(dirname $(dirname $(which nvcc)))|" .clangd_template >> .clangd
-```
-
-**Third,** we'll need a
-[`compile_comands.json`](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
-file.
-
-If we're working on PyTorch bindings, one option is to generate this using
-[`bear`](https://github.com/rizsotto/Bear):
-
-```sh
-sudo apt update
-sudo apt install bear
-
-# From the repository root, 3dgs-exercise/.
-#
-# This will save a file at 3dgs-exercise/compile_commands.json, which clangd
-# should be able to detect.
-bear -- pip install -e gsplat/
-
-# Make sure the file is not empty!
-cat compile_commands.json
-```
-
-Alternatively: if we're working directly in C (and don't need any PyTorch
-binding stuff), we can generate via CMake:
-
-```sh
-# From 3dgs-exercise/csrc/build.
-#
-# This will save a file at 3dgs-exercise/csrc/build/compile_commands.json, which
-# clangd should be able to detect.
-cmake .. -DCMAKE_EXPORT_COMPILE_COMMANDS=on
-
-# Make sure the file is not empty!
-cat compile_commands.json
+```bash
+pip install -r examples/requirements.txt
+python examples/simple_trainer.py
 ```
-**Known issues**
+## Development and Contribution
-
-- The torch extensions include currently raises an error:
-  `In included file: use of undeclared identifier 'noinline'; did you mean 'inline'?`
+This repository was born from the curiosity of people on the Nerfstudio team trying to understand a new rendering technique. We welcome contributions of any kind and are open to feedback, bug reports, and improvements to help expand the capabilities of this software. Please check [docs/DEV.md](docs/DEV.md) for more information about development.
\ No newline at end of file
diff --git a/docs/DEV.md b/docs/DEV.md
new file mode 100644
index 000000000..bb2cd35f7
--- /dev/null
+++ b/docs/DEV.md
@@ -0,0 +1,129 @@
+# Development
+
+## Installation
+
+Clone the repository and submodules with
+
+```bash
+git clone --recurse-submodules URL
+```
+
+For CUDA development, it is recommended to install with `BUILD_NO_CUDA=1`, which
+will disable compiling during pip install, and instead use JIT compiling on your
+first run. The benefit of JIT compiling is that it compiles incrementally as
+you modify your CUDA code, so it is much faster than recompiling through pip. Note
+that the JIT-compiled library can be found under `~/.cache/torch_extensions/py*-cu*/`.
+
+```bash
+BUILD_NO_CUDA=1 pip install -e .[dev]
+```
+
+If you won't touch the underlying CUDA code, you can just install with compilation enabled:
+
+```bash
+pip install -e .[dev]
+```
+
+## Protect the Main Branch via Pull Requests
+
+It is recommended to merge code into the main branch via a PR rather than a hard push, as the PR protects the main branch if the code breaks tests, while a hard push won't. Also squash the commits before merging the PR so they won't spam the git history.
+
+The current tests that will be triggered by a PR:
+
+- `.github/workflows/core_tests.yml`: Black formatting. Pytest.
+- `.github/workflows/doc.yml`: Doc build.
+
+Because we check for black formatting, it is recommended to run black before committing the code:
+
+```bash
+black . gsplat/ tests/ examples/
+```
+
+Since there is no GPU support on the GitHub workflow containers, the CUDA unit tests under `tests/` are not run on PRs. So it is recommended to check that the tests pass locally before committing:
+
+```bash
+pytest tests/ # check for all tests
+pytest tests/test_cov2d_bounds.py # check for a single test file.
+```
+
+Note that `pytest` recognizes and runs all functions named `test_*`, so you should name the
+test functions in this pattern. See `test_cov2d_bounds.py` as an example.
+
+## Build the Doc Locally
+
+If you want to contribute to the doc, here is how to build it locally. The source code of the doc can be found in `docs/source` and the built doc will be in `_build/`. If you are interested in contributing to the doc, here are some good examples of documentation: [viser](https://github.com/nerfstudio-project/viser/tree/main/docs/source), [nerfstudio](https://github.com/nerfstudio-project/nerfstudio/tree/main/docs), [nerfacc](https://github.com/KAIR-BAIR/nerfacc/tree/master/docs/source).
+
+```bash
+pip install -e .[dev]
+pip install -r docs/requirements.txt
+sphinx-build docs/source _build
+```
+
+## Clangd setup (for Neovim)
+
+[clangd](https://clangd.llvm.org/) is a nice tool for providing completions,
+type checking, and other helpful features in C++. It requires some extra effort
+to get set up for CUDA development, but there are fortunately only three steps
+here.
+
+**First,** we should install a `clangd` extension for our IDE/editor.
+
+For Neovim+lspconfig users, this is very easy: we can simply install `clangd`
+via Mason and add a few setup lines in Lua:
+
+```lua
+require("lspconfig").clangd.setup{
+  capabilities = capabilities
+}
+```
+
+**Second,** we need to generate a `.clangd` configuration file with the current
+CUDA path argument.
+
+Make sure you're in the right environment (with CUDA installed), and then from
+the root of the repository, you can run:
+
+```sh
+echo "# Autogenerated, see .clangd_template\!" > .clangd && sed -e "/^#/d" -e "s|YOUR_CUDA_PATH|$(dirname $(dirname $(which nvcc)))|" .clangd_template >> .clangd
+```
+
+**Third,** we'll need a
+[`compile_commands.json`](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
+file.
+
+If we're working on PyTorch bindings, one option is to generate this using
+[`bear`](https://github.com/rizsotto/Bear):
+
+```sh
+sudo apt update
+sudo apt install bear
+
+# From the repository root, 3dgs-exercise/.
+#
+# This will save a file at 3dgs-exercise/compile_commands.json, which clangd
+# should be able to detect.
+bear -- pip install -e gsplat/
+
+# Make sure the file is not empty!
+cat compile_commands.json
+```
+
+Alternatively: if we're working directly in C (and don't need any PyTorch
+binding stuff), we can generate via CMake:
+
+```sh
+# From 3dgs-exercise/csrc/build.
+#
+# This will save a file at 3dgs-exercise/csrc/build/compile_commands.json, which
+# clangd should be able to detect.
+cmake .. -DCMAKE_EXPORT_COMPILE_COMMANDS=on
+
+# Make sure the file is not empty!
+cat compile_commands.json
+```
+
+
diff --git a/examples/requirements.txt b/examples/requirements.txt
new file mode 100644
index 000000000..091c28e28
--- /dev/null
+++ b/examples/requirements.txt
@@ -0,0 +1,3 @@
+numpy
+tyro
+Pillow
\ No newline at end of file
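As an aside on the DEV.md guidance in the patch that `pytest` only collects functions named `test_*`: a minimal sketch of that convention, with hypothetical function names that are not part of this patch, might look like this in a file under `tests/`:

```python
# Hypothetical sketch of the `test_*` naming convention described in DEV.md.
# pytest collects test_upper_triangle_diagonal because its name starts with
# `test_`; the helper below is ignored by test collection.

def upper_triangle_identity():
    # Helper (not collected): flattened upper triangle of a 3x3 identity
    # covariance, packed as [xx, xy, xz, yy, yz, zz].
    return [1.0, 0.0, 0.0, 1.0, 0.0, 1.0]

def test_upper_triangle_diagonal():
    cov = upper_triangle_identity()
    # Diagonal entries sit at indices 0, 3, and 5 in this packing.
    assert cov[0] == cov[3] == cov[5] == 1.0
```

Running `pytest tests/` would pick up only the `test_`-prefixed function; the helper is callable from tests but never run on its own.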