Update README, fix pose graph timing

brentyi committed Dec 11, 2024
1 parent 8c43999 commit 91d0d9a

Showing 2 changed files with 52 additions and 52 deletions.
97 changes: 45 additions & 52 deletions README.md
@@ -2,17 +2,23 @@

[![pyright](https://github.com/brentyi/jaxls/actions/workflows/pyright.yml/badge.svg)](https://github.com/brentyi/jaxls/actions/workflows/pyright.yml)

**`jaxls`** is a library for solving sparse [NLLS](https://en.wikipedia.org/wiki/Non-linear_least_squares) and [IRLS](https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares) problems in JAX.
These are common in classical robotics and computer vision, as well as for MAP estimation with Gaussian likelihoods.
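
For reference, the NLLS problem is to minimize a sum of squared residuals over parameters $x$:

$$\min_x \sum_i \lVert r_i(x) \rVert_2^2$$

IRLS handles robust losses by repeatedly solving a weighted variant, $\min_x \sum_i w_i \lVert r_i(x) \rVert_2^2$, with the weights $w_i$ recomputed from the current residuals.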

To install on Python 3.12+:

```bash
pip install git+https://github.com/brentyi/jaxls.git
```

### Overview

We provide a factor graph interface for specifying and solving least squares
problems. **`jaxls`** takes advantage of structure in graphs: repeated factor
and variable types are vectorized, and sparsity of adjacency is translated into
sparse matrix operations.

Supported:

- Automatic sparse Jacobians.
- Optimization on manifolds.
@@ -22,46 +28,16 @@
  - Sparse iterative with Conjugate Gradient.
    - Preconditioning: block and point Jacobi.
    - Inexact Newton via Eisenstat-Walker.
    - Recommended for most problems.
  - Dense Cholesky.
    - Fast for small problems.
  - Sparse Cholesky, on CPU. (CHOLMOD)

`jaxls` borrows heavily from libraries like
[GTSAM](https://gtsam.org/), [Ceres Solver](http://ceres-solver.org/),
[minisam](https://github.com/dongjing3309/minisam),
[SwiftFusion](https://github.com/borglab/SwiftFusion),
and [g2o](https://github.com/RainerKuemmerle/g2o).

### Pose graph example

@@ -111,8 +87,12 @@ factors = [
Factors with similar structure, like the first two in this example, will be
vectorized under the hood.
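
The variable and factor definitions themselves are collapsed in this diff. As a rough sketch of what they can look like (assuming the `jaxls.Factor(residual_fn, args)` constructor and [jaxlie](https://github.com/brentyi/jaxlie) pose types; the residual functions here are illustrative):

```python
import jaxlie
import jaxls

# Variables: two SE(2) poses, keyed by integer ID.
pose_vars = [jaxls.SE2Var(0), jaxls.SE2Var(1)]

def prior_residual(vals: jaxls.VarValues, var: jaxls.SE2Var, target: jaxlie.SE2):
    # Tangent-space error between the current estimate and a prior pose.
    return (vals[var].inverse() @ target).log()

def between_residual(vals: jaxls.VarValues, a: jaxls.SE2Var, b: jaxls.SE2Var, delta: jaxlie.SE2):
    # Error between the observed and estimated relative pose.
    return ((vals[a].inverse() @ vals[b]).inverse() @ delta).log()

factors = [
    # Two priors with identical structure: factors like these are vectorized together.
    jaxls.Factor(prior_residual, (pose_vars[0], jaxlie.SE2.from_xy_theta(0.0, 0.0, 0.0))),
    jaxls.Factor(prior_residual, (pose_vars[1], jaxlie.SE2.from_xy_theta(2.0, 0.0, 0.0))),
    # A relative "between" factor connecting the two poses.
    jaxls.Factor(between_residual, (pose_vars[0], pose_vars[1], jaxlie.SE2.from_xy_theta(2.0, 0.0, 0.0))),
]
```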

Batched inputs can also be manually constructed, and are detected by inspecting
the shape of variable ID arrays in the input. Just add a leading batch axis to
all elements in the arguments tuple.
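
As a sketch of that, here is one batched factor standing in for three priors. It reuses the hypothetical `prior_residual` helper from the sketch above, and assumes that variable constructors accept an array of IDs and that `jaxlie` group objects take a `batch_axes` argument:

```python
import jax.numpy as jnp

# One Factor object in place of three prior factors: the variable gets a
# leading batch axis via an ID array, and the parameters are stacked to match.
batched_prior = jaxls.Factor(
    prior_residual,
    (
        jaxls.SE2Var(jnp.arange(3)),           # Variable IDs 0, 1, 2: batch axis of 3.
        jaxlie.SE2.identity(batch_axes=(3,)),  # Matching (3,) batch of target poses.
    ),
)
```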

**Solving optimization problems.** To set up the optimization problem, solve
it, and print solutions:

```python
graph = jaxls.FactorGraph.make(factors, pose_vars)
solution = graph.solve()
print("Pose 0", solution[pose_vars[0]])
print("Pose 1", solution[pose_vars[1]])
```

### CHOLMOD setup

By default, we use an iterative linear solver. This requires no extra
dependencies. For some problems, like those with banded matrices, a direct
solver can be much faster.

For Cholesky factorization via CHOLMOD, we rely on SuiteSparse:

```bash
# Option 1: via conda.
conda install conda-forge::suitesparse

# Option 2: via apt.
sudo apt install -y libsuitesparse-dev
```

You'll also need _scikit-sparse_:

```bash
pip install scikit-sparse
```
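
With SuiteSparse and scikit-sparse in place, the direct solver can be selected when solving. A minimal sketch, mirroring the `linear_solver` argument used in the pose graph example script; the exact accepted values are an assumption, so check the `graph.solve()` signature:

```python
# Assumed solver names below; "cholmod" runs on CPU via scikit-sparse.
solution = graph.solve(linear_solver="cholmod")
# solution = graph.solve(linear_solver="dense_cholesky")      # Dense, for small problems.
# solution = graph.solve(linear_solver="conjugate_gradient")  # Default iterative solver.
```
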
7 changes: 7 additions & 0 deletions examples/pose_graph_g2o.py
@@ -16,6 +16,10 @@

import _g2o_utils

jax.config.update("jax_compilation_cache_dir", "/tmp/jax_cache")
jax.config.update("jax_persistent_cache_min_entry_size_bytes", -1)
jax.config.update("jax_persistent_cache_min_compile_time_secs", 0)


def main(
    g2o_path: pathlib.Path = pathlib.Path(__file__).parent / "data/input_M3500_g2o.g2o",
@@ -40,16 +44,19 @@ def main(
                for pose_var, pose in zip(g2o.pose_vars, g2o.initial_poses)
            )
        )
        # Block until values are materialized, so the stopwatch measures actual
        # work: JAX dispatches computations asynchronously.
        jax.block_until_ready(initial_vals)

with jaxls.utils.stopwatch("Running solve"):
solution_vals = graph.solve(
initial_vals, trust_region=None, linear_solver=linear_solver
)
jax.block_until_ready(solution_vals)

with jaxls.utils.stopwatch("Running solve (again)"):
solution_vals = graph.solve(
initial_vals, trust_region=None, linear_solver=linear_solver
)
jax.block_until_ready(solution_vals)

    # Plot
    plt.figure()
