
ARMLab code for Next Best Sense work in uncertainty-guided view and touch exploration.


armlabstanford/NextBestSense


🤖 Next Best Sense: Guiding Vision and Touch with FisherRF for 3D Gaussian Splatting

Submitted to IEEE International Conference on Robotics & Automation (ICRA) 2025


Project Page | arXiv

This repo houses the core code for Next Best Sense. We use Docker exclusively for this work, which keeps the code self-contained and avoids cluttering the host OS.

Quick Start and Setup

The pipeline has been tested on Ubuntu 22.04. To avoid installation pain and dependency conflicts, we provide a publicly available Dockerfile that includes everything you need here.

To pull the prebuilt image, run:

docker pull peasant98/active-touch-gs:latest
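
The exact run command depends on your setup; a minimal sketch for starting the container with GPU access and host networking (so ROS nodes inside the container can reach a ROS master on the host) is below. It assumes the NVIDIA Container Toolkit is installed, and the volume mount path is just a placeholder.

docker run --gpus all -it --rm \
    --network host \
    -v /path/to/your/data:/workspace/data \
    peasant98/active-touch-gs:latest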

Requirements (Not Using Docker):

  • CUDA 11+ and a GPU with at least 16GB VRAM
  • Python 3.8+
  • ROS1 Noetic
  • Conda or Mamba (optional)
  • Kinova Gen3 robot (7 DoF)
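
To sanity-check a non-Docker environment against the requirements above, a few standard commands (a hedged sketch; none of these are specific to this repo) are:

nvidia-smi            # GPU, driver, and CUDA version
python3 --version     # should report 3.8+
rosversion -d         # should print "noetic"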

Dependencies (from Nerfstudio)

Install PyTorch with CUDA (this repo has been tested with CUDA 11.8 and CUDA 12.1).

For CUDA 11.8:

conda create --name touch-gs python=3.8
conda activate touch-gs

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
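
To confirm that PyTorch was installed with CUDA support and can see the GPU before moving on, a quick check is:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"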

See Dependencies in the Installation documentation for more.

Repo Cloning

This repository should be used as a group of packages in a ROS1 workspace for controlling a Kinova arm.
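
A minimal sketch of this setup, assuming a fresh catkin workspace at ~/catkin_ws (the workspace path is just an example, not something this repo requires):

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/armlabstanford/NextBestSense.git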

Install Our Version of Nerfstudio

Note that we maintain our own version of Nerfstudio, which supports active learning; it can be found here.

To install, the steps are:

git clone https://github.com/JiangWenPL/FisherRF-ns
cd FisherRF-ns

# install the package in editable mode for easy development
python3 -m pip install -e . -v
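
If the install succeeds, Nerfstudio's command-line entry points should be available (assuming our fork keeps the standard CLI); a quick check is:

ns-train --help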

Getting Next Best Sense Setup and Training

First, build the workspace:

# run from the root of the ROS workspace (outside the NextBestSense dir)
catkin build
source devel/setup.bash  # or install/setup.bash if you build with an install space
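
If any packages fail to build because of missing system dependencies, standard ROS tooling (not specific to this repo) can usually resolve them from the workspace root:

rosdep install --from-paths src --ignore-src -r -y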

Then, run the launch file, which starts the controller and vision nodes. Assuming you have our version of Nerfstudio installed, this works as follows:

  1. Run the Kinova pipeline:
roslaunch kinova_control moveit_controller.launch
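
Once the launch file is up, you can sanity-check that the controller and vision nodes are running with standard ROS introspection tools (node and topic names depend on the launch configuration):

rosnode list
rostopic list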

We have built an end-to-end pipeline that takes care of setting up the data, training, and evaluating our method. Note that we will release the code for running the ablations (including the baselines) soon!

Get Rendered Video

You can render videos along a custom camera path, as detailed here. This is how we generated the videos on our website.
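
A hedged example of rendering along a saved camera path using Nerfstudio's standard CLI (assuming our fork keeps the ns-render command; all paths below are placeholders):

ns-render camera-path \
    --load-config outputs/<experiment>/<method>/<timestamp>/config.yml \
    --camera-path-filename camera_path.json \
    --output-path renders/scene.mp4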
