Uncertainty-aware Suction Grasping for Cluttered Scenes [RA-L 2024]

Official repository for the paper "Uncertainty-aware Suction Grasping for Cluttered Scenes"


Requirements

Dataset

Download the data and labels from the SuctionNet webpage.

Environment

The code has been tested with CUDA 11.6 and PyTorch 1.13.0 on Ubuntu 20.04.

Installation

Create a new environment:

conda create --name grasp python=3.8

Activate the environment and install PyTorch 1.13.1:

conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
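To confirm that the CUDA build was picked up (an optional sanity check, not part of the original instructions):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

This should print something like 1.13.1, 11.6, and True; if it reports False, the environment resolved a CPU-only build.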

Install Minkowski Engine:

git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas
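If the build completes without errors, a minimal import check (optional) is:

python -c "import MinkowskiEngine as ME; print(ME.__version__)"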

Install prerequisites:

pip install -r requirements.txt

Install suctionnetAPI:

git clone https://github.com/intrepidChw/suctionnms.git
cd suctionnms
pip install .

git clone https://github.com/graspnet/suctionnetAPI
cd suctionnetAPI
pip install .
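As a quick check that both packages installed, you can try importing them (the module names suctionnms and suctionnetAPI are assumed from the repository names; adjust if they differ):

python -c "import suctionnms; import suctionnetAPI"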

Data Preparation

  1. Precompute normal maps for the scenes:

cd dataset
python generate_normal_data.py --dataset_root '/path/to/SuctionNet/dataset'

  2. Precompute suction labels for the scenes (both precompute steps can also be chained; see the example after this list):

cd dataset
python generate_suction_data.py --dataset_root '/path/to/SuctionNet/dataset'

  3. Download the segmentation masks from UOIS and UOAIS.
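If you prefer to run both precompute steps in one pass, the same two commands can simply be chained from the dataset directory:

cd dataset
python generate_normal_data.py --dataset_root '/path/to/SuctionNet/dataset'
python generate_suction_data.py --dataset_root '/path/to/SuctionNet/dataset'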

Usage

Training

For training, use the following command:

bash scripts/train.sh

Evaluation

For evaluation, use the following command, where 'xxxx' denotes the split: 'seen', 'similar', or 'novel':

bash scripts/eval_xxxx.sh
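For example, the three split-specific scripts follow the eval_xxxx.sh pattern above:

bash scripts/eval_seen.sh
bash scripts/eval_similar.sh
bash scripts/eval_novel.sh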

Pre-trained Models

The pre-trained models for the GraspNet dataset can be found here.

Citation

If you find our work useful, please cite:

@ARTICLE{USIN_grasp,
  author={Cao, Rui and Yang, Biqi and Li, Yichuan and Fu, Chi-Wing and Heng, Pheng-Ann and Liu, Yun-Hui},
  journal={IEEE Robotics and Automation Letters}, 
  title={Uncertainty-Aware Suction Grasping for Cluttered Scenes}, 
  year={2024},
  volume={9},
  number={6},
  pages={4934-4941},
  keywords={Grasping;Uncertainty;Point cloud compression;Robots;Noise measurement;Three-dimensional displays;Predictive models;Deep learning in grasping and manipulation;perception for grasping and manipulation;computer vision for automation},
  doi={10.1109/LRA.2024.3385609}}

Contact

If you have any questions about this work, feel free to contact Rui Cao at [email protected].