Grasping is crucial for many daily activities, and its impairment considerably impacts quality of life and autonomy. Attempts to restore this function may rely on various approaches and devices (functional electrical stimulation, exoskeletons, prostheses…) whose command modalities often exert a considerable cognitive load on users and lack controllability and intuitiveness in daily life. i-GRIP paves the way to a novel user interface for grasping movement control, in which the user delegates the grasping task decisions to the device and only moves their (potentially prosthetic) hand towards the targeted object.
The information required to assist an ongoing grasping task is the following: 1) hand position and orientation; 2) object position and orientation; 3) object nature (including shape and, potentially, weight and texture). We use an OAK-D S2 (Luxonis) stereoscopic RGB camera as the data acquisition sensor. Hand pose estimation is performed using Google's MediaPipe, leveraging stereoscopic vision for depth estimation with DepthAI. Object identification and pose estimation are achieved using CosyPose, a multi-object 6D pose estimator trained on a set of objects with known 3D models.
Three metrics are concurrently used to analyse a hand's movement (see the sketch after this list):
- distance to every detected object
- time derivative of the distance to every detected object
- impacts, on every detected object's mesh, of cones of rays built upon the extrapolated hand trajectory
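For illustration, here is a minimal sketch of how these three metrics could be computed, assuming hand positions sampled at a fixed rate and object meshes loaded with trimesh; the function name, cone parameters and ray sampling are assumptions, not i-GRIP's actual implementation.

```python
# Hypothetical sketch of the three movement metrics (not i-GRIP's actual code).
import numpy as np
import trimesh


def movement_metrics(hand_positions, dt, object_meshes, cone_half_angle_deg=10.0, n_rays=20):
    """hand_positions: (T, 3) array of recent 3D hand positions (most recent last), T >= 2.
    object_meshes: dict mapping object names to trimesh.Trimesh meshes in the same frame."""
    p = np.asarray(hand_positions, dtype=float)
    velocity = (p[-1] - p[-2]) / dt                           # finite-difference hand velocity
    direction = velocity / (np.linalg.norm(velocity) + 1e-9)  # extrapolated trajectory direction

    # Build a cone of rays around the extrapolated trajectory direction
    rng = np.random.default_rng(0)
    half_angle = np.deg2rad(cone_half_angle_deg)
    rays = []
    for _ in range(n_rays):
        u = rng.normal(size=3)
        u -= u @ direction * direction                        # keep only the orthogonal component
        u /= np.linalg.norm(u) + 1e-9
        angle = half_angle * rng.uniform()
        rays.append(np.cos(angle) * direction + np.sin(angle) * u)
    rays = np.asarray(rays)
    origins = np.repeat(p[-1][None, :], n_rays, axis=0)

    metrics = {}
    for name, mesh in object_meshes.items():
        # 1) distance from the current hand position to the object's mesh
        _, dist, _ = trimesh.proximity.closest_point(mesh, p[-1][None, :])
        # 2) time derivative of that distance (finite difference over the last two samples)
        _, prev_dist, _ = trimesh.proximity.closest_point(mesh, p[-2][None, :])
        dist_derivative = (dist[0] - prev_dist[0]) / dt
        # 3) number of cone rays that hit the object's mesh
        hits = mesh.ray.intersects_any(origins, rays)
        metrics[name] = {"distance": dist[0],
                         "distance_derivative": dist_derivative,
                         "ray_impacts": int(hits.sum())}
    return metrics
```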
To set up i-GRIP, create a conda environment from the provided environment.yml:
conda env create -f environment.yml
The installation may take some time as several packages must be downloaded and installed/compiled.
As torch installation is rarely handled well through environment.yml, run:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118
Clone the cosypose repository from here, without installing the dependencies from its environment.yml. That is to say:
- either: use GitKraken to clone the repo, initialise submodules and pull LFS files,
- or run:
git clone --recurse-submodules https://github.com/Simple-Robotics/cosypose.git
cd cosypose
git lfs pull
Then, from the cosypose folder, run:
python setup.py develop
Note: in cosypose/lib3d/transform.py, you might need to comment out the 4th line:
# eigenpy.switchToNumpyArray()
In your i-GRIP folder, run
pip install -e .
Required data paths are declared in config.py and may have to be adapted to your own folder layout.
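As an illustration, the declarations you may need to adapt could look like the following; the variable names and paths below are assumptions, not necessarily those actually used in config.py.

```python
# Hypothetical example of path declarations to adapt to your own folder layout
# (variable names are assumptions, check config.py for the real ones).
from pathlib import Path

COSYPOSE_DIR = Path("~/cosypose").expanduser()                 # where you cloned cosypose
LOCAL_DATA_DIR = COSYPOSE_DIR / "local_data"                   # datasets, models, URDFs
MEDIAPIPE_MODEL_PATH = Path("~/models/hand_landmarker.task").expanduser()  # hand landmarker model
```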
Create a local_data folder (a real folder or a symbolic link) in your cosypose folder. You can then run download.py to automatically download the required data.
This script requires rclone and the wget Python package, which can be installed by running:
curl https://rclone.org/install.sh | sudo bash
pip install wget
Follow these instructions to download the neural networks and 3D models. You may run:
python -m cosypose.scripts.download --bop_dataset=ycbv
python -m cosypose.scripts.download --bop_dataset=tless
python -m cosypose.scripts.download --urdf_models=ycbv
python -m cosypose.scripts.download --urdf_models=tless.cad
python -m cosypose.scripts.download --model='detector-bop-ycbv-synt+real--292971'
python -m cosypose.scripts.download --model='coarse-bop-tless-synt+real--160982'
python -m cosypose.scripts.download --model='refiner-bop-tless-synt+real--881314'
python -m cosypose.scripts.download --model='detector-bop-tless-synt+real--452847'
python -m cosypose.scripts.download --model='coarse-bop-ycbv-synt+real--822463'
python -m cosypose.scripts.download --model='refiner-bop-ycbv-synt+real--631598'
To get MediaPipe's hand landmarker model, see: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker
or download it directly: https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/latest/hand_landmarker.task
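To check that the downloaded model loads, here is a minimal sketch using MediaPipe's tasks API; it assumes the file was saved as hand_landmarker.task in the working directory and that a test image test_image.jpg is available (both names are examples, and this check is not part of i-GRIP itself).

```python
# Minimal check that the hand landmarker model loads and runs on one image.
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

options = vision.HandLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="hand_landmarker.task"),
    num_hands=2,
)
detector = vision.HandLandmarker.create_from_options(options)

image = mp.Image.create_from_file("test_image.jpg")   # any RGB image showing a hand
result = detector.detect(image)
print(f"Detected {len(result.hand_landmarks)} hand(s)")
```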
Depending on the performance of your setup, the ray tracing used in the algorithm may be slow with the original, high-definition ycbv and t-less 3D meshes. If that is the case, you may run the script simplify_meshes.py and adjust faces_factor to your preferences.
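For reference, mesh decimation can be done along the following lines; this is only an illustration of the idea (here with open3d), not necessarily how simplify_meshes.py works, and the mesh location below is an assumption about your local_data layout.

```python
# Illustrative mesh decimation: keep roughly faces_factor of each mesh's faces.
import glob
import open3d as o3d

faces_factor = 0.1   # keep ~10% of the original faces (tune to your preferences)

# Assumed location of the YCB-V meshes inside cosypose's local_data folder.
for path in glob.glob("local_data/urdfs/ycbv/**/*.obj", recursive=True):
    mesh = o3d.io.read_triangle_mesh(path)
    target = max(100, int(len(mesh.triangles) * faces_factor))
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    o3d.io.write_triangle_mesh(path.replace(".obj", "_simplified.obj"), simplified)
```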
i-GRIP was designed and tested using OAK-D cameras to reconstruct depth maps along RGB frames. Yet, i-GRIP could in theory be used with any RGB-D device, at the cost of rewriting your own RgbdCameras.py file.
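As a starting point, a wrapper for another device could expose an interface along these lines; the class and method names below are hypothetical, so check RgbdCameras.py for what i-GRIP actually expects.

```python
# Hypothetical wrapper for an alternative RGB-D device (method names are assumptions).
import numpy as np


class MyRgbdCamera:
    def __init__(self, resolution=(1280, 720)):
        self.resolution = resolution
        # open your device here (e.g. a RealSense or Kinect pipeline)

    def start(self):
        ...  # start streaming

    def next_frames(self):
        """Return the latest aligned (rgb, depth) pair as numpy arrays."""
        width, height = self.resolution
        rgb = np.zeros((height, width, 3), dtype=np.uint8)   # placeholder RGB frame
        depth = np.zeros((height, width), dtype=np.uint16)   # placeholder depth map (mm)
        return rgb, depth

    def stop(self):
        ...  # release the device
```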
CosyPose was trained on two different datasets (ycbv and t-less), which can be used interchangeably in i-GRIP. If you don't have real objects from these datasets at your disposal, you can download sample images from TODO
In your i_grip environment, run:
python run_i_grip.py