
i-GRIP: a vision-based interface for grasping intention detection and grip selection


Overview

Grasping is crucial for many daily activities, and its impairment considerably impacts quality of life and autonomy. Attempts to restore this function rely on various approaches and devices (functional electrical stimulation, exoskeletons, prostheses…) whose command modalities often exert a considerable cognitive load on users and lack controllability and intuitiveness in daily life. i-GRIP paves the way to a novel user interface for grasping movement control, in which the user delegates grasping decisions to the device and only moves their (potentially prosthetic) hand toward the targeted object.

Hardware

Camera

i-GRIP was designed and tested with OAK-D cameras, which reconstruct depth maps alongside RGB frames. In theory, however, this code could be used with any RGB-D device, at the cost of adapting the RgbdCameras.py file to your own framework; a possible shape for such an adaptation is sketched below.
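
As a hedged illustration only (the actual API exposed by RgbdCameras.py may differ), an adapter for another RGB-D framework might implement a minimal interface like the following; the class and method names here are hypothetical:

import numpy as np

class MyRgbdCamera:
    """Hypothetical adapter exposing the minimal RGB-D capabilities
    the rest of the pipeline is likely to need."""

    def __init__(self, resolution=(1280, 720), fps=30):
        # Open your device here (RealSense, Kinect, ...) and configure
        # its streams at the requested resolution and frame rate.
        self.resolution = resolution
        self.fps = fps

    def next_frames(self):
        # Return a synchronized pair: an RGB image (H x W x 3, uint8)
        # and a depth map (H x W, uint16), aligned to the RGB frame.
        rgb = np.zeros((*self.resolution[::-1], 3), dtype=np.uint8)
        depth = np.zeros(self.resolution[::-1], dtype=np.uint16)
        return rgb, depth

    def stop(self):
        # Release the device.
        pass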

Installation

Follow the DepthAI (the software used to operate OAK cameras) installation instructions: https://docs.luxonis.com/hardware/platform/deploy/usb-deployment-guide/

Then, create an environment with the required dependencies:

conda env create -f environment.yml

The installation may take some time as several packages must be downloaded and installed/compiled.
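
Once the environment is created, activate it before running any of the scripts. The environment name is defined in the name: field of environment.yml; it is assumed to be i_grip in the command below, so adjust it to match your file:

conda activate i_grip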

Files and paths

Required data paths are declared in config.py and may have to be adapted to your own directory structure.
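
As a purely hypothetical illustration (the actual variable names in config.py may differ), the declarations to adapt might look like:

import os

# Root folder where experiment sessions (e.g. "Session_X") are stored.
EXPERIMENT_FOLDER = os.path.expanduser("~/experiments/i_grip")

# Path to the MediaPipe hand landmarker model (see the Mediapipe section).
HAND_LANDMARKER_MODEL = os.path.join(EXPERIMENT_FOLDER, "models", "hand_landmarker.task")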

"Session_X" folder must be moved to your experiment folder and contains : - "Session_X_participants_database.csv" : _ Each column header represents a parameter that will need to be filled for every participant. This file is read when the session is launched and the interface is populated with entries for every paramater found in the file. This file must be modified according to the data you need to manually collect for each participant. - "Session_X_participants_pseudos_database.csv" : _ Stores the pseudo generated by the interface for each participant. This file should be modified by hand only to erase a participant, as a void file is generated (with the required column headers) if it is not present in this folder when the interface is launched. - "Session_X_experimental_parameters.csv" : _ This file contains a line for each parameter of the experiment which is composed, first, of the label of the parameter and then of a list of values the parameter must be drawn from to make up a trial. Example : If the participant is expected to reach for a given object with a given hand, this file should be : hand ; right ; left object ; object_a ; object_b - "Session_X_recording_parameters.csv" : _ This file contains the parameters that define the recording process of your experiment. Example : camera resolution, fps... - "Session_X_instructions_languages.csv" : _ This file is used for the display of intructions to the participant about the task they need to perform for a given trial. It contains the translation of each parameter value in french and english, as well as their introduction sentences if needed (! code may need modifications to handle your intros). The translation may be provided either by modifying this file or via the interface when employed for the first time - "Session_X_processing_monitoring.csv" _ This file keeps track of the recording and (if applicable) of the processing phases of your experiment, for each participant

Mediapipe

See: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker

or download the model directly: https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/latest/hand_landmarker.task
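
As a minimal sketch following the MediaPipe documentation (not necessarily how i-GRIP wires the detector in), the downloaded hand_landmarker.task model can be loaded and run on a single image as follows:

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load the downloaded model file.
base_options = python.BaseOptions(model_asset_path="hand_landmarker.task")
options = vision.HandLandmarkerOptions(base_options=base_options, num_hands=2)
detector = vision.HandLandmarker.create_from_options(options)

# Detect hand landmarks in one RGB image.
image = mp.Image.create_from_file("frame.png")
result = detector.detect(image)
print(result.hand_landmarks)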

Using the interface

In your environment, run:

python experiment_interface.py

Mode Options

The experiment_interface.py script can be run in different modes, each serving a specific purpose:

  • record: Records new experimental data. It prompts the user for participant information and experimental parameters, then records the data based on the specified settings.
  • pre_process: Pre-processes the recorded data, allowing you to cut videos and depth maps. Needs to be modified according to your needs.
  • replay: Replays your pre-processed data with your own computer vision tools and logic. Needs to be modified according to your needs.
  • analysis: Analyzes the replayed data, generating reports, visualizations, or any other form of analysis required to interpret the results of the experiment. Needs to be modified according to your needs.

To run the script in a specific mode, use the following command:

python experiment_interface.py --mode <mode>

Replace <mode> with one of the available modes: record, pre_process, replay or analysis.
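
For example, to record a new session:

python experiment_interface.py --mode record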
