This work focuses on offline gesture recognition as detailed in the referenced paper. Our objective is to benchmark several lightweight models, such as ShuffleNet, and compare their performance against the heavier ResNeXt model featured in the study.
In this work we use the EgoGesture dataset, a large-scale multimodal dataset for egocentric hand gesture recognition. It provides a test bed both for gesture classification on segmented clips and for gesture detection in continuous streams.
- MobileNet: optimized for mobile and embedded devices; uses depthwise separable convolutions for efficient, high-accuracy image recognition
- ShuffleNet: reduces computational cost with channel shuffling across grouped convolutions; well suited to mobile and resource-constrained applications
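To illustrate the channel-shuffle idea ShuffleNet relies on: after a grouped convolution, channels are reordered so that information mixes across groups. A minimal NumPy sketch of that reordering (shapes and group count are illustrative, not taken from the benchmarked models):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Rearrange channels of a (N, C, H, W) tensor so that each
    output group contains channels from every input group.
    C must be divisible by `groups`."""
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the group axis with the per-group axis
    return x.reshape(n, c, h, w)

# Example: 6 channels in 3 groups of 2 are interleaved across groups.
x = np.arange(6).reshape(1, 6, 1, 1)
shuffled = channel_shuffle(x, groups=3)
print(shuffled.ravel().tolist())  # [0, 2, 4, 1, 3, 5]
```

The operation is a pure reshape/transpose, so it adds no parameters and negligible compute, which is why it is attractive on resource-constrained devices.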
We used models pretrained on the Jester dataset from this paper: link; the authors published their code on GitHub (link).
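A common way to reuse such pretrained weights is to keep the backbone and swap only the classification head for the target dataset. The sketch below assumes PyTorch, a 512-dimensional feature output, and the usual class counts (Jester: 27, EgoGesture: 83); the `nn.Sequential` stand-in is hypothetical and does not reflect the actual repository's API:

```python
import torch
import torch.nn as nn

NUM_JESTER_CLASSES = 27      # classes the checkpoint was trained on
NUM_EGOGESTURE_CLASSES = 83  # classes in the target dataset

# Stand-in for a backbone pretrained on Jester (illustrative only).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(512, NUM_JESTER_CLASSES),
)

# Freeze pretrained weights, then replace the head so only it is trained.
for p in model.parameters():
    p.requires_grad = False
model[-1] = nn.Linear(512, NUM_EGOGESTURE_CLASSES)

out = model(torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 83])
```

Freezing the backbone is optional; fine-tuning all layers at a lower learning rate is the other standard choice.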
The results of our work are shown in the figure below.
If you have any questions, suggestions, or feedback, we'd love to hear from you! Here's how you can reach out:
- Diego Barreto: [email protected]
- Matteo Zacchino: [email protected]