English | 简体中文 | Tiếng việt
- OpenPose with Recurrent Neural Network
- Results
- Installation
- Quick Start Overview
- Structures
- Send Us Feedback!
- Thanks
- License
- Exception
This project provides an implementation of anomaly detection based on OpenPose + RNN. For simplicity, we refer to this model as OpenPoseRNN throughout the rest of this readme. We also thank Dr. Minh Chuan-Pham and Dr. Quoc Viet-Hoang for supporting this project. Deep learning-based systems of this kind are being applied in developed countries worldwide, such as the UK, France, and the USA, as well as in Asian countries such as Japan, South Korea, and China. Some universities, such as Tsinghua University, Peking University, and Stanford University, have used this kind of technology to combat cheating in examinations. It is being implemented in collaboration with examination invigilators to achieve the highest effectiveness and ensure the utmost fairness in examinations.
Testing OpenPose with the 12422TN class
For this part, we use YOLOv3 to detect people in rooms. To evaluate this model we use the trainval35k set, which is split from the original MS-COCO 2017 dataset. Results are shown in Table 1; a sketch of how such a COCO-style evaluation can be run is given after the table.
Table 1. Comparison of human detection results in images with 3 other models
Models | AP | AP@0.5 | AP@0.75 |
---|---|---|---|
Faster-RCNN | 21.9 | 42.7 | - |
SSD300 | 25.2 | 43.1 | 26.1 |
YOLOv2 | 21.6 | 44.0 | 19.2 |
Ours | 25.3 | 44.5 | 25.9 |
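The evaluation script itself is not part of this readme, so the snippet below is only a minimal sketch of how a COCO-style person-detection evaluation could be run with pycocotools; the annotation and detection file paths are placeholders, not files from this repository.

```python
# Hypothetical COCO-style evaluation of person detections (category id 1).
# Assumes detections were exported in the standard COCO results JSON format.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_trainval35k.json")    # placeholder path
coco_dt = coco_gt.loadRes("detections/yolov3_person.json")  # placeholder path

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.params.catIds = [1]  # restrict the evaluation to the 'person' category
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP@0.5, AP@0.75 and AP small/medium/large
```

The same evaluator with `iouType="keypoints"` covers the skeleton-localization metrics reported in Table 2.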
For skeleton position localization, we use OpenPose to detect human skeletons. To evaluate this model we used the MS-COCO 2015 dataset. Results are shown in Table 2.
Table 2. Evaluation results of skeleton position localization compared with 2 other models.
Models | AP@0.5 | AP@0.75 | AP medium | AP large |
---|---|---|---|---|
AlphaPose | 89.2 | 79.1 | 69.0 | 78.6 |
Detectron Mask-RCNN | 25.2 | 43.1 | 26.1 | 68.2 |
Ours | 88.0 | 73.1 | 62.2 | 78.6 |
Besides accuracy, we also use FPS and GPU memory usage to evaluate the models. Results are shown in Table 3 for multi-person scenes and Table 4 for single-person scenes; a rough sketch of one way to measure these quantities follows Table 4.
Table 3. Results for multi-person scenes
Models | GPU memory usage | FPS (frames per second) |
---|---|---|
AlphaPose | 73.4% | 1.15 |
Ours | 21.3% | 18.39 |
Table 4. Results for single-person scenes
Models | GPU memory usage | FPS (frames per second) |
---|---|---|
AlphaPose | 60.3% | 23.71 |
Ours | 21.3% | 18.77 |
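The readme does not state how FPS and GPU memory usage were measured, so the following is only a rough sketch of one possible measurement loop, assuming the NVIDIA `pynvml` bindings and an OpenCV video loop; `process_frame` is a placeholder for the actual OpenPose + RNN pipeline.

```python
# Hypothetical measurement of FPS and peak GPU memory usage over a video.
import time
import cv2
import pynvml

def benchmark(video_path, process_frame):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    cap = cv2.VideoCapture(video_path)
    frames, peak_used = 0, 0
    start = time.perf_counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        process_frame(frame)  # placeholder for pose estimation + action recognition
        frames += 1
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        peak_used = max(peak_used, mem.used)
    elapsed = time.perf_counter() - start

    cap.release()
    total = pynvml.nvmlDeviceGetMemoryInfo(handle).total
    pynvml.nvmlShutdown()
    return frames / elapsed, 100.0 * peak_used / total  # FPS, peak GPU memory in %
```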
We use a Recurrent Neural Network (RNN) to classify the actions of attendees in the room. To evaluate this part we use two metrics: the confusion matrix and the Receiver Operating Characteristic (ROC) curve. Results are shown in Figure 1 and Figure 2; a minimal sketch of how these metrics can be computed follows the figure captions.
Fig 1. Confusion matrix for all labels
Fig 2. ROC curves for all labels
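The evaluation code is not included in this readme, so the following is only a minimal sketch of how the confusion matrix and per-class ROC curves could be computed with scikit-learn; the variable names (`y_true`, `y_score`, `class_names`) and the one-vs-rest binarization are assumptions, not the project's actual code.

```python
# Hypothetical evaluation sketch: confusion matrix + one-vs-rest ROC curves
# for a multi-class action classifier. y_true holds integer labels,
# y_score holds the RNN's per-class probabilities (n_samples x n_classes).
import numpy as np
from sklearn.metrics import auc, confusion_matrix, roc_curve
from sklearn.preprocessing import label_binarize

def evaluate_actions(y_true, y_score, class_names):
    y_pred = np.argmax(y_score, axis=1)

    # Confusion matrix: rows = true labels, columns = predicted labels.
    cm = confusion_matrix(y_true, y_pred, labels=list(range(len(class_names))))

    # One-vs-rest ROC curve and AUC for each action class.
    y_true_bin = label_binarize(y_true, classes=list(range(len(class_names))))
    roc = {}
    for i, name in enumerate(class_names):
        fpr, tpr, _ = roc_curve(y_true_bin[:, i], y_score[:, i])
        roc[name] = (fpr, tpr, auc(fpr, tpr))
    return cm, roc
```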
Requirements: Python >= 3.7
1. Install the dependency libraries:
   `pip install -r requirements.txt`
2. Install the dependency files:
   - Change directory to `OpenPose/graph_models/VGG_origin`; you can change directory with the command `cd OpenPose/graph_models/VGG_origin`.
   - Then run `file_requirements.py`:
     `python file_requirements.py`
3. Install the dependency files another way (optional):
   - If step 2 was not successful, you can download the weights from Google Drive.
   - Move the downloaded `graph_models` folder to `OpenPose\graph_models`.
Alternatively, you can set up the environment with the `openpose.yaml` file:
1. Install the dependency libraries:
   - You can install the dependency libraries from the `openpose.yaml` file.
   - You can find the `openpose.yaml` file in the `Environment` folder.
2. Install the dependency files:
   - Change directory to `OpenPose/graph_models/VGG_origin`; you can change directory with the command `cd OpenPose/graph_models/VGG_origin`.
   - Then run `file_requirements.py`:
     `python file_requirements.py`
3. Install the dependency files another way (optional):
   - If step 2 was not successful, you can download the weights from Google Drive.
   - Move the downloaded `graph_models` folder to `OpenPose\graph_models`.
Quick Run
1. You can run `main.py` to start this project.
2. [Optional] To train the model, use `create_data.py` to export data points, move them to the `Action\trainning` folder, and train with the notebook file `train.ipnb`.
3. [Optional] Using `VGG_origin` can be slow; if you don't have a GPU, you can switch the model to `mobilenet` to predict faster.
   - To switch the model to `mobilenet`, open the file `main.py` in the main folder.
   - In line 14, change `estimator = load_pretrain_model('VGG_origin')` to `estimator = load_pretrain_model('mobilenet_thin')`.
4. [Optional] To use your own weights, change line 15 of `main.py` from `action_classifier = load_action_premodel('open_pose2\Action\framewise_recognition_under_scene.h5')` to `action_classifier = load_action_premodel('path_to_your_weights')`. A hedged sketch of how these two lines might look is shown after this list.
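For reference, here is a minimal sketch of how lines 14-15 of `main.py` might look after steps 3 and 4; the import path is an assumption made for illustration, not a verbatim excerpt of the repository, so adjust it to the actual module layout.

```python
# Hypothetical excerpt of the two configurable lines in main.py (lines 14-15).
# The import path below is an assumption; adjust it to the repository layout.
from utils import load_pretrain_model, load_action_premodel  # assumed module

# Line 14: pose estimator. 'VGG_origin' is more accurate; 'mobilenet_thin'
# is faster and is the suggested choice when no GPU is available.
estimator = load_pretrain_model('mobilenet_thin')  # default: 'VGG_origin'

# Line 15: action classifier. Point this at your own .h5 weights to swap models.
# Default: 'open_pose2\\Action\\framewise_recognition_under_scene.h5'
action_classifier = load_action_premodel('path_to_your_weights')
```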
Structures for all models
Our project is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you...
- Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPoseRNN.
- Want to add/share some cool functionality/demo/project built on top of Students Tracking. We can add a link to your project in your issue.
We thank Dr. Minh Chuan-Pham for his guidance throughout the creation of this project, as well as the evaluation board, including Dr. Quoc Viet-Hoang, who helped us improve the results and provided feedback on this project.
This project is freely available for non-commercial use. If it is useful to you, please give it a star. Thanks for using it.
This project was created by a former student of Tran Quang Khai High School, Hung Yen. Students of Tran Quang Khai High School and all other high school students in Vietnam are prohibited from copying or quoting this project, submitting it as a scientific research paper, or presenting it as their own project under any circumstances.