
datnguyen-tien204/OpennPose-with-RNN


English | 简体中文 | Tiếng Việt

Contents

  1. OpenPose with Recurrent Neural Network
  2. Results
  3. Installation
  4. Quick Start Overview
  5. Structures
  6. Send Us Feedback!
  7. Thanks
  8. License
  9. Exception

Introduction

This project provides an implementation of anomaly detection using OpenPose + RNN. For simplicity, we refer to this model as OpenPoseRNN throughout the rest of this README. We also thank Dr. Minh Chuan-Pham and Dr. Quoc Viet-Hoang for supporting this project. Deep learning-based systems like this one are being applied in countries worldwide, including the UK, France, the USA, and various Asian countries such as Japan, South Korea, and China. Some universities, such as Tsinghua University, Peking University, and Stanford University, have used this kind of technology to combat cheating in examinations. It is deployed in collaboration with examination invigilators to achieve the highest effectiveness and ensure the utmost fairness in examinations.

Results

Summary of cheating recognition (using OpenPose + YOLOv3 + Recurrent Neural Network)


Testing with the 12422TN class on OpenPose

For Human Detection

For this part, we use YOLOv3 to detect humans in rooms. To evaluate this model we use the trainval35k set, which is split from the original MS-COCO 2017 dataset. Results are shown in Table 1.

Table 1. Comparison of human detection results against 3 other models

| Models | AP | AP@0.5 | AP@0.75 |
| --- | --- | --- | --- |
| Faster-RCNN | 21.9 | 42.7 | - |
| SSD300 | 25.2 | 43.1 | 26.1 |
| YOLOv2 | 21.6 | 44.0 | 19.2 |
| Ours | 25.3 | 44.5 | 25.9 |
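
The columns above follow the standard COCO detection protocol (AP averaged over IoU thresholds, plus AP at IoU 0.5 and 0.75). As a minimal sketch of how such numbers are typically produced, assuming pycocotools and placeholder file names (this is not the repository's own evaluation script):

```python
# Sketch of COCO-style box evaluation; both JSON file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val.json')      # ground-truth annotations
coco_dt = coco_gt.loadRes('yolov3_detections.json')   # detections in COCO result format

ev = COCOeval(coco_gt, coco_dt, iouType='bbox')
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP, AP@0.5, AP@0.75, and AP for small/medium/large objects
```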

For skeleton position localization

For skeleton position localization, we use OpenPose to detect human skeletons. To evaluate this model we used the MS-COCO 2015 datasets. Results are shown in Table 2.

Table 2. Evaluation results of skeleton position localization compared with 2 other models

| Models | AP@0.5 | AP@0.75 | AP (medium) | AP (large) |
| --- | --- | --- | --- | --- |
| AlphaPose | 89.2 | 79.1 | 69.0 | 78.6 |
| Detectron Mask-RCNN | 25.2 | 43.1 | 26.1 | 68.2 |
| Ours | 88.0 | 73.1 | 62.2 | 78.6 |
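
The keypoint metrics in Table 2 follow the analogous COCO keypoint protocol (OKS-based AP). Under the same assumptions as the sketch above, only the annotation file and the iouType change:

```python
# Keypoint variant of the evaluation sketch above; file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/person_keypoints_val.json')
coco_dt = coco_gt.loadRes('openpose_keypoints.json')

ev = COCOeval(coco_gt, coco_dt, iouType='keypoints')
ev.evaluate()
ev.accumulate()
ev.summarize()
```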

Besides, we also use FPS and GPU memory usage to evaluate the models. Results are shown in Table 3 for the multi-person setting and Table 4 for the single-person setting.

Table 3. Results in the multi-person setting

| Models | GPU Memory Usage | FPS (Frames Per Second) |
| --- | --- | --- |
| AlphaPose | 73.4% | 1.15 |
| Ours | 21.3% | 18.39 |

Table 4. Results in the single-person setting

| Models | GPU Memory Usage | FPS (Frames Per Second) |
| --- | --- | --- |
| AlphaPose | 60.3% | 23.71 |
| Ours | 21.3% | 18.77 |
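
FPS can be measured with a simple wall-clock loop, and GPU memory usage read from nvidia-smi. The sketch below makes this concrete; run_pipeline and frames are hypothetical stand-ins for the project's inference step and video source:

```python
# Sketch of FPS measurement; run_pipeline() and frames are hypothetical stand-ins.
import time
import subprocess

def measure_fps(frames, run_pipeline):
    start = time.time()
    n = 0
    for frame in frames:
        run_pipeline(frame)   # full detection + pose + recognition step
        n += 1
    return n / (time.time() - start)

# GPU memory as reported by nvidia-smi (MiB used, first GPU).
used = subprocess.check_output(
    ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,noheader,nounits']
).decode().splitlines()[0]
print(f'GPU memory used: {used} MiB')
```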

For recognition

We used a Recurrent Neural Network (RNN) to classify the actions of attendees in the room. To evaluate this part we use two metrics: the confusion matrix and the Receiver Operating Characteristic (ROC) curve. Results are shown in Figure 1 and Figure 2.


Fig 1. Results for all labels, shown as a confusion matrix


Fig 2. Results for all labels, shown as ROC curves
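
Both figures come from standard multi-class evaluation. As a minimal sketch (not the repository's own plotting code), scikit-learn can compute a confusion matrix directly and per-class ROC curves after binarizing the labels; the arrays below are placeholder data:

```python
# Sketch of the two metrics with scikit-learn; y_true/y_pred/y_score are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc
from sklearn.preprocessing import label_binarize

y_true = np.array([0, 1, 2, 1, 0])      # ground-truth action labels
y_pred = np.array([0, 1, 2, 2, 0])      # predicted labels
print(confusion_matrix(y_true, y_pred))  # rows = true class, cols = predicted class

# Per-class ROC from class-probability scores (random placeholder scores here).
y_score = np.random.rand(5, 3)
y_bin = label_binarize(y_true, classes=[0, 1, 2])
for c in range(3):
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
    print(f'class {c}: AUC = {auc(fpr, tpr):.2f}')
```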

Installation

With Python Base

Requirements: Python >= 3.7

  1. Install the dependency libraries
pip install -r requirements.txt
  2. Install the dependency files
  • Change directory to OpenPose/graph_models/VGG_origin; you can do this with the command cd OpenPose/graph_models/VGG_origin
  • Then run file_requirements.py:
python file_requirements.py
  3. Install the dependency files manually (optional)
  • If step 2 is not successful, you can download the weights from Google Drive
  • Move the downloaded graph_models folder to OpenPose\graph_models (the full install sequence is consolidated in the sketch below)
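
Taken together, steps 1 and 2 amount to the following shell sequence (paths exactly as given above; the Google Drive fallback of step 3 is manual):

```bash
# Consolidated install steps from this README.
pip install -r requirements.txt
cd OpenPose/graph_models/VGG_origin
python file_requirements.py
```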

With Anaconda

  1. Install the dependency libraries
  • You can load the dependency libraries from the openpose.yaml file.
  • You can find the openpose.yaml file in the Environment folder.
  2. Install the dependency files
  • Change directory to OpenPose/graph_models/VGG_origin; you can do this with the command cd OpenPose/graph_models/VGG_origin
  • Then run file_requirements.py:
python file_requirements.py
  3. Install the dependency files manually (optional)
  • If step 2 is not successful, you can download the weights from Google Drive
  • Move the downloaded graph_models folder to OpenPose\graph_models (see the consolidated sketch after this list)
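
The equivalent Anaconda sequence is sketched below; the environment name openpose is an assumption derived from the file name:

```bash
# Sketch; the environment name "openpose" is an assumption, check openpose.yaml.
conda env create -f Environment/openpose.yaml
conda activate openpose
cd OpenPose/graph_models/VGG_origin
python file_requirements.py
```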

Quick Start Overview

With Python Base and Anaconda Environments

  1. Quick run
  • You can run main.py to start this project.
  2. [Optional] To train the model, use create_data.py to export data points, move them to the folder Action\trainning, and train with the Jupyter notebook train.ipynb.
  3. [Optional] Using VGG_origin can be slow; if you don't have a GPU, you can switch to the mobilenet model for faster prediction.
  • To change the model to mobilenet, open main.py in the main folder.
  • In line 14, change estimator = load_pretrain_model('VGG_origin') to estimator = load_pretrain_model('mobilenet_thin')
  4. [Optional] To use your own weights, edit main.py: in line 15, change action_classifier = load_action_premodel('open_pose2\Action\framewise_recognition_under_scene.h5') to action_classifier = load_action_premodel('path_to_your_weights'), as shown in the sketch below.
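
Put together, the edits in steps 3 and 4 touch lines 14 and 15 of main.py. A sketch of the edited lines, using the loader functions named above ('path_to_your_weights' is a placeholder):

```python
# Sketch of main.py lines 14-15 after the optional edits above.
# load_pretrain_model and load_action_premodel are the helpers this README names.
estimator = load_pretrain_model('mobilenet_thin')                 # was 'VGG_origin'
action_classifier = load_action_premodel('path_to_your_weights')  # your own .h5 file
```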

Structures

Structures for all models


Send Us Feedback!

Our project is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you...

  1. Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPoseRNN.
  2. Want to add/show some cool functionality/demo/project made on top of Students Tracking. We can add a link to your project in your issue.

Thanks

We thank Dr. Minh Chuan-Pham for his guidance throughout the creation of this project, as well as the evaluation board, including Dr. Quoc Viet-Hoang, who helped us improve the results and provided feedback on this project.

License

This project is freely available for non-commercial use. If you find it useful, please give it a star. Thank you for using it.

Exception

This project was created by a former student of Tran Quang Khai High School, Hung Yen. Students of Tran Quang Khai High School and all other high school students in Vietnam are prohibited from copying, quoting, or submitting this project as a scientific research paper, or presenting it as their own project, under any circumstances.
