This README explains how to run the files in this repository.
Install Python 3.7.6
- Install using the link for your platform :
  - For Windows users : https://www.python.org/ftp/python/3.7.6/python-3.7.6-amd64.exe
  - For Mac users : https://www.python.org/ftp/python/3.7.6/python-3.7.6-macosx10.9.pkg

Install Jupyter Notebook
- Run this command in Command Prompt :
  - pip install jupyter notebook

Install Mediapipe
- Run the following command in a Jupyter Notebook cell :
  - !pip install mediapipe --user

Install pydirectinput (for Hill Climb and Temple Run automation)
- Run the following command in a Jupyter Notebook cell :
  - !pip install pydirectinput

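As a rough sketch of how pydirectinput can turn a recognized gesture into a game keypress (the gesture names and key bindings below are illustrative assumptions, not the repository's actual mapping):

```python
# Sketch: map a recognized hand gesture to a game keypress.
# The gesture names and key bindings here are illustrative assumptions.
try:
    import pydirectinput  # sends DirectInput key events (what many Windows games expect)
except ImportError:
    pydirectinput = None  # lets the mapping logic run without the package installed

GESTURE_TO_KEY = {
    "open_palm": "right",   # e.g. accelerate
    "closed_fist": "left",  # e.g. brake
}

def key_for_gesture(gesture):
    """Return the key bound to a gesture, or None for an unknown gesture."""
    return GESTURE_TO_KEY.get(gesture)

def press_for_gesture(gesture):
    """Press the bound key (only when pydirectinput is available) and return it."""
    key = key_for_gesture(gesture)
    if key is not None and pydirectinput is not None:
        pydirectinput.press(key)
    return key
```

pydirectinput mirrors pyautogui's API (`press`, `keyDown`, `keyUp`) but sends DirectInput events, which many Windows games require.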
- Emotion_Detection.h5 : The pretrained model.
- haarcascade_frontalface_default.xml : The Haar cascade classifier used to detect faces.
- Test.py : Run this file to detect your expression with the pretrained model.
- Train.py : Use this file to train the model on your own dataset of faces.
To run this project with the pretrained model, run the test file using one of the following commands :
- Windows users : python test.py
- Linux users : python3 test.py
The expressions detected include :
- Happy 😀
- Neutral 🙂
- Surprise 😮
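The model's output is presumably one probability per expression; a minimal sketch of turning that output into one of the labels above (the label order and the use of an argmax here are assumptions — verify them against how Train.py builds the model):

```python
# Sketch: choose an expression label from the model's output probabilities.
# The label order is an assumption for illustration; verify it against Train.py.
LABELS = ["Happy", "Neutral", "Surprise"]

def predict_label(probabilities):
    """Return the label with the highest probability (an argmax over the output)."""
    best_index = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return LABELS[best_index]
```

In Test.py the probabilities would come from running the model on a face region cropped out by the Haar cascade.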
- CV Notes 1 : Part one of the Computer Vision lecture notes.
- Cv Notes 2 : Part two of the Computer Vision lecture notes.
- Hill Climb Automation : A Computer Vision project that uses both OpenCV and Mediapipe.
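A common building block in this kind of hand-gesture automation is counting raised fingers from Mediapipe's 21 hand landmarks; the sketch below assumes that approach and is not necessarily the repository's exact logic. In Mediapipe's hand model, indices 8/12/16/20 are the fingertips, y grows downward in image coordinates, so a fingertip above its PIP joint counts as raised:

```python
# Sketch: count raised fingers from Mediapipe hand landmarks (thumb omitted for brevity).
# Landmarks are (x, y) pairs in normalized image coordinates; y grows downward.
FINGERTIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips (Mediapipe indices)
PIP_JOINTS = [6, 10, 14, 18]   # the joint two below each fingertip

def count_raised_fingers(landmarks):
    """A finger counts as raised when its tip is above its PIP joint."""
    return sum(
        1
        for tip, pip in zip(FINGERTIPS, PIP_JOINTS)
        if landmarks[tip][1] < landmarks[pip][1]
    )
```

The resulting count can then be mapped to game keys (e.g. all fingers up means accelerate), which is where pydirectinput comes in.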
- OpenCv and Mediapipe : Code for anyone who wants to learn OpenCV and Mediapipe from scratch.
  - To run the code :
    - Open Jupyter Notebook
    - Run each cell using Shift + Enter
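As a first exercise with OpenCV, a minimal sketch of the classic starting point, reading webcam frames and converting them to grayscale (the camera index and quit key here are illustrative):

```python
# Sketch: the classic OpenCV starter -- grab webcam frames and show them in grayscale.
try:
    import cv2
except ImportError:
    cv2 = None  # OpenCV is only needed for the live demo function below

def bgr_to_gray(pixel):
    """The luma formula OpenCV uses for BGR -> grayscale (ITU-R BT.601 weights)."""
    b, g, r = pixel
    return 0.114 * b + 0.587 * g + 0.299 * r

def run_demo():
    """Live loop: show the default webcam in grayscale; press q to quit."""
    cap = cv2.VideoCapture(0)           # open the default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow("gray", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

`cv2.cvtColor` with `COLOR_BGR2GRAY` applies the weighted sum shown in `bgr_to_gray` to every pixel.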