A Facial Expression Recognition model trained on multiple datasets. So far:
Create a conda environment:

```shell
conda create -n faceexpr python=3.10
conda activate faceexpr
```

and then install the requirements:

```shell
pip install -r requirements.txt
```
- For the CK+ dataset
  - Create the `dataset` folder: `mkdir dataset`
  - Unzip the `CK+.zip` dataset and put the `CK+` folder in the `dataset` folder
  - Modify the config file `ckplus.yml`, placed in the `./configs/` directory, if needed
  - Training: `python train_ck.py`
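The config files under `./configs/` are YAML. As a minimal sketch of how a train script could read one, assuming hypothetical keys (`batch_size`, `lr`, `epochs` are illustrative, not the repo's actual schema):

```python
# Hypothetical sketch: loading a YAML config such as ./configs/ckplus.yml.
# The keys below are assumptions for illustration only.
import yaml  # PyYAML

example = """
batch_size: 32
lr: 0.001
epochs: 50
"""

# In the real scripts this would be yaml.safe_load(open("./configs/ckplus.yml"))
cfg = yaml.safe_load(example)
print(cfg["batch_size"], cfg["lr"], cfg["epochs"])
```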
- For the Emotic dataset
  - Create the `dataset` folder: `mkdir dataset`
  - Unzip `emotic.zip` in the `dataset` folder; you will get a folder named `Emotic`, which contains an `emotic` folder with 4 folders inside
  - Unzip `Annotations.zip` in the `Emotic` folder; you will get a folder named `Annotations`
  - You will then have the following structure:

    ```
    ├── ...
    │   ├── emotic
    │   |   ├── ade20k
    │   |   ├── emodb_small
    │   |   ├── framesdb
    │   |   ├── mscoco
    │   ├── Annotations
    │   |   ├── Annotations.mat
    ```
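After unzipping, a quick sanity check of the layout above can catch misplaced folders early. A small sketch (the helper name is ours, not part of the repo):

```python
# Check that the expected Emotic sub-paths from the tree above exist.
from pathlib import Path

def check_emotic_layout(root):
    """Return the expected sub-paths that are missing under `root`."""
    expected = [
        "emotic/ade20k",
        "emotic/emodb_small",
        "emotic/framesdb",
        "emotic/mscoco",
        "Annotations/Annotations.mat",
    ]
    return [p for p in expected if not (Path(root) / p).exists()]

# Example: check_emotic_layout("./dataset/Emotic") should return []
# when everything is in place.
```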
  - To convert the annotations from .mat objects to CSV files and preprocess the data:

    `python ./codes/mat2py.py --data_dir ./dataset/Emotic/`

    See this repo for more info.
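The conversion step boils down to flattening annotation records into CSV rows. An illustrative sketch only: in the real script the records would presumably come from `scipy.io.loadmat("Annotations/Annotations.mat")`, and the field names below are assumptions, not the actual Emotic schema:

```python
# Illustrative .mat -> CSV flattening. A hand-made record stands in for
# the output of scipy.io.loadmat; the field names are assumptions.
import csv

records = [
    {"filename": "img_0001.jpg", "folder": "mscoco", "emotion": "happiness"},
    {"filename": "img_0002.jpg", "folder": "framesdb", "emotion": "sadness"},
]

with open("annotations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["filename", "folder", "emotion"])
    writer.writeheader()
    writer.writerows(records)
```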
  - Modify the config file `emotic.yml`, placed in the `./configs/` directory, if needed
  - Training: `python train_emotic.py`
All logs and weights will be stored in the `logs/` folder (look at the train scripts' args for more detail).
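One common way to keep runs apart under `logs/` is a timestamped folder per run; a hedged sketch (the exact naming the train scripts use may differ):

```python
# Hypothetical per-run log directory under logs/; the real scripts'
# naming scheme may differ.
import os
import time

def make_run_dir(base="logs"):
    """Create and return a timestamped run directory under `base`."""
    run = os.path.join(base, time.strftime("%Y%m%d-%H%M%S"))
    os.makedirs(run, exist_ok=True)
    return run
```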
to be added soon ...
`motion_encoder.py` in the `codes` folder contains the neural network. It still needs modification and is not yet compatible with all datasets.