In the wake of the COVID-19 pandemic, nearly everyone wears a facial mask, which makes it difficult for existing facial recognition systems on portable devices to identify a user's face. Using deep learning, our project resolves this problem with a system that identifies the user accurately and efficiently even when the face is partially covered by an accessory.
The model identifies the accessory the user is wearing. It uses the MobileNetV2 architecture initialized with ImageNet weights and is fine-tuned so that it classifies only the target accessories.
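The fine-tuning setup can be sketched as follows. This is a minimal illustration in Keras, not the exact configuration of `train_accessory_detector.py`; the input size, head layers, and class count are assumptions.

```python
# Sketch of the accessory detector: MobileNetV2 with ImageNet weights,
# plus a small classification head fine-tuned for the target accessories.
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import (AveragePooling2D, Dense, Dropout,
                                     Flatten, Input)
from tensorflow.keras.models import Model

NUM_CLASSES = 2  # e.g. "default" (no accessory) and "mask"

base = MobileNetV2(weights="imagenet", include_top=False,
                   input_tensor=Input(shape=(224, 224, 3)))

# Freeze the ImageNet backbone so only the new head is trained.
for layer in base.layers:
    layer.trainable = False

head = AveragePooling2D(pool_size=(7, 7))(base.output)
head = Flatten()(head)
head = Dense(128, activation="relu")(head)
head = Dropout(0.5)(head)
head = Dense(NUM_CLASSES, activation="softmax")(head)

model = Model(inputs=base.input, outputs=head)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Adding a new accessory class only requires raising `NUM_CLASSES` and providing a matching dataset directory.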
A pretrained ResNet-10 detector locates the face ROI, and FaceNet performs the facial recognition. FaceNet is trained on images of the registered users: a 128-D embedding is extracted from the face in each image and used to characterize each user, optimized with the triplet loss function. The model then identifies the person in the input image by comparing that image's embedding with the stored features of the registered users.
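The matching step can be pictured with a small sketch: compare an input 128-D embedding against per-user reference embeddings and accept the nearest one under a distance threshold. The embeddings and threshold below are illustrative stand-ins for the values produced by FaceNet and the trained recognizer.

```python
# Minimal sketch of embedding-based identification (illustrative values).
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
# Stand-ins for per-user mean embeddings built from registration images.
registered = {
    "alice": l2_normalize(rng.normal(size=128)),
    "bob":   l2_normalize(rng.normal(size=128)),
}

def identify(embedding, threshold=1.0):
    """Return the closest registered user, or None if no one is close enough."""
    embedding = l2_normalize(embedding)
    name, dist = min(
        ((n, np.linalg.norm(embedding - e)) for n, e in registered.items()),
        key=lambda t: t[1],
    )
    return name if dist < threshold else None

# A slightly perturbed copy of alice's embedding is still matched to alice.
probe = l2_normalize(registered["alice"] + 0.05 * rng.normal(size=128))
print(identify(probe))  # → alice
```

The triplet loss used in training pushes embeddings of the same person together and embeddings of different people apart, which is what makes this nearest-neighbor comparison meaningful.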
If the face in the input image is covered by an accessory, facial recognition accuracy drops sharply. The image composition process resolves this by reconstructing the input face as an uncovered face. The graphs below compare two situations: compositing with an image of the same user and with an image of a different user. They show that compositing with the same user's image yields higher accuracy.
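The composition idea can be sketched very roughly: replace the covered region of the detected face ROI with the corresponding region from a stored uncovered image before extracting the embedding. The half-face split below is an illustrative simplification, not the project's actual composition algorithm.

```python
# Toy sketch of image composition: swap in the lower half of a stored
# uncovered face for the masked lower half of the input face.
import numpy as np

def composite_face(masked_face, reference_face):
    """Replace the lower half of masked_face with reference_face's lower half."""
    assert masked_face.shape == reference_face.shape
    h = masked_face.shape[0]
    out = masked_face.copy()
    out[h // 2:] = reference_face[h // 2:]
    return out

# Toy 4x4 grayscale "faces": 0 marks the masked region.
masked = np.array([[9, 9, 9, 9],
                   [9, 9, 9, 9],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
reference = np.full((4, 4), 5)
print(composite_face(masked, reference))
```

This also illustrates why compositing with a different user's image hurts accuracy: the pasted region carries that other user's facial features into the embedding.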
Execution Code : python train_accessory_detector.py --dataset dataset_accessory
The directory dataset_accessory contains datasets of uncovered faces and faces with masks. You may add more images to each directory.
To add a new type of accessory to detect, create a new directory and add images of faces wearing it.
=> Output : accessory_detector.model
nn4.small2.v1.t7 is copyright Carnegie Mellon University and licensed under the Apache 2.0 License.
- extract_embeddings_default.py
Execution Code : python codes/default/extract_embeddings_default.py --dataset dataset --embeddings output/embeddings_default.pickle --detector face_detection_model --embedding-model openface_nn4.small2.v1.t7
- train_model_default.py
Execution Code : python codes/default/train_model_default.py --embeddings output/embeddings_default.pickle --recognizer output/recognizer_default.pickle --le output/le_default.pickle
Test the outputs of step 4 and step 4-1 with a camera.
Execution Code : python codes/mask/recognize_video_mask.py --detector face_detection_model --embedding-model openface_nn4.small2.v1.t7 --recognizer output/recognizer_mask.pickle --le output/le_mask.pickle
shape_predictor_68_face_landmarks.dat is licensed under the Boost Software License - Version 1.0.
nn4.small2.v1.t7 is copyright Carnegie Mellon University and licensed under the Apache 2.0 License.
Copy accessory_detector.model from ../accessory_detector/ to ../main_module/
Copy the contents of ../output_creator/output/ to ../main_module/output/
Execution Code : python zzampong.py --embedding-model openface_nn4.small2.v1.t7 --recognizer output/recognizer_default.pickle --le output/le_default.pickle --recognizer_mask output/recognizer_mask.pickle --le_mask output/le_mask.pickle
If the computed score is higher than the predefined threshold, the system prints the identified user's name and the device's status (lock/unlock), and unlocks the device.
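The unlock decision can be sketched as a simple comparison, assuming the recognizer yields a (name, score) pair; the threshold value and the returned labels here are illustrative, not the ones hard-coded in zzampong.py.

```python
# Minimal sketch of the final unlock decision (illustrative threshold).
THRESHOLD = 0.8  # hypothetical pre-decided confidence threshold

def decide(name, score, threshold=THRESHOLD):
    """Return (printed name, device status) for one recognition result."""
    if score > threshold:
        return name, "unlock"
    return "unknown", "lock"

print(decide("alice", 0.93))  # → ('alice', 'unlock')
print(decide("alice", 0.42))  # → ('unknown', 'lock')
```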