A simple, high-performance ncnn implementation of deep head pose inference with optimized resource usage. This project is based on the deep-head-pose project by Nataniel Ruiz; the details are described in his CVPR Workshop paper. I use RetinaFace for the face detection step.
- The official ncnn documentation shows in detail how to build and use ncnn for an arbitrary platform.
- If you use my docker environment, the ncnn library is already built inside it at:
/home/ncnn_original/build
It contains the ncnn shared/static libraries and the tools for converting and quantizing ncnn models.
- The original deep-head-pose project uses the PyTorch framework, so we need to convert the PyTorch model to an ncnn model (see the conversion sketch below).
- The ncnn wiki describes this conversion in detail here. After converting from PyTorch to ONNX format, you have to use the ncnn build tools to convert onnx->ncnn. Inside my docker env, they are ready to use in
/home/ncnn_original/build/tools/onnx
- Note: Netron can visualize the network in an intuitive way, which makes it easy to get the input node and output node names that ncnn requires.
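- As an example, a minimal onnx->ncnn conversion sketch is shown below; the hopenet.onnx / hopenet.param / hopenet.bin file names are placeholders for illustration only, not files shipped with this repo (it can also help to simplify the ONNX graph with onnx-simplifier first):
cd /home/ncnn_original/build/tools/onnx
./onnx2ncnn hopenet.onnx hopenet.param hopenet.bin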
git clone https://github.com/docongminh/ncnn-deep-head-pose
cd ncnn-deep-head-pose
- Exec into the docker environment:
docker exec -it deep-head-pose bash
- cd to the mounted source directory:
cd /source
- cd to the ncnn build library:
cd /home/ncnn_original
- In the project root inside docker:
mkdir -p build && cd build
- CMake and build:
cmake .. && make
- Run the test:
./main
- Examples (a minimal C++ sketch follows below):
- create an extractor instance
- normalize the image
- resize the image
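The following is a minimal sketch of these steps using the ncnn C++ API. The model file names (hopenet.param / hopenet.bin), the input/output blob names ("data" / "output"), the 224x224 input size, and the normalization constants are assumptions for illustration only; check the real node names with Netron and the real preprocessing against the original training code.

```cpp
#include <opencv2/opencv.hpp>
#include "net.h" // ncnn

int main()
{
    // Load the converted head-pose model (file names are placeholders).
    ncnn::Net net;
    net.load_param("hopenet.param");
    net.load_model("hopenet.bin");

    // Read a cropped face image with OpenCV.
    cv::Mat bgr = cv::imread("face.jpg");

    // Resize to the network input size (224x224 assumed) and convert BGR->RGB.
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        bgr.data, ncnn::Mat::PIXEL_BGR2RGB, bgr.cols, bgr.rows, 224, 224);

    // Normalize: subtract mean and scale (ImageNet-style values assumed).
    const float mean_vals[3] = {0.485f * 255.f, 0.456f * 255.f, 0.406f * 255.f};
    const float norm_vals[3] = {1.f / (0.229f * 255.f), 1.f / (0.224f * 255.f), 1.f / (0.225f * 255.f)};
    in.substract_mean_normalize(mean_vals, norm_vals);

    // Create an extractor instance and run inference.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);      // input blob name is an assumption
    ncnn::Mat out;
    ex.extract("output", out); // output blob name is an assumption

    return 0;
}
```

Note that substract_mean_normalize applies (pixel - mean) * norm per channel, so the values above reproduce the usual ImageNet-style normalization if the original model was trained that way.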
- This project is a work in progress, so there are still coding and performance issues to iron out during development.
- A quantized model version will be updated ASAP.