
VNet C++ inference #16

Open
ClementLagorce opened this issue Jun 12, 2020 · 3 comments


ClementLagorce commented Jun 12, 2020

Hello,

I managed to compile TensorFlow 1.8.0 with ITK 4.13 and protobuf 3.5.0, along with the project in the cxx folder of this repo. But when I run the main.cxx code with Visual Studio 2015, I get an error from tf_inference.cpp line 143, which prints this in my console:
Non-OK-Status: m_sess->Run({}, vNames, {}, &out) status: Invalid argument: must specify at least one target to fetch or execute.

I am not familiar with the TensorFlow C++ API or with the "session.run" code provided by TensorFlow, so does someone have an idea how to solve this?
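For context on what that error means (a sketch, assuming `m_sess` and `vNames` are the names used in tf_inference.cpp): `Session::Run` takes feeds, fetch tensor names, target node names, and an output vector. The message "must specify at least one target to fetch or execute" means both the fetch list and the target list were empty at runtime, i.e. `vNames` ended up empty — which typically happens when the variable names read from the checkpoint do not match the graph's node names. A minimal sketch of the two valid call shapes (the `"save/restore_all"` op name is an assumption, not taken from this repo):

```cpp
#include <iostream>
#include <string>
#include <vector>
#include "tensorflow/core/public/session.h"

// Sketch only: Session::Run(feeds, fetch_tensor_names, target_node_names, outputs)
void RunExamples(tensorflow::Session* m_sess,
                 const std::vector<std::string>& vNames) {
  std::vector<tensorflow::Tensor> out;

  // 1) Fetch tensor values by name ("node_name:0"). vNames must be
  //    non-empty, so check it before calling Run.
  if (vNames.empty()) {
    std::cerr << "No variable names matched the graph; check node names."
              << std::endl;
    return;
  }
  tensorflow::Status s = m_sess->Run({}, vNames, {}, &out);
  if (!s.ok()) {
    // Print the reason instead of crashing on a non-OK status.
    std::cerr << s.ToString() << std::endl;
  }

  // 2) Alternatively, execute ops purely for their side effects (e.g. a
  //    restore op) by passing them as targets; nothing is fetched.
  // tensorflow::Status s2 = m_sess->Run({}, {}, {"save/restore_all"}, nullptr);
}
```

Printing the matched names (or their count) before the `Run` call should confirm whether the checkpoint/graph name mismatch is the cause.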

Thanks !

@jackyko1991
Copy link
Owner

This is an old method of running the code in C++ with TensorFlow. Check that the node names in the checkpoint and graph file are identical.

Line 143 loads the weight values from the checkpoint file into name-specific TF operation nodes.

I recommend using the TensorRT API for inference purposes. Take a look at the documentation: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#using-metagraph-checkpoint (2.2.3 TF-TRT 1.x Workflow With MetaGraph And Checkpoint Files) to convert the ckpt and meta files to a .pb file.

After getting a frozen graph, convert it to UFF format: https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/python_api/workflows/tf_to_tensorrt.html

Then run inference with the TensorRT API: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMNIST
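For the last step, a rough sketch of building an engine from a .uff file with the (now legacy) TensorRT UFF C++ API of the sampleUffMNIST era — the tensor names `"input"`/`"output"` and the dimensions are placeholders, not VNet's real node names:

```cpp
#include <cstdio>
#include "NvInfer.h"
#include "NvUffParser.h"

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) override {
    if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
  }
};
static Logger gLogger;

// Sketch: parse a UFF file and build a CUDA engine (legacy TRT 4/5-style API).
nvinfer1::ICudaEngine* buildEngine(const char* uffFile) {
  auto parser = nvuffparser::createUffParser();
  // Placeholder input/output names and shape; substitute VNet's own.
  parser->registerInput("input", nvinfer1::DimsCHW(1, 64, 64),
                        nvuffparser::UffInputOrder::kNCHW);
  parser->registerOutput("output");

  auto builder = nvinfer1::createInferBuilder(gLogger);
  auto network = builder->createNetwork();
  parser->parse(uffFile, *network, nvinfer1::DataType::kFLOAT);

  builder->setMaxBatchSize(1);
  builder->setMaxWorkspaceSize(1 << 28);  // 256 MB of builder scratch space
  return builder->buildCudaEngine(*network);
}
```

The engine is then executed through an `IExecutionContext`, as shown in the sampleUffMNIST link above.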

@ClementLagorce
Author

Hello,

Thank you for the answer !

I tried to convert the frozen graph of your trained VNet implementation to UFF format, but it fails because there is no conversion function for some layers.
Could this come from the version of TensorFlow used for the training step?

Thanks !

@ClementLagorce
Author

OK, I ran your VNet on TF 1.8 and generated the pb graph with your meta_to_pb.py code, also with TF 1.8. Now line 143 of the C++ inference runs, but the code stops without printing any error when it reaches lines 414-415:

auto statusPred = m_sess->Run(input, { "predicted_label/prediction:0" }, {}, &predict);

The execution just stops after this without any error, so I don't really understand what is going on. I am sure it is possible to run this old-method C++ inference.
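One thing worth checking (a sketch, not a diagnosis): make sure the returned status from that call is actually inspected and printed, and that the output vector is non-empty before it is used. If the process dies inside `Run` itself with no message, the failure is likely in native code (e.g. a shape mismatch or out-of-memory on a 3D volume) rather than a reported TF error:

```cpp
#include <iostream>
#include <vector>
#include "tensorflow/core/public/session.h"

// Sketch around the call at lines 414-415 of tf_inference.cpp; "input" and
// the fetch name are taken from the snippet quoted above.
void RunPrediction(tensorflow::Session* m_sess,
                   const std::vector<std::pair<std::string, tensorflow::Tensor>>& input) {
  std::vector<tensorflow::Tensor> predict;
  tensorflow::Status statusPred =
      m_sess->Run(input, {"predicted_label/prediction:0"}, {}, &predict);
  if (!statusPred.ok()) {
    // A non-OK status carries the actual error text from TensorFlow.
    std::cerr << "Run failed: " << statusPred.ToString() << std::endl;
    return;
  }
  if (predict.empty()) {
    std::cerr << "Run succeeded but returned no tensors." << std::endl;
    return;
  }
  std::cout << "Prediction tensor shape: "
            << predict[0].shape().DebugString() << std::endl;
}
```

If nothing prints at all, running under a debugger (or checking Windows Event Viewer for a native crash) should reveal where the process terminates.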

If you have any idea, thank you in advance !
