
AttributeError: module 'tensorflow.contrib.tensorrt' has no attribute 'calib_graph_to_infer_graph' when trying to Calibrate #106

Closed
sehHeiden opened this issue Jul 29, 2019 · 7 comments


@sehHeiden

Hi,

I am trying out TensorRT. It works fine as long as I don't try to get INT8 to work. I have two problems:

1. The first is similar to issue #56.
2. The second occurs when trying to calibrate a ResNet v1.5 or SSD-ResNet-1.5. For reproducibility, I used the run_all.sh script from this site.

Machine:
Ubuntu 18.04, Intel(R) Core(TM) i7-6850K, NVIDIA 1070 (also tried with a Quadro 5000)
CUDA: 10.0
cuDNN: 7
TensorFlow: v1.13.1, compiled from source against TensorRT 5.1.2

When running run_all.sh I get the error:

    Traceback (most recent call last):
      File "tftrt_sample.py", line 306, in <module>
        int8Graph=getINT8InferenceGraph(calibGraph)
      File "tftrt_sample.py", line 137, in getINT8InferenceGraph
        trt_graph=trt.calib_graph_to_infer_graph(calibGraph)
    AttributeError: module 'tensorflow.contrib.tensorrt' has no attribute 'calib_graph_to_infer_graph'

When I circumvent line 137 in my own code and don't calibrate the net, the conversion still works. At least I don't get any errors, but as described in issue #56 the INT8 execution is almost as slow as the non-converted FP32 model and much slower than converted FP32 and FP16.

I tried it with the Quadro 5000 card for both ResNet and SSD-ResNet, with the same outcome. With TensorFlow v1.14 compiled against TensorRT 5.1.5 I also get the same outcome for both problems (tested with ResNet).

With kind regards
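
For context, the failing call belongs to the old tensorflow.contrib.tensorrt INT8 flow, which looks roughly like this (a sketch, assuming a frozen GraphDef and a list of output node names; frozen_graph, output_names, and the calibration loop are placeholders, not the actual script code):

    import tensorflow.contrib.tensorrt as trt

    # Step 1: build a calibration graph with INT8 precision.
    calib_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,  # frozen GraphDef of the model (placeholder)
        outputs=output_names,          # output node names (placeholder)
        max_batch_size=8,
        precision_mode='INT8')

    # Step 2: run inference on representative calibration data with calib_graph.
    # ...

    # Step 3: convert the calibrated graph into the final inference graph.
    # This is the call that fails on line 137 of tftrt_sample.py.
    trt_graph = trt.calib_graph_to_infer_graph(calib_graph)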

@zhangqijun

Just got the answer: TF 1.14 changed the API.

    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    converter = trt.TrtGraphConverter(input_saved_model_dir="")  # path to the input SavedModel
    converter.convert()
    converter.save("")  # save() takes the output SavedModel directory

@pooyadavoodi

As @zhangqijun said, TF 1.14 has a new API for calibration.
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#post-train

Closing. Please reopen if the problem persists.
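
For reference, the post-training INT8 flow in that guide looks roughly like this under TF 1.14 (a sketch; the tensor names 'input:0'/'output:0', next_batch(), and the directory variables are placeholders):

    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    converter = trt.TrtGraphConverter(
        input_saved_model_dir=input_saved_model_dir,  # placeholder path
        precision_mode=trt.TrtPrecisionMode.INT8,
        use_calibration=True)
    converter.convert()

    # Feed representative data so TF-TRT can collect INT8 calibration statistics.
    converter.calibrate(
        fetch_names=['output:0'],                        # placeholder output tensor
        num_runs=10,
        feed_dict_fn=lambda: {'input:0': next_batch()})  # placeholder input feed

    converter.save(output_saved_model_dir)               # placeholder path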

@mminervini

I find myself in a similar situation as @Meresmata...
I have a workflow to go from a Keras model to a TF-TRT model for deployment, based on freezing the graph and finally using create_inference_graph() to obtain the TRT graph. This works for FP32 and FP16. For INT8 conversion, however, the resulting model is slower than FP32 because (as far as I understand) I am missing the calibration step; yet I cannot find the function calib_graph_to_infer_graph() that is used in many blog posts and example codes (was it removed in v1.14?).
I tried to migrate to the approach suggested by @zhangqijun, but I am stuck at converter.convert(), which fails with this exception:

    InvalidArgumentError: Failed to import metagraph, check error log for more info.

I tried several solutions that I found on the Internet; the approach that got me the furthest seems to be the one in this post on the NVIDIA developer forum.
When I instantiate the TrtGraphConverter, I provide the path to the model directory:

    converter = trt_convert.TrtGraphConverter(input_saved_model_dir=export_path)

However, this results in the InvalidArgumentError when I call converter.convert().
I also tried to load the model with ParseFromString():

    frozen_graph = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(os.path.join(export_path, 'model_frozen.pb'), 'rb') as f:
        frozen_graph.ParseFromString(f.read())
    # frozen_graph is already a GraphDef, so pass it directly (GraphDef has no as_graph_def()).
    converter = trt_convert.TrtGraphConverter(input_graph_def=frozen_graph)
    trt_graph = converter.convert()

but I ended up with the same error.

  1. It's not clear to me how I can check the error log mentioned in the exception.
  2. How am I supposed to save/load a Keras model in a suitable way for TrtGraphConverter?
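
Not from this thread, but one common way to handle question 2 under TF 1.14 is to export the Keras model as a SavedModel and point TrtGraphConverter at that directory (a sketch; 'my_model.h5' and the paths are placeholders):

    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert

    model = tf.keras.models.load_model('my_model.h5')  # placeholder model file
    export_path = './saved_model'                      # placeholder directory

    # Export through the Keras session so graph and weights land in one SavedModel.
    sess = tf.keras.backend.get_session()
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input': model.input},
        outputs={'output': model.output})

    converter = trt_convert.TrtGraphConverter(input_saved_model_dir=export_path)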

@cainiaoxy

> I find myself in a similar situation as @Meresmata... How am I supposed to save/load a Keras model in a suitable way for TrtGraphConverter?

I think I have met the same issue in TF 1.14. Did you fix the problem?

@liuxingbin

Any progress? I met the same problem:

    ValueError: Failed to import metagraph, check error log for more info.

@DEKHTIARJonathan (Collaborator)

@liuxingbin please update to the latest TF2. TF1 is more than two years old; even if there was a bug, we couldn't release a fix.
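
For anyone migrating, the TF2 equivalent is TrtGraphConverterV2; roughly (a sketch with placeholder paths; INT8 additionally needs a calibration_input_fn passed to convert()):

    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    converter = trt.TrtGraphConverterV2(input_saved_model_dir='./saved_model')  # placeholder
    converter.convert()                  # for INT8, pass calibration_input_fn here
    converter.save('./saved_model_trt')  # placeholder output directory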

@liuxingbin

> @liuxingbin please update to the latest TF2. TF1 is more than two years old; even if there was a bug, we couldn't release a fix.

Actually, I did. My TF is 2.4. Please refer to this question:
#325
