Failure of TensorRT 8.6 on the PyTorch version of Faster-RCNN #3034
Comments
Looks like it triggers a TRT limitation.
See https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_if_conditional.html and https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work-with-conditionals
If the model has static shapes, have you tried constant folding? It may eliminate the error node.
Also, we have an old Faster-RCNN sample (deprecated); see https://github.com/NVIDIA/TensorRT/tree/release/8.4/samples/sampleFasterRCNN
Hi, thank you for your suggestion.
Yes, I'm working with static shapes (the ONNX model is exported with a fixed-size dummy input and trtexec is run with the --explicitBatch argument). Is that correct? I ran constant folding with polygraphy surgeon sanitize --fold-constants model_onnx.onnx -o folded.onnx, but the conversion of the sanitized model in TensorRT failed with the following error:
Any other suggestions?
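For reference, a minimal export sketch along the lines described above; the 3×800×800 dummy-input size, the file name, and the opset are assumptions rather than values confirmed in this thread:

```python
import torch
import torchvision

# torchvision's Faster-RCNN in eval mode; the weights don't affect the graph structure.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None).eval()

# A fixed-size dummy input so the exported ONNX graph has static shapes.
dummy = [torch.randn(3, 800, 800)]

torch.onnx.export(
    model,
    dummy,
    "model_onnx.onnx",
    opset_version=11,  # assumed; use the opset your TensorRT version supports
)
```

The exported file is then the input to the polygraphy surgeon sanitize and trtexec steps quoted above.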
I checked the ONNX you provided; there are a lot of redundant ops coming from the PyTorch source code, which makes ONNX folding hard to optimize and leads to the error you are seeing. I think there would be many other problems with this ONNX, so I would suggest using a new model that at least looks clean in ONNX; it will make the work much simpler.
I will try a simpler model or try to re-implement portions of the Faster-RCNN architecture provided by PyTorch.
Good luck :-D
Closing since there has been no activity for more than 3 weeks. Please reopen if you still have questions, thanks!
@micheleantonazzi Hi! Have you solved this problem? I encountered the same error as you, also reported by the Reshape operator.
Hi @zhurou603 |
Facing the same problem: the Reshape op does not support a shape of [-1], so the model cannot be converted to a TRT model.
Description
I'm trying to convert the PyTorch implementation of Faster-RCNN to TensorRT 8.6.
The procedure that I followed:
That procedure fails on the If node, which is generated from the MultiScaleRoIAlign class of torchvision.
The error is the following:
Steps to reproduce:
You can also download the ONNX model from here.
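As a quick check of where the conditional comes from, the exported graph can be scanned for If nodes with the onnx Python package (a minimal sketch; the model_onnx.onnx filename is an assumption):

```python
import onnx

model = onnx.load("model_onnx.onnx")

# List every If node in the graph; in this export they originate from
# torchvision's MultiScaleRoIAlign.
if_nodes = [n for n in model.graph.node if n.op_type == "If"]
print(f"Found {len(if_nodes)} If node(s)")
for node in if_nodes:
    print(node.name, "inputs:", list(node.input))
```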
Environment
TensorRT Version: 8.6
NVIDIA GPU: RTX 3050 Mobile
NVIDIA Driver Version: 530
CUDA Version: 11.7 or 12 (both tested)
CUDNN Version: latest
Operating System: Ubuntu 20.04
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 2.0.0
Relevant Files
Model link: link
Can this model run on other frameworks? For example, run the ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): YES
Could you help me solve this issue? Thank you so much in advance.
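For completeness, a minimal ONNXRuntime check equivalent to the polygraphy run above; the 3×800×800 input shape is an assumption carried over from the export sketch, so read the actual name and shape from the session if they differ:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model_onnx.onnx", providers=["CPUExecutionProvider"])

# Inspect the declared input so the dummy tensor matches the exported graph.
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)

# Assumed static shape from the export above; adjust if the model reports otherwise.
dummy = np.random.rand(3, 800, 800).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```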