
TensorRT EP failed to create engine from network. #982

Open
xlg-go opened this issue Jul 23, 2024 · 1 comment

Comments

@xlg-go

xlg-go commented Jul 23, 2024

Description

I have a YOLOv8 detection model deployed in .NET with TensorRT as the execution provider. When I run inference, I get the error below. Previously, with TensorRT 8, the same model ran without issues.

microsoft/onnxruntime#21415

ErrorCode:ShapeInferenceNotRegistered] Non-zero status code returned while running TRTKernel_graph_torch_jit_4528351051880633562_0 node. Name:'TensorrtExecutionProvider_TRTKernel_graph_torch_jit_4528351051880633562_0_0' Status Message: TensorRT EP failed to create engine from network.

Environment

TensorRT Version: 10.2.0.19
ONNX-TensorRT Version / Branch: https://github.com/onnx/onnx-tensorrt/archive/06adf4461ac84035bee658c6cf5df39f7ab6071d.zip
GPU Type: NVIDIA Quadro P6000
Nvidia Driver Version: 550.100
CUDA Version: 12.5.1
CUDNN Version: 9.2.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.9
TensorFlow + TF2ONNX Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

GPU

NVIDIA Quadro P6000 (Pascal)
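This GPU detail is likely the root cause: the Quadro P6000 is a Pascal card (compute capability 6.1), and TensorRT 10 no longer supports Pascal GPUs; the TensorRT 8.x line was the last to build engines for them, which would explain why the model worked under TensorRT 8 but fails to create an engine under 10.2. A minimal sketch of that hardware cutoff (the helper name is hypothetical, and the SM 7.0 minimum assumes Volta-and-newer support in TensorRT 10 per NVIDIA's support matrix):

```python
def supported_by_trt10(compute_cap: str) -> bool:
    """Rough check of whether a GPU's compute capability (e.g. "6.1")
    meets TensorRT 10's hardware floor. Assumes SM 7.0 (Volta) as the
    minimum; Pascal (SM 6.x) was dropped after the TensorRT 8.x line."""
    major, minor = (int(part) for part in compute_cap.split("."))
    return (major, minor) >= (7, 0)

print(supported_by_trt10("6.1"))  # Quadro P6000 (Pascal) -> False
print(supported_by_trt10("8.6"))  # e.g. an Ampere RTX card -> True
```

On such a GPU the TensorRT EP cannot build an engine regardless of ONNX Runtime settings; staying on TensorRT 8.x or falling back to the CUDA execution provider are the usual workarounds.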

Relevant Files

Steps To Reproduce

@xlg-go
Author

xlg-go commented Jul 24, 2024

NVIDIA/TensorRT#3826
