failure of TensorRT 8.6 when running inferencing on GPU NVIDIA GeForce RTX 3070 with error [convBaseRunner.cpp::execute::295] Error Code 1: Cask (Cask convolution execution) #3658
Comments
Closing the issue because I made a silly mistake in the code and now it is working.
@sandeepgadhwal what was your silly mistake? asking for a...friend :)
Hi @patrickhulce, I had allocated the wrong size of memory. After allocating the correct amount of memory it worked fine. I cross-checked the number of bytes allocated.
Hi @sandeepgadhwal, I have encountered the same error. How did you find the wrongly sized memory allocation? The context.execute_v2() API doesn't print any useful log to help with debugging...
For anyone who runs into a similar issue: as @sandeepgadhwal mentioned, it is related to the memory buffers you created.
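For reference, here is a minimal sketch (not the author's script; the engine path model.engine is hypothetical) of how the exact byte size of each binding can be computed before allocating device memory, so the buffers handed to context.execute_v2() are large enough. It assumes PyCUDA and the classic binding API, which is deprecated in TensorRT 8.6 but still available:

```python
# Minimal sketch (not the author's script): compute the exact number of bytes
# each binding needs before allocating device memory. An undersized buffer is
# one way to end up with the "Cask convolution execution" error in execute_v2.
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:  # hypothetical engine path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

allocations = []
for i in range(engine.num_bindings):  # classic binding API (deprecated in 8.6, still works)
    shape = engine.get_binding_shape(i)            # static shapes assumed here
    dtype = np.dtype(trt.nptype(engine.get_binding_dtype(i)))
    nbytes = trt.volume(shape) * dtype.itemsize    # exact size in bytes
    print(engine.get_binding_name(i), tuple(shape), dtype, nbytes)
    allocations.append(cuda.mem_alloc(nbytes))

# context.execute_v2() expects one device pointer per binding, in binding
# order, each pointing at a buffer of at least `nbytes` for that binding.
```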
Description
I convert an ONNX model to a TensorRT engine, which works well. But when I load the engine through the Python API, it fails in context.execute_v2(self.allocations).
Environment
TensorRT Version: 8.6.1 (nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8_1.0-1_amd64.deb)
NVIDIA GPU: NVIDIA GeForce RTX 3070
NVIDIA Driver Version: 535.154.05
CUDA Version: 12.2
CUDNN Version:
Operating System:
Python Version (if applicable): 3.10
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
I converted the model using this command:
I tested the engine using this command:
All works fine.
But when I try to load the engine through the Python API, it does not work and results in the following traceback.
a.txt
Commands or scripts:
Script I am using for inference:
tensorrt_test_fps.py.txt
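The attached script is not reproduced here; as a point of comparison, the sketch below shows the call pattern context.execute_v2() expects (host-to-device copy, execution, device-to-host copy). It is only an illustration: the engine path model.engine is hypothetical, and it assumes PyCUDA, static shapes, and a single input and output.

```python
# Rough sketch of the execute_v2 call pattern, not the attached script.
# Assumes a single-input / single-output engine with static shapes and a
# hypothetical engine file "model.engine".
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Host and device buffers sized from the engine's own binding metadata.
host_bufs, dev_bufs = [], []
for i in range(engine.num_bindings):
    dtype = np.dtype(trt.nptype(engine.get_binding_dtype(i)))
    host = np.zeros(tuple(engine.get_binding_shape(i)), dtype=dtype)
    host_bufs.append(host)
    dev_bufs.append(cuda.mem_alloc(host.nbytes))

host_bufs[0][...] = 0.5                        # dummy input (binding 0 assumed to be the input)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])    # copy input to the device
context.execute_v2([int(d) for d in dev_bufs])
cuda.memcpy_dtoh(host_bufs[-1], dev_bufs[-1])  # copy output back (last binding assumed output)
print(host_bufs[-1].shape, host_bufs[-1].dtype)
```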
Have you tried the latest release?: yes
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): yes, it works fine with ONNX Runtime using the TensorRT Execution Provider.