node.output().size() == 1 && "TensorRT does not support the indices output in MaxPool!" failure of TensorRT 8.6 when running ONNX model on GPU RTX 3090 #3761
Labels
triaged
Description
I tried to run the attached ONNX model through trtexec, but it fails with the error:
node.output().size() == 1 && "TensorRT does not support the indices output in MaxPool!"
This is because the model uses MaxPool to produce two outputs: the values and the indices. Checking the officially supported ONNX operators, I see that a MaxPool layer with an indices output is not supported even in TensorRT 10 EA. Is there any plan to support this, or a workaround for this issue?
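In case it helps others hitting this: if the indices output happens to be unused by the rest of the graph, it can be stripped before handing the model to trtexec. Below is a minimal sketch using onnx-graphsurgeon; the file names are placeholders, and the script assumes the indices tensor has no consumers and is not a graph output. If the indices feed a MaxUnpool or similar node, this approach does not apply and the error stands.

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))

for node in graph.nodes:
    # Find MaxPool nodes that expose the optional Indices output.
    if node.op == "MaxPool" and len(node.outputs) == 2:
        indices = node.outputs[1]
        # Only safe when no other node consumes the indices tensor
        # and it is not itself a graph output.
        if len(indices.outputs) == 0 and indices not in graph.outputs:
            node.outputs = node.outputs[:1]

graph.cleanup()
onnx.save(gs.export_onnx(graph), "model_no_indices.onnx")
```

After that, trtexec --onnx=model_no_indices.onnx should get past the assertion, again assuming nothing downstream actually needs the indices.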
Environment
TensorRT Version: 8.6
NVIDIA GPU: RTX 3090
NVIDIA Driver Version: 545
CUDA Version: 11.6
CUDNN Version: 8.8
Operating System: Ubuntu 20.04 64-bit
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link: https://drive.google.com/file/d/1ZvhQiG9NT92CbVK7oAznBR_rAlF5AzzF/view?usp=sharing
Steps To Reproduce
Commands or scripts:
trtexec --onnx=model.onnx
Have you tried the latest release?: N/A
Can this model run on other frameworks? N/A