Triton has a ragged batching feature: https://github.com/triton-inference-server/server/blob/main/docs/user_guide/ragged_batching.md
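For reference, a minimal sketch of what enabling it looks like in a model's `config.pbtxt`, adapted from the doc above (the names `INPUT` and `INDEX` are placeholders, not from an existing model):

```protobuf
max_batch_size: 16
input [
  {
    name: "INPUT"
    data_type: TYPE_FP32
    dims: [ -1 ]
    # Requests are concatenated along the element dimension instead of padded.
    allow_ragged_batch: true
  }
]
# A batch input tells the model where each request's elements start and end.
batch_input [
  {
    kind: BATCH_ACCUMULATED_ELEMENT_COUNT
    target_name: "INDEX"
    data_type: TYPE_FP32
    source_input: "INPUT"
  }
]
```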
FasterTransformer also appears to support a zero-padding mode for BERT inference (https://github.com/NVIDIA/FasterTransformer/blob/main/docs/bert_guide.md#model-architecture), based on the initial implementation in https://github.com/bytedance/effective_transformer.
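The zero-padding trick itself is simple to illustrate; here is a minimal NumPy sketch of the pack/unpack step (shapes and names are illustrative only, not FasterTransformer's API):

```python
import numpy as np

# Hypothetical padded batch of hidden states plus the true sequence lengths.
batch, seq_len, hidden = 3, 8, 4
hidden_states = np.random.rand(batch, seq_len, hidden).astype(np.float32)
seq_lens = np.array([3, 8, 5])
mask = np.arange(seq_len)[None, :] < seq_lens[:, None]  # [batch, seq_len] bool

# "Remove padding": gather only the valid tokens into a dense [total, hidden]
# buffer, so token-wise ops (e.g. the FFN) never touch padded positions.
packed = hidden_states[mask]                             # [sum(seq_lens), hidden]

# ... run token-wise layers on `packed` ...

# "Rebuild padding": scatter the results back into the padded layout whenever
# a batched [batch, seq_len, hidden] tensor is needed again.
restored = np.zeros_like(hidden_states)
restored[mask] = packed
```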
ORT may also support this (as "Effective Transformer"): https://github.com/microsoft/onnxruntime/blob/e7987a6b0ba429c0bec248c4a471e1782da4be6c/onnxruntime/python/tools/transformers/notebooks/PyTorch_Bert-Squad_OnnxRuntime_GPU.ipynb
It does not appear to be supported by TRT: NVIDIA/TensorRT#4234
I propose adding a complete example to the Triton repo of BERT inference using ragged batching and zero-padding mode.
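As a concrete starting point, here is a hedged sketch of what the client side of such an example might look like with `tritonclient` (the model name `bert_ragged` and input name `INPUT_IDS` are assumptions, not an existing model):

```python
import numpy as np
import tritonclient.http as httpclient

# Hypothetical model/input names, for illustration only.
MODEL = "bert_ragged"

client = httpclient.InferenceServerClient(url="localhost:8000")

def infer(token_ids: np.ndarray):
    # Each request carries its own (unpadded) length; with allow_ragged_batch
    # enabled, the server batches the requests without padding them.
    inp = httpclient.InferInput("INPUT_IDS", list(token_ids.shape), "INT32")
    inp.set_data_from_numpy(token_ids)
    return client.infer(MODEL, inputs=[inp])

# Two requests of different lengths, with no client-side padding.
infer(np.array([[101, 2023, 102]], dtype=np.int32))
infer(np.array([[101, 2023, 2003, 1037, 2936, 6251, 102]], dtype=np.int32))
```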