🚀 Feature Description
Does Coqui TTS support converting models to a quantized ONNX format for streaming inference? This feature would improve model performance and reduce inference latency in real-time applications.
Solution
Implement a workflow or tool within Coqui TTS for easy conversion of TTS models to quantized ONNX format.
Alternative Solutions
Currently, external tools such as ONNX Runtime or TensorRT can be used for post-export quantization, but supporting this natively would streamline the process.
Additional context
Any existing documentation or insights on this topic would be appreciated. Thank you!