Hi,

I am in the process of deploying Sparse4D to edge devices and have a few questions about the key considerations for this process.
ONNX Conversion: What are the best practices or specific steps for converting Sparse4D models to ONNX? Are there any known issues or common pitfalls I should watch out for during the conversion?
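For context, the kind of export I have in mind is sketched below. The wrapper, dummy detector, input layout, and opset are placeholders I chose for illustration rather than Sparse4D's actual interface; I assume the real model would also need camera parameters and the temporal instance cache exposed as explicit inputs/outputs.

```python
import torch
import torch.nn as nn


class Sparse4DWrapper(nn.Module):
    """Stand-in deployment wrapper: in practice this would wrap the real Sparse4D
    detector so that forward() takes and returns plain tensors (no dicts or
    framework-specific data containers)."""

    def __init__(self, detector: nn.Module):
        super().__init__()
        self.detector = detector

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        return self.detector(imgs)


# Dummy detector purely so this script runs end to end; it would be replaced by
# the real Sparse4D model loaded from a checkpoint.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 7),
)
wrapper = Sparse4DWrapper(detector).eval()

# Dummy single-camera input; the real model takes multi-view images plus
# calibration and the cached temporal instances.
dummy_imgs = torch.randn(1, 3, 256, 704)

torch.onnx.export(
    wrapper,
    (dummy_imgs,),
    "sparse4d_sketch.onnx",
    input_names=["imgs"],
    output_names=["preds"],
    opset_version=16,
    do_constant_folding=True,
)
```

Is this the right general approach, or is a dedicated deployment-specific head/temporal module required before export?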
Deployment vs. Training - Timestamp Diff: I have noticed that the timestamp diff between consecutive frames differs between training and deployment, e.g. roughly 0.5 s during training but 0.1 s during real-time inference. How should this mismatch be handled? Are there specific configurations or adjustments needed to keep performance consistent between training and real-time inference?
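To make the second question concrete, the sketch below shows where I understand the timestamp diff to enter, using a velocity-based anchor propagation of my own simplification; the anchor layout and function name are illustrative assumptions, not Sparse4D's actual instance bank code.

```python
import torch


def propagate_anchors(prev_anchors: torch.Tensor, dt: float) -> torch.Tensor:
    """Shift cached 3D anchors by velocity * dt before the next frame.

    In training dt is ~0.5 s (2 Hz keyframes); at inference it is ~0.1 s
    (10 Hz stream), so the same instances move 5x less per step.
    Layout assumption: first 3 dims = position, last 3 dims = velocity.
    """
    anchors = prev_anchors.clone()
    anchors[..., :3] = anchors[..., :3] + anchors[..., -3:] * dt
    return anchors


# Example: 900 cached instances with 11-dim anchors, moving at 2 m/s per axis.
prev = torch.zeros(1, 900, 11)
prev[..., -3:] = 2.0
print(propagate_anchors(prev, dt=0.1)[0, 0, :3])  # 0.2 m per step instead of 1.0 m
```

Is passing the real per-frame dt like this sufficient, or does the temporal module also need to be retrained or fine-tuned on the smaller interval?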
Any guidance or recommendations would be greatly appreciated. Thank you!