Dear Filippo,

Thank you for the great work you do! In your paper you report strong performance with quantization, yet I only see a quantized model for iOS. I tried converting the frozen graph produced by the export.py script with post-training quantization, but I haven't had much luck. Would it be possible for you to upload the quantized model as a frozen graph file? If not, could you specify how you converted your model? I followed the tutorial at https://blog.tensorflow.org/2019/06/tensorflow-integer-quantization.html using the NYU dataset, but the resulting TFLite model is much slower than the regular one.
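For reference, this is roughly the conversion I tried, following that tutorial. The tensor names, image paths, and input size below are placeholders I picked for illustration, not values taken from export.py, so they would need to be adjusted to the actual frozen graph:

```python
import glob
import numpy as np
import tensorflow as tf  # tried with TF 1.15 semantics, via tf.compat.v1 on TF 2.x
from PIL import Image

# Placeholders: replace with the real frozen graph path and tensor names.
GRAPH_PB = "pydnet_frozen.pb"
INPUT_NODE = "input"    # placeholder, not the actual input tensor name
OUTPUT_NODE = "output"  # placeholder, not the actual output tensor name
H, W = 192, 192

def representative_dataset():
    # A couple hundred NYU RGB frames, resized and scaled to [0, 1],
    # yielded one at a time as calibration samples.
    for path in glob.glob("nyu_samples/*.png")[:200]:
        img = Image.open(path).convert("RGB").resize((W, H))
        yield [np.asarray(img, dtype=np.float32)[None] / 255.0]

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    GRAPH_PB, [INPUT_NODE], [OUTPUT_NODE], {INPUT_NODE: [1, H, W, 3]})
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict conversion to int8 kernels, as in the integer-quantization tutorial.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

with open("pydnet_quant.tflite", "wb") as f:
    f.write(converter.convert())
```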
Any help appreciated!
I see, my apologies for misunderstanding what you wrote in the paper. If you only used export.py, how were you able to achieve superior performance (in terms of FPS) compared to FastDepth? Currently I'm getting ~7 FPS for FastDepth and ~2 FPS for mobilePydnet (192×192 input); after autotuning with TVM and deploying, the latter jumps to ~4 FPS. Both models were tested on a single core of a Raspberry Pi 4 overclocked to 2 GHz. I see in the paper that you deployed the FastDepth model on the iPhone with the same degree of optimization, yet I would have expected slightly better performance from mobilePydnet. Then again, you mention in #1 that your model runs on the GPU; may I assume mobilePydnet is a much better fit for inference on mobile GPUs? I suspect this is the case, given that FastDepth is designed to run on CPUs. I hope to put together some scripts and share them with you. Closing for now; feel free to share your insights!
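For completeness, this is roughly how I'm measuring FPS for the TFLite variants on the Pi. The model path and input size are placeholders; num_threads=1 pins the interpreter to a single core to match the numbers above:

```python
import time
import numpy as np
import tensorflow as tf

MODEL = "model.tflite"  # placeholder: FastDepth or mobilePydnet .tflite
H, W = 192, 192

# num_threads=1 restricts inference to one core (TF >= 2.1 / tflite_runtime).
interpreter = tf.lite.Interpreter(model_path=MODEL, num_threads=1)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input cast to whatever dtype the model expects (float32 or uint8).
frame = (np.random.rand(1, H, W, 3) * 255).astype(inp["dtype"])

# Warm-up runs, then time repeated invocations and report FPS.
for _ in range(5):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()

runs = 50
start = time.time()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
print("FPS: %.2f" % (runs / (time.time() - start)))
```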