Slow Gencast Inference on GPUs after 1st run #106
Comments
Hello! Apologies, the demo notebook implementation has an oversight here. You might notice that upon re-running the rollout cell (…), the forecast takes roughly as long as the first run. This is because the …
Will send a fix to the repo ASAP, but thought I'd respond here first. Thanks! Andrew
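For readers unfamiliar with why a re-run can recompile, here is a generic JAX sketch. It is not the notebook's actual code and not necessarily the exact oversight referred to above; `forward`, `run_forward`, and the toy inputs are placeholders. The point it illustrates: re-wrapping a function in `jax.jit` inside a cell creates a fresh jitted callable (and pays tracing/compilation again), while reusing one jitted callable lets later calls hit the compiled cache.

```python
# Generic illustration only -- placeholder model, not GenCast's forward pass.
import jax
import jax.numpy as jnp

def forward(params, x):
    # Stand-in for an expensive model step.
    return jnp.tanh(x * params)

params = jnp.float32(0.5)
x = jnp.ones((4,))

# If a notebook cell does `jax.jit(forward)(params, x)` every time it runs,
# each execution builds a fresh jitted callable and recompiles. Creating the
# jitted callable once and reusing it avoids that:
run_forward = jax.jit(forward)

out1 = run_forward(params, x)  # first call: trace + compile + run
out2 = run_forward(params, x)  # later calls: reuse the compiled executable
```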
Many thanks for the help! Just to confirm: does the 8 minutes stated in the paper refer to the time taken to:
1. generate a single forecast step (e.g. the 30th step, 15 days out), or
2. generate the entire sequence of forecast steps?
The latter (on a TPUv5, and without compilation/tracing costs). Note that since we produce our forecasts autoregressively, the time taken to generate the 30th step (i.e. your former option) is the same as the time to produce all the intermediate steps, since they are needed to feed back in as inputs!
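To make the former/latter distinction concrete, here is a minimal, hypothetical rollout loop; `model_step` and `initial_state` are placeholders, not GenCast APIs. Reaching the 30th step requires computing every earlier step anyway, so collecting the intermediate predictions adds essentially no cost.

```python
# Hypothetical sketch of an autoregressive rollout; `model_step` and
# `initial_state` are placeholders, not identifiers from the GenCast codebase.
def rollout(model_step, initial_state, num_steps=30):
    predictions = []
    state = initial_state
    for _ in range(num_steps):
        state = model_step(state)    # each 12-hour step feeds back in as input
        predictions.append(state)    # intermediate steps are produced anyway
    return predictions               # predictions[-1] is the 30th step
```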
Thank you! I haven't yet seen an improvement in run time, to be honest, but I'll re-run some experiments to check whether I made any mistakes and will get back to you!
Many thanks for making the GenCast code and weights public!
I managed to tweak the code in “gencast_demo_cloud_vm.ipynb” and got it running on an 8-GPU (H100) cluster, generating forecasts up to 15 days ahead at 12-hour intervals, with 8 ensemble members.
The first run took around ~35 minutes, which is expected. However, when I ran it a second time, it still took around ~30–35 minutes. I'm not sure whether this is expected behaviour: I thought the fixed compilation cost is only paid on the first run, and subsequent runs would take only about ~8 minutes?
Or does that apply only when using TPUs, or only when generating a single forecast (e.g. 15 days out) rather than the entire sequence?
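One way to check whether compilation dominates the ~35 minutes is to time the first call to the jitted forecast function separately from a second call on inputs of the same shape. This is a sketch under the assumption that a single jitted callable is exposed; `run_forecast` and `inputs` below are placeholders, not identifiers from the repo.

```python
# Rough timing harness to separate compile cost from steady-state cost.
# `run_forecast` and `inputs` are placeholders for whatever the notebook
# actually builds; they are not real identifiers from the GenCast repo.
import time
import jax

def timed_call(fn, *args):
    start = time.perf_counter()
    out = jax.block_until_ready(fn(*args))  # wait for async dispatch to finish
    return out, time.perf_counter() - start

# _, t_first = timed_call(run_forecast, inputs)   # includes trace + compile
# _, t_second = timed_call(run_forecast, inputs)  # same shapes/dtypes: should
#                                                 # reuse the compiled program
# If t_second is close to t_first, the compiled function is not being reused
# (e.g. it is re-created each run, or input shapes/dtypes change between runs).
```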