Arxiv/Blog/Paper Link
https://www.microsoft.com/en-us/research/blog/introducing-aurora-the-first-large-scale-foundation-model-of-the-atmosphere/
Detailed Description
Aurora uses 3D Perceiver-based modules to encode and decode the data, with a 3D Swin Transformer backbone for processing in between. It was trained on ERA5, GFS, CMIP6, and more, and uses LoRA in the fine-tuning stages for longer rollouts.
One of the few models not trained solely on ERA5.
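The LoRA fine-tuning mentioned above can be sketched roughly as follows. This is a minimal NumPy illustration of the general LoRA technique, not Aurora's actual implementation; all names and dimensions here are made up:

```python
import numpy as np

# Sketch of LoRA (Low-Rank Adaptation): instead of updating a full
# weight matrix W (d_out x d_in), train two small matrices
# A (r x d_in) and B (d_out x r) with rank r << d_in, d_out.

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
alpha = 8.0                                 # scaling hyperparameter

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A. Because B starts at
    # zero, the adapted model is initially identical to the frozen one.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical at initialization

# Trainable parameter count: r*(d_in + d_out) vs d_in*d_out for full
# fine-tuning -- here 512 vs 4096.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

Only A and B are updated during fine-tuning, which is what makes LoRA attractive for adapting a large pretrained model to a new objective such as longer rollouts.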