I am fine-tuning the vit_large_patch16_224.augreg_in21k pretrained model from the timm library on a custom dataset, and I have run into some difficulties.
Could you explain why the training results might differ between the timm-pretrained model and a self-defined one?
For reference, I have attached the code I used below.
I would greatly appreciate any help in resolving this issue.
VIT.zip
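
For context, here is a minimal sketch of the usual timm setup for fine-tuning this checkpoint. This is not taken from the attached archive; `NUM_CLASSES` is a hypothetical placeholder for the custom dataset's class count:

```python
import timm

# Assumption: replace with the number of classes in your dataset.
NUM_CLASSES = 10

# With pretrained=True, timm downloads the augreg ImageNet-21k weights.
# Passing a num_classes that differs from the checkpoint replaces the
# classification head with a freshly initialized one of the right size.
model = timm.create_model(
    "vit_large_patch16_224.augreg_in21k",
    pretrained=True,
    num_classes=NUM_CLASSES,
)

# Build input transforms from the model's own data config so that image
# size and normalization match what the pretrained weights expect; a
# mismatch here is a common cause of a gap between a timm-pretrained
# model and a self-defined one.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=True)
```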