Hi there,
Congrats on the nice work, and thanks for providing the code.
I have a question about the experiments you conducted on downstream tasks (detection and segmentation).
For the detection/segmentation results reported in Table 3, did you perform SSL on ImageNet-1K and then use the resulting models as backbones, training only on COCO afterwards? That is, no SSL on the COCO data itself, right?
If so, could that be a reason why the MoBY model does not outperform the supervised model?
What I'm trying to understand is whether a model that is SSL-pretrained on a large unannotated dataset, and then fine-tuned on downstream tasks using a labeled portion of that same data, can be expected to perform significantly better than a model trained purely in a supervised fashion on the annotated portion. Any insight is appreciated.
Best,