[Reproduce] Cannot Reproduce the results. #32
Comments
Could you provide further information about your results? For example, which checkpoint did you use, and did you fine-tune the provided weights?
Hi, Chang
I downloaded the checkpoint from MAE, then pre-trained and fine-tuned on FSC with your code.
The environment you provided could not be installed exactly as specified, so I installed the closest package versions.
I tested with 'checkpoint__finetuning_minMAE.pth'.
Cheers,
Yuhao
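For reference, a minimal sketch of how a checkpoint like this might be loaded for evaluation in PyTorch (the `build_model` constructor is a hypothetical stand-in for the repo's model factory, not its actual API):

```python
import torch

# Load the fine-tuned checkpoint on CPU and inspect its contents first.
ckpt = torch.load("checkpoint__finetuning_minMAE.pth", map_location="cpu")
# MAE-style training scripts typically nest the weights under a "model" key.
state_dict = ckpt["model"] if "model" in ckpt else ckpt

model = build_model()  # hypothetical: replace with the repo's actual model class
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

model.eval()  # switch to evaluation mode before running the test split
```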
I am not sure what the problem could be now. Maybe you can check whether the fine-tuned weights I provided produce the correct results; if they are close, the environment should be OK. After that, you can also visualize the model after MAE pre-training and check whether it has learned to reconstruct the masked images.
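To make that sanity check concrete, here is a rough sketch, assuming an MAE-style model whose forward pass returns `(loss, pred, mask)` and that provides an `unpatchify` helper, as in the original MAE reference code; `model` is a placeholder for the pre-trained network:

```python
import torch
import matplotlib.pyplot as plt

img = torch.rand(3, 224, 224)  # stand-in for a normalised input image

model.eval()  # `model` is the MAE-pretrained network (placeholder here)
with torch.no_grad():
    # MAE-style forward: returns the reconstruction loss, predicted patches, and mask.
    loss, pred, mask = model(img.unsqueeze(0), mask_ratio=0.5)
    recon = model.unpatchify(pred)  # back to image shape (1, 3, H, W)

fig, axes = plt.subplots(1, 2)
axes[0].imshow(img.permute(1, 2, 0).numpy())
axes[0].set_title("input")
axes[1].imshow(recon[0].permute(1, 2, 0).clamp(0, 1).cpu().numpy())
axes[1].set_title("reconstruction")
plt.show()
```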
Hi,
I use wandb to record training; the visualisations look okay, both the reconstruction and the density map.
I don't think package versions could lead to such a large gap.
How many times did you repeat your experiment? According to the issues on GitHub, the run-to-run variance seems to be large.
Also, I will try the checkpoint you provided.
Cheers
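For anyone following along, logging those visualisations to wandb can be as simple as the sketch below (the project name and the image arrays are placeholders, not values from this repo):

```python
import numpy as np
import wandb

wandb.init(project="fsc147-repro")  # hypothetical project name

recon_img = np.random.rand(224, 224, 3)  # stand-in for a reconstructed image
density_map = np.random.rand(224, 224)   # stand-in for a predicted density map

# Log both images so they can be inspected in the wandb dashboard.
wandb.log({
    "reconstruction": wandb.Image(recon_img),
    "density_map": wandb.Image(density_map),
})
```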
I think an MAE difference within ±1 should be OK, but your result differs by too much. Besides, I have another suggestion: you can remove this line to unfreeze the encoder, which will lead to an improvement in performance. However, I don't think this alone would cause such a large performance gap.
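In PyTorch terms, unfreezing amounts to something like the sketch below (the parameter-name prefixes are assumptions about the model layout rather than the repo's exact code, and `model` is a placeholder):

```python
import torch

# Re-enable gradients for the (previously frozen) encoder parameters.
for name, param in model.named_parameters():
    if name.startswith(("patch_embed", "blocks")):  # assumed encoder prefixes
        param.requires_grad = True

# Rebuild the optimizer so the newly trainable parameters are actually updated.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-5,
    weight_decay=0.05,
)
```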
Hello, I am trying to reproduce the results too, but I cannot achieve the same results with the provided settings and pre-trained model. I get MAE: 14.98, RMSE: 106.51 on the FSC dataset.
Hello, I am not quite sure what the problem could be. Maybe you can try removing this line to unfreeze the image encoder? You can also use a smaller learning rate and check the results of some earlier checkpoints.
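Checking earlier checkpoints could look roughly like this (the file pattern and the `evaluate` helper are hypothetical and depend on how the training script saves its checkpoints; `model` and `val_loader` are placeholders):

```python
import glob
import torch

# Sweep over saved checkpoints and report validation metrics for each one.
for path in sorted(glob.glob("output_dir/checkpoint__finetuning_*.pth")):
    ckpt = torch.load(path, map_location="cpu")
    state_dict = ckpt["model"] if "model" in ckpt else ckpt
    model.load_state_dict(state_dict, strict=False)  # `model` as built elsewhere
    mae, rmse = evaluate(model, val_loader)          # hypothetical eval helper
    print(f"{path}: MAE={mae:.2f}, RMSE={rmse:.2f}")
```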
Hello. Thank you for the great work.
However, following the given instructions with the pretrained weights, I could not reproduce the reported results on FSC-147.
I followed your instructions and my test results are "Current MAE: 23.07, RMSE: 104.73".
My results are far from those reported in the paper. Are there any hyper-parameters that differ from yours?