
I don't get how we can run predictions? #9

Open
GXcells opened this issue Aug 29, 2024 · 7 comments

Comments

@GXcells

GXcells commented Aug 29, 2024

I don't get how to run segmentation with your scripts.
How can the model know what to segment if we don't provide any examples of image/segmentation pairs?
Do I need to first fine-tune/train on my dataset for each segmentation task (for example, a specific cell type on histology images, or a specific immunohistochemistry staining) and then run "test"?

@xiongxyowo
Collaborator

Hi, you need to train on your own datasets first because SAM2-UNet does not have zero-shot capability (the original prompt encoder and decoder of SAM2 are removed).
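
For readers wondering what such a training dataset needs to look like, it is simply paired images and masks. Below is a minimal sketch of a paired image/mask dataset in PyTorch; the directory layout, the 352x352 resize, and the class name `ImageMaskDataset` are illustrative assumptions, not the repository's actual data loader.

```python
# Minimal sketch of a paired image/mask dataset for fine-tuning.
# Directory layout, resize size, and class name are assumptions,
# not SAM2-UNet's actual loader.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class ImageMaskDataset(Dataset):
    def __init__(self, image_dir, mask_dir, size=352):
        self.image_paths = sorted(os.path.join(image_dir, f) for f in os.listdir(image_dir))
        self.mask_paths = sorted(os.path.join(mask_dir, f) for f in os.listdir(mask_dir))
        self.img_tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])
        # Nearest-neighbour resize keeps binary masks binary.
        self.mask_tf = transforms.Compose([
            transforms.Resize((size, size), interpolation=transforms.InterpolationMode.NEAREST),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = Image.open(self.image_paths[idx]).convert("RGB")
        mask = Image.open(self.mask_paths[idx]).convert("L")
        return self.img_tf(image), self.mask_tf(mask)
```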

@GXcells
Author

GXcells commented Aug 29, 2024

Ok, I just trained a model on a dataset, but now if I want to run predictions, the only script available is test.py, and it has a required ground-truth argument. How can I run predictions on images that are not yet segmented? Thanks in advance.

@xiongxyowo
Collaborator

Hi, our test dataset loads ground truths to make it easier to align the predictions with the shapes of GTs (see here). You can make simple modifications to the code to make the testing process independent of ground truths.
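
For reference, a ground-truth-free inference loop might look like the sketch below. It is an assumption-laden adaptation, not the repository's test.py: the `SAM2UNet` import and constructor, the checkpoint path, the 352x352 input size, the ImageNet normalization, and the multi-output handling may all need to be adjusted. The key change is that predictions are up-sampled to the original image size instead of to a ground-truth mask.

```python
# Sketch of ground-truth-free inference (assumptions noted in comments).
import os
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

from SAM2UNet import SAM2UNet  # assumes the repo's SAM2UNet.py is importable

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SAM2UNet()  # constructor arguments may differ; see the repo's test.py
model.load_state_dict(torch.load("checkpoint.pth", map_location=device))
model.to(device).eval()

# Match the preprocessing used during training (ImageNet stats assumed here).
tf = transforms.Compose([
    transforms.Resize((352, 352)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image_dir, out_dir = "my_images", "predictions"
os.makedirs(out_dir, exist_ok=True)

with torch.no_grad():
    for name in sorted(os.listdir(image_dir)):
        image = Image.open(os.path.join(image_dir, name)).convert("RGB")
        x = tf(image).unsqueeze(0).to(device)
        pred = model(x)
        if isinstance(pred, (list, tuple)):  # some variants return several maps
            pred = pred[0]
        # Up-sample to the ORIGINAL image size instead of a ground-truth mask's size.
        pred = F.interpolate(pred, size=image.size[::-1], mode="bilinear", align_corners=False)
        mask = (torch.sigmoid(pred)[0, 0].cpu().numpy() * 255).astype(np.uint8)
        Image.fromarray(mask).save(os.path.join(out_dir, os.path.splitext(name)[0] + ".png"))
```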

@GXcells
Author

GXcells commented Aug 29, 2024

Ok, thanks a lot.

I modified it and it is working without ground truth.

But I am convinced it would be important for you to provide inference code that is independent of ground truth and to rerun your benchmarks with it, because in a real use case we generally never have ground truth at inference time (if we already had ground truth for the images we want to segment, why segment them at all?).

@xiongxyowo
Collaborator

Hi, thank you for the suggestion. We follow the common practice of up-sampling the prediction results to the GTs' size (see test codes in PraNet and FEDER). Since metrics computed at low resolutions can differ from those at the original resolutions, we perform up-sampling to ensure a fair comparison with existing methods. For users who wish to eliminate this logical flaw, we recommend up-sampling the predictions to the input image resolution instead.
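
To make the distinction concrete, the only difference between the two conventions is the target size of the final up-sampling step. A small self-contained illustration (the tensor shapes and sizes below are purely made up):

```python
import torch
import torch.nn.functional as F

def upsample_logits(pred, size):
    """Up-sample raw logits of shape (N, 1, h, w) to a target (H, W)."""
    return F.interpolate(pred, size=size, mode="bilinear", align_corners=False)

# Toy example with made-up sizes.
pred = torch.randn(1, 1, 88, 88)   # stand-in for a low-resolution model output
gt_size = (512, 384)               # benchmark convention: ground-truth mask size
image_size = (500, 375)            # GT-free use: original input image size

pred_for_metrics = upsample_logits(pred, gt_size)       # PraNet/FEDER-style evaluation
pred_for_inference = upsample_logits(pred, image_size)  # no ground truth needed
```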

@xiongxyowo
Collaborator

Note: the resolution of some test images in public datasets may differ from that of the corresponding GTs, so this modification may cause some anomalous performance.

@GXcells
Author

GXcells commented Aug 29, 2024

Thanks for the explanations.
I'm a wet-lab scientist and don't have a deep understanding of how UNet and other machine learning models work.
That is why I was asking a question more related to a direct application of your training code to the data we have in the lab.
