
High-level question: is SAM-HQ suited for fine-tuning on specific tasks? #140

Open
SimonCoste opened this issue Aug 6, 2024 · 2 comments

Comments

@SimonCoste

Hi, thanks @lkeab and ETH's vision team for open-sourcing this project!

From my understanding, your method (the HQ token and its integration) is lightweight and was trained at low computational cost on the HQSeg-44K dataset, which is very diverse, its only common feature being the high quality of the labels. As a result, SAM-HQ is a strictly better version of SAM, with the same or better generalization and better segmentation quality.

Your method could also be used to fine-tune SAM on very specific tasks. But is it well suited to that context, where we care more about task-specific performance than about the generalization or zero-shot performance of the model? Do you have any insights on how to use your method for task-specific fine-tuning?

I would appreciate any comment on this topic!
[@lkeab I'm posting on GitHub because people might be interested in your views on this!]
Thanks,
Simon

@lkeab
Collaborator

lkeab commented Aug 7, 2024

Hi Simon, for task-specific fine-tuning of SAM you can allow more model parameters to be trainable on top of HQ-SAM's current training setup, for example by adding some LoRA layers to the mask decoder or image encoder, or simply by unfreezing the whole mask decoder or more blocks in the image encoder. Also, I would suggest setting hq_token_only = True during inference after successfully fine-tuning the model.
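[For readers landing here: the "unfreeze the whole mask decoder" option above can be sketched as follows. This is a generic PyTorch freeze/unfreeze pattern, not HQ-SAM's actual training code; `TinySAM` and `prepare_for_task_finetuning` are hypothetical stand-ins, and the real SAM-HQ image encoder and mask decoder have different architectures.]

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a SAM-style model: an image encoder plus a
# mask decoder. Only the freeze/unfreeze pattern below is the point.
class TinySAM(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16)
        )
        self.mask_decoder = nn.Sequential(
            nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1)
        )

def prepare_for_task_finetuning(model: nn.Module) -> list:
    """Freeze all parameters, then unfreeze only the mask decoder,
    and return the trainable parameters for the optimizer."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.mask_decoder.parameters():
        p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

model = TinySAM()
trainable = prepare_for_task_finetuning(model)
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

The same pattern extends to unfreezing selected encoder blocks: loop over `model.image_encoder` and flip `requires_grad` on just the blocks you want to train.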

@SimonCoste
Author

Thanks for your answer!

I also have another question: in my task, each image comes with several masks; think of items like people in a picture. Each picture can contain between 1 and 10 people. Currently, for each image (image.jpg) I have a single binary mask (image.png) gathering all the different masks; would you recommend splitting this mask into many separate masks?
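[If splitting turns out to be the way to go: when the instances do not touch or overlap in the combined mask, a connected-components pass is enough to recover one binary mask per object. A minimal sketch using `scipy.ndimage.label`; the `split_instances` helper is illustrative, not part of SAM-HQ, and touching instances would need real instance annotations instead.]

```python
import numpy as np
from scipy import ndimage

def split_instances(binary_mask: np.ndarray) -> list:
    """Split one combined binary mask into a list of per-instance
    binary masks, one per connected component."""
    labeled, num = ndimage.label(binary_mask)
    return [(labeled == i).astype(np.uint8) for i in range(1, num + 1)]

# Toy example: two separate blobs in one 5x5 mask.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[0:2, 0:2] = 1   # first "person"
mask[3:5, 3:5] = 1   # second "person"
instances = split_instances(mask)
```

Each returned mask has the same shape as the input and marks exactly one component, so the per-instance masks sum back to the original.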
