
Can the What-If Tool handle multimodal structured data with continuous, categorical, text, and image columns? #197

Open
wjlgatech opened this issue Feb 1, 2022 · 2 comments

Comments

@wjlgatech

Can the What-If Tool handle multimodal structured data with continuous, categorical, text, and image columns?

Could you point me to some examples of how to set up WIT for such a multimodal TF model?

Thanks!

@jameswex
Collaborator

jameswex commented Feb 1, 2022

WIT should be able to handle all of those input types in a single datapoint just fine. Your code might look like some combination of:

- https://colab.sandbox.google.com/github/PAIR-code/what-if-tool/blob/master/WIT_Smile_Detector.ipynb (images)
- https://colab.sandbox.google.com/github/pair-code/what-if-tool/blob/master/WIT_Toxicity_Text_Model_Comparison.ipynb (text)
- https://colab.sandbox.google.com/github/pair-code/what-if-tool/blob/master/WIT_COMPAS_with_SHAP.ipynb (continuous and categorical)
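
For example, a minimal sketch of wiring all four feature types into WIT from a notebook might look like the following. It assumes a hypothetical `predict_fn` wrapper around your multimodal TF model (replace the placeholder with real inference) and uses the `image/encoded` feature name that WIT recognizes for encoded image bytes:

```python
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, country, review_text, image_bytes):
  # Continuous values go in a float_list; categorical strings, text, and
  # encoded image bytes all go in bytes_list features of a tf.train.Example.
  feature = {
      'age': tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
      'country': tf.train.Feature(
          bytes_list=tf.train.BytesList(value=[country.encode('utf-8')])),
      'review_text': tf.train.Feature(
          bytes_list=tf.train.BytesList(value=[review_text.encode('utf-8')])),
      # WIT renders features named 'image/encoded' as images.
      'image/encoded': tf.train.Feature(
          bytes_list=tf.train.BytesList(value=[image_bytes])),
  }
  return tf.train.Example(features=tf.train.Features(feature=feature))

def predict_fn(examples_to_infer):
  # Hypothetical placeholder: replace with inference against your own
  # multimodal TF model. Return one list of class scores per input example.
  return [[0.5, 0.5] for _ in examples_to_infer]

# A dummy encoded image so the sketch runs end to end.
dummy_jpeg = tf.io.encode_jpeg(tf.zeros([8, 8, 3], dtype=tf.uint8)).numpy()
examples = [make_example(34.0, 'US', 'great product', dummy_jpeg)]

config_builder = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config_builder, height=800)
```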

Another option would be to look into LIT (https://pair-code.github.io/lit/), which is more full-featured, has better multimodal support, and is being actively developed.

@wjlgatech
Author

@jameswex Amazing! Thanks for the specific, point-by-point feedback. I will check out those resources and report back to the WIT community on how it goes.
