ShowUI Slow speed on M1 16GB MBP? #46
Comments
I am running ShowUI on my M1 MBP with 16 GB of RAM, and I have noticed that it is very slow at performing actions when using ShowUI + GPT-4o. I am wondering whether this is just my machine, or if the M1 is simply not powerful enough to use this appropriately.
Basically, it is too slow to actually use. I have installed the MPS build of PyTorch, so that shouldn't be the problem.
Any way I can benchmark it locally?

Hi @aristideubertas,

Hi @yyyang-2019, thanks for your answer. Any idea whether lowering the resolution would cut down the latency significantly? For many users it might be useful to have the inference run on a remote machine; is this something you are thinking of supporting in the future? E.g. a remote ShowUI API endpoint.
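On the local-benchmarking question: one simple approach, which assumes nothing about ShowUI's internals, is to wrap whatever inference call you want to measure in a small timing harness. The `run_inference` stub below is hypothetical; you would replace it with the actual ShowUI call. On Apple Silicon you can also confirm `torch.backends.mps.is_available()` before blaming the hardware.

```python
import statistics
import time

def benchmark(fn, warmup=1, runs=5):
    """Time a callable; return (mean_s, stdev_s) over `runs` timed calls."""
    for _ in range(warmup):  # warm-up runs exclude one-time setup cost (model load, cache fill)
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical stand-in: replace with the real ShowUI inference call you want to measure.
def run_inference():
    sum(i * i for i in range(100_000))  # placeholder workload

mean_s, stdev_s = benchmark(run_inference)
print(f"mean {mean_s * 1000:.1f} ms +/- {stdev_s * 1000:.1f} ms per call")
```

Comparing the mean per-call time at different input resolutions would show directly how much lowering the resolution helps on your machine.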