-
A request I forgot to put in the initial post: for use cases like mine, where I brought a very full-featured ComfyUI into StableSwarm, instead of forcing me to replicate the whole set of model loading paths when the comfy workflow is sent into the Generate tab, maybe in that use case we could just ... trust the model paths that comfy gave? I noticed it currently swaps the model out for the first one in the list (StableSwarm's known-models list), which leads to discrepancies.
-
Took much less work than I expected to spin this thing up. My approach was to run the Docker container and point it at the full-featured ComfyUI instance I already had running locally, registered as a backend.
A couple of notes on little issues I had to address with this somewhat custom setup on my way to generating the first image:
- First, API access fails because the docker launch has no network access, so even to reach the local ComfyUI backend on localhost I had to add `--network=host`.
- I'd suggest adding the flags `--rm` and `--name stableswarmui` to the recommended `docker run` for consistent container naming (a rough example of the full command I ended up with is below).
- I found that the path of the model specified in StableSwarm must match up with the path seen by Comfy. This alignment is surely taken care of for you with a ComfyUI Self-Starting backend setup. I just wanted to bring up that the log output when the paths don't match could be a lot better; in particular I would make an effort to parrot the error ComfyUI itself emits (dear lord, I have 614 models, ouch). I was able to get past this error by ensuring via bind mounts that the model paths seen by StableSwarm match up perfectly with what ComfyUI sees.
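For reference, roughly the `docker run` I ended up with; the image name and host paths are placeholders for my local setup, not official ones:

```bash
# Placeholder image name and paths -- substitute your own.
docker run -it --rm --name stableswarmui \
  --network=host \
  -v /my/models:/my/models \
  my-stableswarmui-image
```

Mounting the models directory at the same path inside and outside the container is what keeps the model paths StableSwarm reports identical to what ComfyUI sees.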
I'm not going to go too far into questions since I have a lot more exploration ahead of me. But one of the things I hope to answer first is whether I can set up an effective multi-GPU solution for my ComfyUI workflow experiments. I'm talking about some really complex workflows; they are the ones that can benefit the most from a GPU swarm. Generating a batch of 16 or 25 images with ADv3 or SVD already takes about a minute each, and then there's upscaling and other touches on top of that.
I was reading this page and the last part about the "colors" is intriguing. Color is a built-in Comfy feature for visually styling a node, so this appears to be a silly/fun piggyback on the output node that lets Swarm control dispatch of workflows across GPUs without requiring a functional extension to ComfyUI, which is clever. I'll need more time to play with it, but I'll go ahead and request an overview of how this is supposed to work.

According to this document, if I have 5 GPUs and define 5 separate workflow branches, each including a KSampler and leading to an output node with a different color, then within the Comfy workflow interface I could specify 5 specific custom sets of inputs for those KSamplers (and all the rest of the nodes, really) and, by launching once, get 5 parallel generations across 5 GPUs (or, I hope, 3 GPUs round-robining those jobs if I only have 3). That would be FREAKING AWESOME.

I'd like to know how that is implemented, because from where I'm standing you would need to walk the workflow node graph and re-generate (with e.g. a DFS) a separate workflow to dispatch to each ComfyUI backend for actual execution (a rough sketch of what I mean is below). If this is true, it should basically be a full realization of the holy-grail multi-GPU implementation for ComfyUI. The only way to push it further would be to let you declare certain workflow edges as cut points where execution could continue on a different backend, with the intermediate data transmitted over the network. That would be really cool, but hard to deal with, since network performance can vary wildly.
...If that's not how it works, then I assume it simply launches the full workflow on each backend. I mean, I'd be quite happy just getting that.
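To make concrete what I mean by splitting (purely my own guess at an approach, not necessarily what Swarm actually does): walk backward from each output node and collect its ancestors into a standalone sub-workflow that could be posted to a different backend. The sketch below assumes the ComfyUI API prompt format, where each node is `{"class_type": ..., "inputs": ...}` and a link is encoded as `[source_node_id, output_index]`; grouping outputs by their UI color is hand-waved, since the color lives in the UI workflow data rather than in this format.

```python
def collect_ancestors(prompt: dict, node_id: str) -> set:
    """Reverse DFS: the node itself plus everything upstream it depends on."""
    seen = set()
    stack = [node_id]
    while stack:
        nid = stack.pop()
        if nid in seen:
            continue
        seen.add(nid)
        for value in prompt[nid].get("inputs", {}).values():
            # In the API prompt format, a link is [source_node_id, output_index].
            if isinstance(value, list) and len(value) == 2 and isinstance(value[0], str):
                stack.append(value[0])
    return seen


def split_by_output(prompt: dict, output_class: str = "SaveImage") -> list:
    """One self-contained sub-workflow per output node, ready to send to a backend."""
    sub_workflows = []
    for nid, node in prompt.items():
        if node.get("class_type") == output_class:
            keep = collect_ancestors(prompt, nid)
            sub_workflows.append({k: v for k, v in prompt.items() if k in keep})
    return sub_workflows
```

Each resulting sub-workflow only contains the nodes its own output actually needs, so shared upstream nodes (e.g. a checkpoint loader) get duplicated per backend rather than computed once, which is presumably the price of keeping the backends fully independent.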
Thanks to everyone for their hard work on this software, and to SAI for embracing open source.