Hmm, that's true. Do you have any workarounds in mind that could work in the meantime to handle this? From my understanding, implementing CPU offloading at the HLO graph level...
Thanks for the detailed explanation, @JackCaoG. Given the limitations with partitioning graphs and the potential performance hit, do you think it would be feasible for your team to implement something...
Do you not have the resources to work with? I can certainly provide you with some of the TPU pods that I am not using at the moment. @JackCaoG
How long would it take for someone on your team to implement this? I really need this to work in order to move forward, and I am currently blocked because of this issue....
How do I do slide captioning on multiple GPUs?
@YoungjaeDev Thank you! Do you have slide-captioning batch inference working for the ShareCaptioner-Video model? I'm looking at the code right now, trying to set up inference on a...
I'm experimenting to see if I can add support for TPU/XLA devices within the comfy code myself. If possible, I can try to open a PR to add support for...
Hi @gabriel-montrose, I have just created a PR: https://github.com/comfyanonymous/ComfyUI/pull/5657
If anyone needs TPU/XLA devices support for ComfyUI please refer to my own fork [ComfyUI-TPU](https://github.com/radna0/ComfyUI-TPU). I shall maintain it as long as there's interest.
Hey @krisheetu, yes! I would be happy to!