Ximin Luo

Results: 364 comments by Ximin Luo

Also, both InvokeAI and Fooocus are using PyTorch/ROCm, so what I am asking for is clearly possible. Someone more familiar with the code could probably have a look at how...

I'm running `python3 entry_with_update.py`. The problem occurs with any of the flags: `--always-offload-from-vram`, `--always-high-vram`, as well as `--always-low-vram`. Example usage: `12940M / 16165M VRAM 80.05%`, which goes back...

`--always-offload-from-vram` **doesn't work**.

Are you saying you don't believe bug reports until at least one other person has corroborated them? I don't see every issue being duplicated in "Discussions" in this way, but alright...

I have asked the community here: https://github.com/lllyasviel/Fooocus/discussions/3258

The current code intentionally does not free memory on ROCm, with the comment "seems to make things worse on ROCm". [ldm_patched/modules/model_management.py#L769](https://github.com/lllyasviel/Fooocus/blob/5a71495822a11bbabf7c889eed6d9b38b261bb96/ldm_patched/modules/model_management.py#L769) - [blame](https://github.com/lllyasviel/Fooocus/blame/5a71495822a11bbabf7c889eed6d9b38b261bb96/ldm_patched/modules/model_management.py#L769), [original commit](https://github.com/lllyasviel/Fooocus/commit/e8d88d3e250e541c6daf99d6ef734e8dc3cfdc7f#diff-10dc192c63e06e71b0e1ce5dde139f3dd7ced4f22ec1ad6b33a736b57c90b483R737) by @lllyasviel. I don't see...
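For illustration, a minimal sketch of the kind of guard that comment describes, not the verbatim Fooocus code; `is_nvidia()` here is my own rough stand-in for the backend check in that file:

```python
import torch

def is_nvidia():
    # Rough stand-in: ROCm builds of PyTorch report a HIP version,
    # while CUDA builds leave torch.version.hip as None.
    return torch.cuda.is_available() and torch.version.hip is None

def soft_empty_cache(force=False):
    # The allocator cache is only flushed on NVIDIA (or when forced), so on
    # ROCm cached allocations are left alone; this is the behaviour the linked
    # comment justifies with "seems to make things worse on ROCm".
    if torch.cuda.is_available() and (force or is_nvidia()):
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
```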

With #3262, the current code will free memory between every image generation on ROCm - which is what's **already happening** on CUDA. A more ideal behaviour would be to have...
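Roughly speaking, the change amounts to flushing the allocator cache after each image regardless of backend; a hedged sketch only, where `generate` is a placeholder for a single image-generation call, not a Fooocus API:

```python
import torch

def generate_batch(generate, prompts):
    images = []
    for prompt in prompts:
        images.append(generate(prompt))
        # Flush cached allocations after every image on both CUDA and ROCm
        # (ROCm builds of PyTorch reuse the torch.cuda API).
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    return images
```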

@mashb1t Can we merge this now? We have 1 other person corroborating that my PR makes things better, not worse.

Hi, I am not a distributed-haskell / Cloud Haskell user, but I found this package when looking around for anything that could effectively perform automatic defunctionalisation, since I want to...

Need @lllyasviel to comment as I have no idea what the original comment is referring to regarding "worse". Works fine over here.