Bryan Constantine 汪万丁

Results: 9 comments by Bryan Constantine 汪万丁

Hi Diffusers team, I’d like to work on this feature as part of the Diffusers MVP program. The idea is to add a new flag/config to `enable_group_offload`, e.g. `pin_first_last`. When...
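The comment is truncated before it explains the flag, so the following is only a hedged sketch of what a `pin_first_last` option could decide, not the actual Diffusers API: keep the first and last offload groups resident on the accelerator so the start and end of every forward pass avoid a host-to-device copy. The function name `groups_to_pin` and the index-set representation are assumptions for illustration.

```python
# Hypothetical sketch (not the Diffusers implementation): given the number
# of offload groups and a pin_first_last flag, decide which group indices
# stay on the accelerator instead of being offloaded to CPU.

def groups_to_pin(num_groups: int, pin_first_last: bool) -> set[int]:
    """Return the indices of groups that remain pinned on the accelerator."""
    if not pin_first_last or num_groups == 0:
        return set()
    # With a single group, "first" and "last" coincide, so one index is pinned.
    return {0, num_groups - 1}

# Example: a model whose blocks were split into 6 offload groups.
print(sorted(groups_to_pin(6, pin_first_last=True)))   # -> [0, 5]
print(sorted(groups_to_pin(6, pin_first_last=False)))  # -> []
```

Pinning only the boundary groups trades a small amount of extra accelerator memory for removing the transfer latency that would otherwise sit on the critical path of every denoising step.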

Hey @sayakpaul, @Aki-07 and I have opened a fix for this in PR #12747. Summary: 1. We add an optional `pin_groups` argument to `enable_group_offloading` at both the model and pipeline level, which expects...

Thank you for the initial comment! We are working on a solution right now.

@sayakpaul On my local device there are still a few failing tests in `tests/models/autoencoders/test_models_autoencoder_kl.py`, concerning a safetensors I/O serialization error and small decimal differences in the output of test_output_pretrained. However I...

These were the error logs:

```
_____________________________ AutoencoderKLTests.test_layerwise_casting_memory _____________________________

self =

    @require_torch_accelerator
    @torch.no_grad()
    def test_layerwise_casting_memory(self):
        MB_TOLERANCE = 0.2
        LEAST_COMPUTE_CAPABILITY = 8.0

        def reset_memory_stats():
            gc.collect()
            backend_synchronize(torch_device)
            backend_empty_cache(torch_device)
            backend_reset_peak_memory_stats(torch_device)

        def get_memory_usage(storage_dtype, compute_dtype):
...
```

@sayakpaul Also, with the current checks it looks like there is a coding style error. Can you help us run the automatic style correction?

@sayakpaul thank you for testing! Glad to hear there are no failures on your end.

Hi @DN6 @sayakpaul We’ve updated the fix according to the review. Could you take a quick look and share any feedback when you have a moment? Thank you in advance!

Hey @sayakpaul, I noticed the diff is confusing because the branch history got complicated after the earlier force-pushes and an imperfect rebase (around #12692) mentioned above, and some commits don’t line...