Pedro Cuenca

331 comments by Pedro Cuenca

Hi @Skylion007! Thanks, I've read the changes and they seem reasonable! However, when we made the change to `ruff` last week, we made it compatible with the `ruff` configuration in...

Looks like transformers applied this though: https://github.com/huggingface/transformers/pull/21694/files#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711R8

@Skylion007 would you mind syncing with `main` and reformatting?

I've been checking the history a bit and it looks like `steps_offset` was formally introduced in the configuration in https://github.com/huggingface/diffusers/pull/479, motivated by https://github.com/huggingface/diffusers/issues/465. In particular, the different scheduling configurations (at...
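
For context, `steps_offset` is a regular entry in the scheduler configuration, so it can be inspected or overridden directly. A minimal sketch (the checkpoint id below is only an illustration):

```python
from diffusers import DDIMScheduler

# Load the serialized scheduler config from a Stable Diffusion checkpoint
# (the repo id here is just an example).
scheduler = DDIMScheduler.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
)

# `steps_offset` is registered in the config alongside the other init args.
print(scheduler.config.steps_offset)

# Since diffusers PR #479 it is also an explicit __init__ argument,
# so it can be overridden when instantiating the scheduler directly.
scheduler = DDIMScheduler(steps_offset=1)
```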

@Beinsezii Thanks a lot for the tests! Unfortunately I don't have an answer; maybe a bug was introduced after those PRs, or maybe the problem was always there from the...

Another idea would be to go back to checking against the reference CompVis codebase, comparing outputs. Our guiding light for integration was to generate 1-to-1 identical results (on CPU using...
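
As a sketch of what that comparison could look like, assuming hypothetical callables wrapping the CompVis reference and the diffusers pipeline (both returning float image arrays), something along these lines:

```python
from typing import Callable

import numpy as np
import torch


def outputs_match(
    run_reference: Callable[..., np.ndarray],
    run_candidate: Callable[..., np.ndarray],
    prompt: str,
    seed: int = 0,
    atol: float = 1e-3,
) -> bool:
    """Compare a reference (e.g. CompVis) run against a diffusers run.

    Both callables are placeholders for whatever entry points the two
    codebases expose; they are assumed to return images in [0, 1].
    """
    generator = torch.Generator(device="cpu").manual_seed(seed)
    reference = run_reference(prompt, seed=seed)
    candidate = run_candidate(prompt, generator=generator)
    # On CPU with a fixed seed, the original integration target was
    # 1-to-1 identical results, so only a tight tolerance is allowed.
    return np.allclose(reference, candidate, atol=atol)
```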

Hi @clinty! Thanks for the contribution. However, I'd like to understand what purpose it serves or how you expect this to be used. Unlike in the `transformers` codebase, `main_input_name` is...
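
For reference, in `transformers` the attribute is a class-level name for the model's primary input tensor, which internal utilities rely on. A quick illustration:

```python
from transformers import BertModel, CLIPVisionModel

# `main_input_name` is a class attribute naming the primary input a model
# expects; utilities such as generation use it to route that input.
print(BertModel.main_input_name)        # "input_ids"
print(CLIPVisionModel.main_input_name)  # "pixel_values"
```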

Ah, it's set to a default value. In that case I'd support changing it to avoid confusion. I triggered a CI run, copying @sayakpaul as this affects the PyTorch implementation.

This could be for a number of reasons, but unfortunately I don't currently have access to TPU v5e instances to test. I'll see if we can get one to verify.