T. Xu

29 comments by T. Xu

Crap, you are correct about the overlapping windows. Random Forest no longer overfits, but the accuracy drops to 72%.
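For context, a minimal sketch of the splitting order being discussed, assuming trials cut into overlapping sliding windows (the helper `make_windows`, the window/step sizes, and the 80/20 split below are illustrative, not the repository's actual code): whole trials are split into train/test first, and windows are generated afterwards, so overlapping windows never straddle the split and leak information.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def make_windows(trial, window=128, step=64):
    """Cut one trial (channels x samples) into overlapping, flattened windows."""
    return [trial[:, s:s + window].ravel()
            for s in range(0, trial.shape[1] - window + 1, step)]

def windows_after_split(trials, labels, **rf_kwargs):
    """trials: list of (channels x samples) arrays; labels: one label per trial."""
    # Split whole trials first so overlapping windows never cross the split.
    tr_trials, te_trials, tr_y, te_y = train_test_split(
        trials, labels, test_size=0.2, stratify=labels, random_state=0)

    # Only now expand each trial into (possibly overlapping) windows.
    X_tr = np.array([w for t in tr_trials for w in make_windows(t)])
    y_tr = np.array([y for t, y in zip(tr_trials, tr_y) for _ in make_windows(t)])
    X_te = np.array([w for t in te_trials for w in make_windows(t)])
    y_te = np.array([y for t, y in zip(te_trials, te_y) for _ in make_windows(t)])

    clf = RandomForestClassifier(**rf_kwargs).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)
```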

Hi @meet19, sorry I did not get back to you earlier. Those files belong to the DEAP dataset, which is not a public dataset. You are supposed to request a license...

This problem is elaborated on in issue https://github.com/AntixK/PyTorch-VAE/issues/34.

> I also found this change very suspicious. In the original paper, Eq. 14, we have: ![Capture](https://user-images.githubusercontent.com/22267548/158332039-93bacb26-4311-41e4-98f5-25e0656e73f4.PNG) This obviously requires the grad w to be detached, or else the grad...

> Besides, as the original paper said, "Vanilla VAE separated out the KL divergence in the bound in order to achieve a simpler and lower-variance update. Unfortunately, no analogous trick...
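To make the detaching point above concrete, here is a minimal, illustrative PyTorch sketch of the stop-gradient pattern under discussion (`weight_term` and `loss_term` are placeholder names, not the repository's actual code): a weighting factor that itself depends on the parameters is wrapped in `.detach()` so that backpropagation flows only through the main term, which is what the quoted equation requires.

```python
import torch

phi = torch.tensor([0.5, -1.0], requires_grad=True)

def loss_term(phi):
    # Placeholder for the main objective term.
    return (phi ** 2).sum()

def weight_term(phi):
    # Placeholder for a parameter-dependent weighting factor.
    return torch.sigmoid(phi).mean()

# Detached version: gradients flow only through loss_term.
loss_detached = weight_term(phi).detach() * loss_term(phi)
grad_detached = torch.autograd.grad(loss_detached, phi)[0]

# Non-detached version: an extra gradient contribution enters via the weight.
loss_full = weight_term(phi) * loss_term(phi)
grad_full = torch.autograd.grad(loss_full, phi)[0]

print(grad_detached)
print(grad_full)  # differs, which is why the detach matters
```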

Kindly refer to PR: https://github.com/AntixK/PyTorch-VAE/pull/53

Some other samples using the converted model with diffusers:

* Samples from the LSUN cat model

```python
from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("xutongda/adm_lsun_cat_256x256").to("cuda")
image = generator().images[0]
image.save("generated_image.png")
```

![generated_image_cat](https://github.com/huggingface/diffusers/assets/22267548/c4c7ae8d-7666-4931-b1fd-8a1d721418af)

* ...

> Thanks very much for your work on this. > > I agree that ADM is still very much used by the academic community but probably doesn't have a lot...

> > The problem here is that all the official pre-trained ADM checkpoints released by OpenAI use legacy attention, so I really have no choice but to use them. > > Can we...

> See my comment here https://github.com/huggingface/diffusers/pull/6730/files#r1468858192

In fact, the part you refer to is about model conversion only, and I have already done it by calling the code from https://github.com/tongdaxu/diffusers/blob/main/scripts/convert_consistency_to_diffusers.py#L143....