AngelBottomless

Results: 137 comments by AngelBottomless

Additional context: I added a try/catch and searched for what is causing it. So the entangled chest is trying to be cast to an entangled tank. ` [15:39:17] [Server thread/INFO]: [STDOUT]:...

![image](https://user-images.githubusercontent.com/35677394/214077711-2560b7bc-673c-44a5-a0e8-86bb28854f30.png) For creating a hypernetwork, it could be done with `manual_seed` before weight init, or with `fork_rng` and the `seed` method. For dataset shuffling etc., the same thing can happen - it should...
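A minimal sketch of the `fork_rng` approach mentioned above, assuming a plain `nn.Linear` stands in for a hypernetwork layer; the sizes and init scheme are illustrative, not the webui's actual ones:

```python
import torch
import torch.nn as nn

# Hypothetical helper: seed only the weight-init step, so the global RNG
# state used elsewhere (sampling, dataset shuffling) is left untouched.
def build_layer_with_seed(in_features, out_features, seed=0):
    with torch.random.fork_rng():   # save and later restore the RNG state
        torch.manual_seed(seed)     # deterministic init inside the block
        layer = nn.Linear(in_features, out_features)
        nn.init.normal_(layer.weight, std=0.01)
        nn.init.zeros_(layer.bias)
    return layer

layer_a = build_layer_with_seed(768, 768, seed=42)
layer_b = build_layer_with_seed(768, 768, seed=42)
assert torch.equal(layer_a.weight, layer_b.weight)   # same seed, same init
```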

@Miraculix200 it should always be possible to add features (or just patch the original source code) via add-ons; it's monkey patching. Rather, it's a problem of having it centralized or separated as...
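For illustration only, a self-contained sketch of what monkey patching means here - replacing a function at runtime instead of editing the original file; the `webui` namespace and `train` function below are hypothetical stand-ins, not the real entry points:

```python
import types

# Self-contained sketch: "webui" stands in for the module being patched.
webui = types.SimpleNamespace()

def train(steps):
    return f"trained for {steps} steps"

webui.train = train                   # the original function

_original_train = webui.train

def patched_train(steps):
    print("extension hook: before training")   # behaviour added by the add-on
    return _original_train(steps)

webui.train = patched_train           # callers now get the patched version
print(webui.train(100))
```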

https://github.com/aria1th/Hypernetwork-MonkeyPatch-Extension In case you want it as an extension, well, here it is.

For usual cases, the default settings should just work. There are no known 'best' parameters for cycle length or warmup step size, but it's recommended to use a scheduler and generate...
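As a rough illustration of cycle length and restart-style schedules, here is a sketch using PyTorch's `CosineAnnealingWarmRestarts`; the optimizer, learning rate, and cycle values are placeholders, not recommended settings:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

params = [torch.nn.Parameter(torch.zeros(10))]       # placeholder parameters
optimizer = torch.optim.AdamW(params, lr=5e-3)
scheduler = CosineAnnealingWarmRestarts(
    optimizer,
    T_0=500,       # steps in the first cycle ("cycle length")
    T_mult=2,      # each restart doubles the cycle length
    eta_min=1e-6,  # learning-rate floor at the end of a cycle
)

for step in range(2000):
    # ... compute loss and call loss.backward() here ...
    optimizer.step()
    scheduler.step()
```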

Good to see a delightful pull request! But please keep your title clear, because otherwise we won't be able to tell what it is at first glance. **pin_memory** helps **when you...

https://pytorch.org/docs/stable/data.html By default, `pin_memory` has the same effect as `dataloader.to(device)` (only available in PyTorch, not in TensorFlow). The optional parameter `pin_memory_device` can specify where to pin, but normally you pin CUDA/device memory,...
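A minimal sketch of how `pin_memory` is typically used, assuming an ordinary `DataLoader` and a CUDA device if available; the dataset here is random placeholder data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data; pin_memory keeps batches in page-locked host memory so
# the host-to-GPU copy can run asynchronously (non_blocking=True).
dataset = TensorDataset(torch.randn(256, 3, 64, 64), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass would go here ...
```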

Your code is actually not using the VAE - well, I saw that you didn't do it... Also, the [code reference](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4680/files/b625a5a13c5bd83ea7e010cb8a431996381b6de0..5d895ba2f1ca4aa3d70a4abceb377b4e16e37cd8#diff-d3503031ef91fb35651a650f994dd8c94d405fe8e690c41817b1d095d66b1c69L313) came from [here](https://huggingface.co/spaces/multimodalart/latentdiffusion/blame/aaee24fcc22ce855366eee53ffe5ada06c0c49ce/latent-diffusion/ldm/modules/distributions/distributions.py). Actually, for a proper hacky way, you can just do...
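For context only, here is a generic reparameterization-trick sample from a diagonal Gaussian, similar in spirit to the referenced `DiagonalGaussianDistribution` code; it is not the PR's actual change, and the shapes and seeding are illustrative:

```python
import torch

# Reparameterization-trick sample from a diagonal Gaussian; shapes and the
# seeded generator are only for illustration.
def sample_latent(mean, logvar, generator=None):
    std = torch.exp(0.5 * logvar)
    noise = torch.randn(mean.shape, generator=generator, device=mean.device)
    return mean + std * noise

mean = torch.zeros(1, 4, 64, 64)
logvar = torch.zeros(1, 4, 64, 64)
gen = torch.Generator().manual_seed(0)
latent = sample_latent(mean, logvar, generator=gen)
print(latent.shape)   # torch.Size([1, 4, 64, 64])
```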

https://github.com/aria1th/Hypernetwork-MonkeyPatch-Extension If someone is interested in it, please see the extension and use the 'train_gamma' tab under the Train tab.

@JoanaMarieL Thank you for the reply! Sure, the feature would be great. We can think about some asynchronous inference while training is running in the background. (Also, inference can go slow, so...
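A very rough sketch of the idea of running inference asynchronously alongside a training loop, using a worker thread and a queue; everything here (function names, timings) is hypothetical and not the webui's implementation:

```python
import queue
import threading
import time

# Worker thread handles preview requests while the main loop keeps training.
preview_requests = queue.Queue()

def inference_worker():
    while True:
        prompt = preview_requests.get()
        if prompt is None:          # sentinel to shut the worker down
            break
        time.sleep(1.0)             # stand-in for a (slow) preview generation
        print(f"preview done for: {prompt}")

worker = threading.Thread(target=inference_worker, daemon=True)
worker.start()

for step in range(100):
    # ... one training step would go here ...
    if step % 50 == 0:
        preview_requests.put(f"preview at step {step}")

preview_requests.put(None)
worker.join()
```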