City


Hmm, looks like the repo with the separate VAE was (re)moved, and the one you link to is in the reference format (which doesn't have any conversion logic in place...

Thanks, updated the link in the readme.

So I did a full re-conversion of the 32B model from the HF safetensors weights locally with the fixes merged, and the metadata during load looks correct:
```
llama_model_loader: -...
```
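
For anyone wanting to double-check a converted file the same way, something like the sketch below should dump the metadata keys and tensor info before handing it to the loader (assuming the `gguf` Python package that ships with llama.cpp; the file name is just a placeholder):

```py
from gguf import GGUFReader  # gguf-py, ships with llama.cpp

# Placeholder path - point it at the re-converted GGUF file.
reader = GGUFReader("model-32b-q8_0.gguf")

# Print every metadata key the loader will see.
for key in reader.fields:
    print(key)

# Quick sanity check on the tensors as well.
print(len(reader.tensors), "tensors")
for tensor in reader.tensors[:5]:
    print(tensor.name, tensor.shape)
```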

Unsure if this would work on AMD/Vulkan, but I just found out by accident that setting the physical and logical batch size super low seemingly fixes it on my Volta...
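
For anyone wanting to try the same workaround: assuming this refers to llama.cpp's batch flags, "logical" and "physical" map to `-b`/`--batch-size` and `-ub`/`--ubatch-size` respectively, so setting both very low would look roughly like this (binary name, model path and values are just placeholders):

```
./llama-cli -m model-32b-q8_0.gguf -b 8 -ub 8
```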

I don't really have a working version of this repo locally, but I think it should be simple enough for me to get it working again lol. I'll preprocess the...

Well, it's not great. The current network really only seems decent for upscaling, probably because the `nn.Upsample` is right at the start, which in this case acts as a downscale,...
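
For context, a stripped-down sketch of the kind of layout being described (not the actual repo code; channel counts and layer widths are made up) - with a fractional `scale_factor`, the leading `nn.Upsample` shrinks the latent before any of the convolutions see it:

```py
import torch.nn as nn

class LatentResizer(nn.Module):
    def __init__(self, channels=4, scale_factor=0.5):
        super().__init__()
        # With scale_factor < 1 this "upsample" is effectively a downscale.
        self.resize = nn.Upsample(scale_factor=scale_factor, mode="nearest")
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(self.resize(x))
```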

> Could you share this model so we can evaluate it?

Sure, uploaded it [here](https://huggingface.co/city96/SD-Latent-Upscaler/blob/testing/latent-upscaler-v2.0rc1_SDxl-x0.5.safetensors) - this is the one that matches the original arch mentioned above so it should be...
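
If it helps with the evaluation, the file should open with plain safetensors for a quick look at the weights (filename taken from the link above; adjust the path as needed):

```py
from safetensors.torch import load_file

# Load the uploaded checkpoint and list the tensor names/shapes.
state_dict = load_file("latent-upscaler-v2.0rc1_SDxl-x0.5.safetensors")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape), tensor.dtype)
```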

What does launching without the `--cuda-device` CLI flag or `CUDA_VISIBLE_DEVICES` set do? Also, what do you get if you just run this in a standalone console:
```py
import torch
print(torch.cuda.device_count())
```
...
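
A slightly fuller version of that standalone check, in case the count alone isn't informative (same torch calls, nothing comfy-specific):

```py
import torch

# Basic CUDA visibility check: availability, device count, and device names.
print(torch.cuda.is_available())
print(torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```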

I think the problem might be that the `--cuda-device` flag in comfy just sets `CUDA_VISIBLE_DEVICES` internally [here](https://github.com/comfyanonymous/ComfyUI/blob/57f330caf91af37dda67c4202bb27cdebb7161d8/main.py#L113). You could try editing it to use `torch.set_default_device` instead, I guess. Something like...
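
The original snippet is cut off above, but the rough shape of such an edit would be something like the sketch below (hypothetical and untested; the surrounding `main.py` code is paraphrased, only `torch.set_default_device` itself is a real API):

```py
import torch

# Hypothetical replacement for the CUDA_VISIBLE_DEVICES assignment in main.py:
if args.cuda_device is not None:
    torch.set_default_device(f"cuda:{args.cuda_device}")
```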

I did indeed try it originally, but the model itself seems too limited to reconstruct the Flux latent properly. The SD1 and, to some extent, SDXL latents are fairly low...