Always offload VRAM
My graphics card is an RTX 4060 Ti 16 GB. How do I solve this problem?
To create a public link, set share=True in launch().
Total VRAM 16380 MB, total RAM 16235 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
E:\Fooocus_win64>pause
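For context, the log reports total VRAM (16380 MB) slightly above total RAM (16235 MB). A plausible explanation for the "Always offload VRAM" line is a heuristic that offloads models from the GPU whenever system RAM does not clearly exceed VRAM, so that weights parked in RAM never exhaust it. The sketch below only illustrates that kind of check; the function names (should_always_offload, get_total_vram_mb, get_total_ram_mb) and the exact comparison are assumptions for illustration, not Fooocus's actual implementation.

```python
import psutil
import torch


def get_total_vram_mb(device_index: int = 0) -> float:
    # Total memory of the CUDA device, in megabytes.
    props = torch.cuda.get_device_properties(device_index)
    return props.total_memory / (1024 * 1024)


def get_total_ram_mb() -> float:
    # Total system RAM, in megabytes.
    return psutil.virtual_memory().total / (1024 * 1024)


def should_always_offload(force_offload: bool = False) -> bool:
    # Hypothetical heuristic: offload models from VRAM after each use
    # when RAM is not larger than VRAM, since cached weights would then
    # compete with the rest of the system for memory.
    if force_offload:
        return True
    return get_total_ram_mb() <= get_total_vram_mb()


if __name__ == "__main__":
    # With 16380 MB VRAM and 16235 MB RAM, this check would fire.
    if should_always_offload():
        print("Always offload VRAM")
```

If that reading is right, the message is informational rather than an error: generation should still work, just with models moved out of VRAM between steps, and adding more system RAM (or explicitly selecting a higher-VRAM mode, if the launcher exposes one) would be the way to avoid it.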