Magistr

11 comments of Magistr

"--ckpt path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded" It's already there, check help or shared.py module

To clarify, trt = accelerate in voltaML; as far as I can see, you are compiling models to TensorRT format. Yes, correct, the outputs are the same for SD 1.5, 2.1, and other merges based on SD 1.5.
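For context, a generic illustration of what compiling to a TensorRT engine looks like; this is not voltaML's actual pipeline, and the file names are placeholders:

```sh
# build a TensorRT engine from an ONNX export of the model
# (trtexec ships with TensorRT; --fp16 enables half-precision kernels)
trtexec --onnx=unet.onnx --saveEngine=unet.plan --fp16
```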

Yep, it takes time to compile; you can set your GPU arch in order to compile only the necessary bits.
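As a hedged sketch: if the Dockerfile exposes the target arch as a build argument (TORCH_CUDA_ARCH_LIST is the common PyTorch convention, not necessarily this repo's actual ARG name), the build could be limited to a single architecture like this:

```sh
# hypothetical example: compile kernels only for compute capability 8.6 (RTX 30xx)
docker build -t glados --build-arg TORCH_CUDA_ARCH_LIST="8.6" .
```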

To build the container:
1. run `git submodule update --init --recursive`
2. put your models in the `models` dir
3. run `docker build -t glados .`
4. wait; it will take time to compile the CUDA kernels for...
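Once the image is built, something along these lines should start it with GPU access (a sketch assuming the NVIDIA Container Toolkit is installed; any volume or device mappings the project needs are not shown):

```sh
# run the freshly built image interactively with all GPUs visible
docker run --gpus all --rm -it glados
```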

@bitbyteboom looks like the fix is to change `onnxruntime` to `onnxruntime-gpu` in requirements.txt
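To sanity-check that swap, a small snippet (assuming onnxruntime-gpu installed cleanly) that lists the available execution providers; CUDAExecutionProvider should show up:

```python
import onnxruntime as ort

# with onnxruntime-gpu installed, CUDAExecutionProvider should be in this list;
# with the CPU-only onnxruntime package it will not be
print(ort.get_available_providers())
```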

@dnhkng A possible solution is to fork requirements.txt and make one version for Docker and one for Mac, and leave the default one for Linux, i.e. requirements.docker.txt and requirements.mac.txt
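As a rough sketch of how the Docker variant would be consumed (the Dockerfile lines are placeholders; only the split file name comes from the suggestion above):

```dockerfile
# install the Docker-specific dependency set instead of the default requirements.txt
COPY requirements.docker.txt .
RUN pip install --no-cache-dir -r requirements.docker.txt
```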

Any other requests in order to make this mergeable? Or is it now not needed due to the Windows install script?

> can `git submodule update --init --recursive` go into the docker? Better not to; it will be hard to cache.

> Just tried the Dockerfile build, and although it works, the TTS inference is super slow. `2024-05-13 09:15:26.774802449 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 28 Memcpy nodes are added to the graph torch_jit...`