kunibald

Results: 17 comments of kunibald

The CUDA-supported Docker image works like a charm and is fairly quick, but then 1 out of 10 machines you deploy to crashes with this 'illegal instruction' error. The issue also...
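
An educated guess, not confirmed from the thread: an 'illegal instruction' crash on only some hosts usually means the image was built with SIMD extensions (AVX2/AVX512) that the failing CPU lacks. A minimal sketch, assuming a Linux host, to compare CPU flags across your machines:

```py
# Minimal sketch (assuming a Linux host): check whether the CPU advertises
# the SIMD extensions a prebuilt image is typically compiled with.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for ext in ("avx", "avx2", "f16c", "fma", "avx512f"):
    print(f"{ext}: {'yes' if ext in flags else 'NO'}")
```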

> I have a similar issue in docker on some machines. I'm using `local/llama.cpp:full-cuda`
>
> After an `strace`, it turned out `/server` couldn't find `libcublas.so.11`.
>
> However I...
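
For anyone chasing the same symptom, a minimal sketch (assuming Python is available inside the container) that asks the dynamic linker for the same library the server binary needs:

```py
import ctypes

# Try to dlopen the cuBLAS library the server binary links against;
# an OSError here reproduces the "couldn't find libcublas.so.11" case.
try:
    ctypes.CDLL("libcublas.so.11")
    print("libcublas.so.11 resolved by the dynamic linker")
except OSError as err:
    print(f"libcublas.so.11 not loadable: {err}")
```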

> "I didn't find the script train_nsf_sim_cache_sid_load_pretrain. Where can I find it? Looking forward to your reply."Thank you! it was renamed to `infer/modules/train/train.py` https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/commit/b3d7075ba414823045b0b9b9801dfaa1a80638a0

Caching the result of `packages()` in `WAS_Node_Suite.py` can save some time on startup (a few seconds on my setup). A sketch; the body after the `if` is my reconstruction of the original `pip freeze` call:

```py
# Freeze PIP modules once and reuse the cached result on later calls
_packages_cached = None

def packages(versions=False):
    global _packages_cached
    if _packages_cached is None:
        import subprocess, sys
        _packages_cached = subprocess.check_output(
            [sys.executable, '-m', 'pip', 'freeze']).decode().splitlines()
    return _packages_cached if versions else [p.split('==')[0] for p in _packages_cached]
```

I have to say that I have close to no experience with this kind of math. Maybe these offer some inspiration (I searched for Heun's method on GitHub): https://github.com/matplotlib/matplotlib/blob/9618fc6322745834dd098cadecf8e05a0917d498/lib/matplotlib/streamplot.py#L514C17-L514C19 https://github.com/openai/shap-e/blob/50131012ee11c9d2617f3886c10f000d3c7a3b43/shap_e/diffusion/k_diffusion.py#L273
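
For context, a minimal sketch of the classic Heun step (the predictor-corrector / explicit trapezoidal rule that both linked implementations build on); the example ODE is mine:

```py
def heun_step(f, t, y, h):
    # Predict with an Euler step, then correct by averaging the
    # slopes at the start and at the predicted end of the interval.
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

# Example: integrate y' = -y from y(0) = 1 over [0, 1]
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = heun_step(lambda t, y: -y, t, y, h)
    t += h
print(y)  # ~0.3685, close to exp(-1) ~ 0.3679
```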

Hi, nice work! You might want to open a merge request for it into a still-maintained fork of coqui-ai: https://github.com/idiap/coqui-ai-TTS I'm not involved with it, just an...

I think adding Wan2.1 support will have the biggest impact of all the other pointers: Wan > rest.

> I have implemented AnimateDiff-Evolved contexts to behave as work units.

You mean this is an example of a working work unit? https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/blob/main/animatediff/context.py Also, could you provide a hint on...
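
For readers unfamiliar with that file, my reading of the idea (a hypothetical sketch, not the actual API of `context.py`): the frame range is split into overlapping windows, each of which can be scheduled as one work unit:

```py
# Hypothetical sketch of the sliding-context idea: split a frame range
# into overlapping windows that can each be processed independently.
def uniform_windows(num_frames, context_length=16, overlap=4):
    stride = context_length - overlap
    windows, start = [], 0
    while start < num_frames:
        end = min(start + context_length, num_frames)
        windows.append(list(range(start, end)))
        if end == num_frames:
            break
        start += stride
    return windows

print(uniform_windows(40))  # frames 0-15, 12-27, 24-39
```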

@comfyanonymous Hello senior, can we expect this to be merged at some point? If so, is there a rough ETA? Thank you for your time!

CUDA 12.4, torch 2.6, Python 3.11, SageAttention 2. A simple Wan workflow with 2 GPUs runs into this when SageAttention is enabled; 1 GPU is fine, so 2 GPUs with default attention is...