rabidcopy

Results: 91 comments of rabidcopy

Don't think this issue belongs here. AUTOMATIC1111 does not maintain the DB extension, and this issue was filed before new commits landed after a 2-3 day lapse, so...

This is a bit late, but the problem lies with the user not having all the gst-plugins installed and the AppImage not containing them either, nor them being a...

https://github.com/ggerganov/llama.cpp/pull/1004 Not merged yet and still a WIP. You will need to use this PR to load models quantized to 2-bit.

Editing line 150 of sd1_clip.py to `with precision_scope(model_management.get_autocast_device("cuda"), torch.float32):` seems to resolve the autocast issue when running with --normalvram and using a checkpoint large enough to trigger lowvram...
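For context, a minimal sketch of what that one-line change does, assuming `precision_scope` wraps `torch.autocast` (the tiny model below is just a stand-in, not ComfyUI code): forcing the dtype to `torch.float32` keeps the wrapped block from being cast down to fp16.

```python
import torch

# stand-in module; in ComfyUI this would be the CLIP text encoder
model = torch.nn.Linear(8, 8).cuda()
x = torch.randn(1, 8, device="cuda")

# with a float16/bfloat16 dtype this block would run in mixed precision;
# with torch.float32 the op is not cast down, so the encode stays in fp32
with torch.autocast(device_type="cuda", dtype=torch.float32):
    y = model(x)

print(y.dtype)  # torch.float32
```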

Definitely interested in this. Interesting that they specifically highlight wanting llama.cpp/ggml support.

If it really is GPT NeoX, this [repo](https://github.com/NolanoOrg/cformers) has conversion, quantization, and support for basic inference for GPT NeoX and other model formats. https://github.com/NolanoOrg/cformers/blob/master/cformers/cpp/converters/convert_gptneox_to_ggml.py https://github.com/NolanoOrg/cformers/blob/master/cformers/cpp/quantize_gptneox.cpp
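For anyone curious what such converters do, a rough sketch of the usual first step: load the Hugging Face checkpoint and walk its state dict before serializing the tensors to ggml. The `EleutherAI/pythia-160m` checkpoint here is only a small GPT-NeoX-family stand-in, not something the linked repo prescribes.

```python
from transformers import AutoModelForCausalLM

# any GPT-NeoX-family checkpoint works; pythia-160m is just small enough to test with
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")

# a ggml converter iterates the state dict and writes each tensor
# (name, shape, dtype, data) into the ggml binary, often casting weights to fp16
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape), tensor.dtype)
```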

```
!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\Users\user\Desktop\comfyui\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\user\Desktop\comfyui\execution.py", line 81, in get_output_data
    return_values =...
```

> @rabidcopy Do you mind sharing your workflow and comfyui startup parameters?
>
> I updated a version to ensure that the indices in sample_tcd are on the cpu.

Thanks...
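The "indices on the cpu" fix is the usual remedy for a device-mismatch error when a GPU-side index tensor is used to index a CPU-side schedule. A generic sketch, with placeholder names rather than the extension's actual code:

```python
import torch

sigmas = torch.linspace(1.0, 0.0, 10)        # schedule kept on the CPU
idx = torch.tensor([3, 7], device="cuda")    # indices computed on the GPU

# indexing a CPU tensor with a CUDA index tensor raises a device-mismatch
# error, so the indices are moved to the CPU first
sigma = sigmas[idx.cpu()]
```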

> Infinite generation should now be supported. The current implementation works like this:
>
> * Keep generating until the context `n_ctx` (i.e. 2048) becomes full
> * When full,...
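The excerpt cuts off mid-list, but the general context-swap idea is easy to sketch: keep a fixed prefix of the prompt and carry over only the most recent tokens once the window fills. This is only an illustration under those assumptions, not the quoted implementation:

```python
def swap_context(tokens, n_ctx=2048, n_keep=48):
    """Shrink the token list once the context window is full so generation can continue."""
    if len(tokens) < n_ctx:
        return tokens                            # still room in the window
    recent = tokens[-(n_ctx - n_keep) // 2:]     # carry over roughly half of the recent tail
    return tokens[:n_keep] + recent              # prompt prefix + recent tail
```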