WladBlank
You may have to rewrite a couple of prompts for this. Try increasing the context length in your LLM to at least 17k. The request-handling code is faulty...
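For reference, here's a rough sketch of what bumping the context window looks like with a llama-cpp-python-style loader; the model path and the exact `n_ctx` value are placeholders, not from this thread:

```python
from llama_cpp import Llama

# Load the model with a context window of at least 17k tokens.
# n_ctx is llama-cpp-python's context-length parameter; the model
# path below is a placeholder, not a specific recommendation.
llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",
    n_ctx=17408,  # >= 17k, as suggested above
)

output = llm("Summarize the following text: ...", max_tokens=256)
print(output["choices"][0]["text"])
```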
> Echoing the sentiment from #2644.
>
> The proposal to use "denylist" and "allowlist" instead of "blacklist" and "whitelist" was made to encourage more inclusive language in the codebase....
This issue persists even without using LoRA when generating img2img. When I start the SD WebUI it loads sd_XL_1.0, for example, but it keeps throwing `NansException: A tensor with all NaNs...`
I have a similar problem: I run the text-gen-webui API, and it just tells me that OpenAI doesn't have Mistral-7B as a model. When I add text-gen-webui as a provider...
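In case it helps anyone hitting the same provider error, here is a rough sketch of pointing the OpenAI Python client at a locally running text-generation-webui instance instead of OpenAI's servers; the port is the webui's default for its OpenAI-compatible API, and the model name is whatever you have loaded:

```python
from openai import OpenAI

# Point the client at the local text-generation-webui endpoint instead
# of api.openai.com. Port 5000 is the webui default; the api_key is
# ignored by the local server but the client requires a value.
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Mistral-7B",  # whichever model the webui has loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```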
I am not sure if this issue is related, but since this exact update my models have started returning gibberish after a certain number of tokens, which did not happen before...
@charltonh Probably another case of coding and reviewing without actually testing or using the product the devs work on. Could be avoided with more unit tests, tbh, but no one likes...
I bet this just hasn't been a priority yet. It may get added; I can try to write that change, but I will only have time in 5-6 days...
I am currently working on a similar solution. There are very small LLM models that can run locally and handle very simple tasks (see the sketch below). Not sure if they have any use...
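As a rough illustration of the kind of small local model I mean, here's a sketch using Hugging Face transformers; the model choice is just an example, not the one I'm actually using:

```python
from transformers import pipeline

# A small instruction-tuned model that runs comfortably on CPU.
# TinyLlama is only an illustrative pick for a "very small" local LLM.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

# A very simple task: label the sentiment of a short string.
result = generator(
    "Label the sentiment of this review as positive or negative: "
    "'The update broke everything.'\nLabel:",
    max_new_tokens=5,
)
print(result[0]["generated_text"])
```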
> ### Version
> VisualStudio Code extension
>
> ### Operating System
> Windows 10
>
> ### What happened?
> I found a potential fix for the context length...
I also added this here; it works like a charm to avoid gibberish: `data["truncation_length"] = MAX_GPT_MODEL_TOKENS * 2`. Although it may sometimes make the output less accurate than GPT-4...
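For context, a sketch of where that line sits in a text-generation-webui-style API request. `MAX_GPT_MODEL_TOKENS` here is a placeholder for whatever token budget the calling code already defines, and the endpoint/port are the webui's legacy API defaults, which may differ per setup:

```python
import requests

MAX_GPT_MODEL_TOKENS = 8192  # placeholder; use the caller's existing constant

# Build the completion request payload for the local API.
data = {
    "prompt": "Once upon a time",
    "max_new_tokens": 200,
    # Cap the context so the model stops producing gibberish
    # once the conversation grows past its window.
    "truncation_length": MAX_GPT_MODEL_TOKENS * 2,
}

response = requests.post("http://127.0.0.1:5000/api/v1/generate", json=data)
print(response.json())
```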