Badis
@cmp-nct It's OK, I found how to make it work. 1 - Download this: https://github.com/xianyi/OpenBLAS/releases/tag/v0.3.22 2 - Put the files in a folder (I'll call it OpenBlasFolder) 3 - On...
@cmp-nct If I had to guess, I don't think adding the Intel BLAS lib will change anything; I'm pretty sure it's similar to the GitHub BLAS I found. So same,...
@thomasantony Will llama.cpp be placed in the "repositories" folder, similar to "GPTQ-for-LLaMa"? If so, that's great, as updating the web UI will also result in an update of llama.cpp...
I think it's working now: when I use IpAdapters (in this example I go for instantID, so I get 2 IpAdapters) and I generate images over and over with...
Yeah, same. I would love to have the ability to manipulate the seed for tests, especially for the LoRA vs. non-LoRA one. https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1475296375
Based on https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1478008867 and https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1478064078, we know how to add the seed to the repository; I think someone should make a PR to implement it on the front-end. 1) Go...
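The core idea in those linked comments (fix the RNG seed before generation so runs can be reproduced) can be sketched like this; `generate_with_seed` and the plain `random` RNG are stand-ins for illustration, not the web UI's actual code:

```python
import random

def generate_with_seed(seed, n=5):
    """Seed an RNG before sampling so the run is reproducible.
    In the web UI this role would be played by something like
    torch.manual_seed(seed) before text generation; here plain
    `random` stands in for the sampler."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

# Same seed -> identical "generation", which is exactly what makes
# side-by-side comparisons (e.g. LoRA vs. non-LoRA) meaningful.
run_a = generate_with_seed(42)
run_b = generate_with_seed(42)
```

A front-end seed field would simply pass this value through to the generation call instead of letting the backend pick a random one.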
> PR is now merged Great job! The seed is an important parameter, and I'm happy I can manipulate it now 😄
Hey! I got the LoRA working in 4 bits: python server.py --model llama-7b --gptq-bits 4 --cai-chat I changed lora.py from this package: C:\Users\Utilisateur\anaconda3\envs\textgen\lib\site-packages\peft\tuners\lora.py Here's the modified version (I don't...
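For context, the low-rank update a LoRA layer applies to a weight matrix can be sketched as follows. This is a toy illustration of the general formula W_eff = W + (alpha / r) * B @ A, not the actual code from peft's lora.py, and the tiny matrices are made-up values:

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for this small example."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r, alpha = 2, 1, 2.0      # model dim, LoRA rank, LoRA alpha
W = [[1.0, 0.0],
     [0.0, 1.0]]             # frozen base weight (d x d)
B = [[1.0],
     [2.0]]                  # trained LoRA matrix (d x r)
A = [[0.5, 0.5]]             # trained LoRA matrix (r x d)

# Effective weight: base plus scaled low-rank delta.
scale = alpha / r
delta = matmul(B, A)
W_eff = [[W[i][j] + scale * delta[i][j] for j in range(d)] for i in range(d)]
```

Because only A and B (rank r) are trained, the adapter is tiny compared to W, which is why it can be stacked on top of a 4-bit base model.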
> @BadisG I am not sure if this is really working. Here is a test Are you sure this is the right way to do it? Tbh, I'm not a specialist...
> LoRA 100% is supposed to make it deterministic: #419 > > If it is not, then the LoRA isn't working. The presence of a LoRA does not alter the deterministic...
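The point being argued can be sketched in a few lines: with a fixed seed the output should be deterministic both with and without the adapter, while the adapter itself changes *what* that deterministic output is. The `generate` function below is a made-up stand-in, not the web UI's code:

```python
import random

def generate(seed, lora=False):
    """Toy 'generation': sample from a seeded RNG, then optionally
    apply a LoRA-like fixed adjustment to the outputs."""
    rng = random.Random(seed)
    out = [rng.random() for _ in range(3)]
    if lora:
        # Stand-in for the adapter shifting the model's logits.
        out = [x + 0.1 for x in out]
    return out

# With a fixed seed, repeating a run gives identical results
# whether or not the LoRA is applied...
base_1, base_2 = generate(0), generate(0)
lora_1, lora_2 = generate(0, lora=True), generate(0, lora=True)
# ...but the LoRA run differs from the base run, which is how you
# can tell the adapter is actually loaded.
```

So if two fixed-seed runs with the LoRA give identical outputs that also match the base model, the adapter likely isn't being applied at all.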