RandoInternetPreson
Is there something wrong with the img2img API? I can get the txt2img code working here: http://127.0.0.1:7860/docs#/default/text2imgapi_sdapi_v1_txt2img_post SethRobinson's post here was very useful: https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/3040#issuecomment-1284746589 I can convert the output Base64...
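In case it helps anyone, here is roughly what I do to call the txt2img endpoint and turn the Base64 it returns back into a PNG. The prompt, step count, and output filename are just placeholders, so adjust them to whatever you're generating:

```python
import base64
import io

import requests
from PIL import Image

# Minimal txt2img request; sampler, size, cfg scale, etc. go in this payload too.
payload = {"prompt": "a photo of a cat", "steps": 20}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The generated image(s) come back as Base64 strings under "images".
b64_image = resp.json()["images"][0]
# Some responses prefix the data with "data:image/png;base64,"; strip that if present.
if "," in b64_image:
    b64_image = b64_image.split(",", 1)[1]

image = Image.open(io.BytesIO(base64.b64decode(b64_image)))
image.save("txt2img_output.png")
```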
FYI, I was able to get this to work too!! Wow, it really works well at transcribing my voice, better than Google on my phone. A few things: I had...
https://huggingface.co/ozcur/alpaca-native-4bit This guy has quantized the alpaca-native model from chavinlo.
Thank you Bralwence, this kind soul on Reddit solved my issue and provided code. Update: there are two versions of the change. Mine is the first one and the...
Yeah, that's what I was talking about. I can induce that behavior. If I just load Oobabooga, don't load a character card, and keep the names of person 1 and...
You will get this error if you have a ramdisk or virtual disk running prior to loading your model. You can set up your ramdisk after you load the model, though.
Oh, maybe it's something different than the problem I was having 🤷♂️
https://github.com/oobabooga/text-generation-webui/issues/322#issuecomment-1472624995 I had the same problem; you need to change the capitalization of the phrase "LLaMAConfig"
Nope, it's easier than that: go to the model folder where you have your llama model, find tokenizer_config.json, and change LLaMATokenizer to LlamaTokenizer.
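If a concrete example helps, here's a tiny sketch of that edit as a script; the model folder path below is just an example, so point it at wherever your llama model actually lives:

```python
import json
from pathlib import Path

# Example path; replace with your own model folder.
cfg_path = Path("models/llama-7b/tokenizer_config.json")

cfg = json.loads(cfg_path.read_text())
if cfg.get("tokenizer_class") == "LLaMATokenizer":
    # Newer transformers expects the "LlamaTokenizer" capitalization.
    cfg["tokenizer_class"] = "LlamaTokenizer"
    cfg_path.write_text(json.dumps(cfg, indent=2))
```

Editing the file by hand works just as well; this only changes that one value.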
Yes!! I was able to get this to work, but I had to remove the "modules.py" file and "modules-1.0.0.dist-info" folder from my textgen environment for it to work. I'm running on...