Blake Wyatt

Results 222 comments of Blake Wyatt

@aviatiq in that case, there is a problem with Miniconda on your system. This isn't a problem on our end. Open an issue here http://github.com/conda/conda, explain to them that you...

@aviatiq the first part makes sense. It makes sense that using another computer would work, because the problem is definitely specific to your particular computer and install. If you want to...

@aviatiq the author says this ggml model can only be used with that person's personal fork https://huggingface.co/alpindale/pygmalion-6b-ggml/discussions/1#643bf31921686867003e92be You should find another model that is compatible with llama.cpp

This model will work, for example. I've tested it myself: https://huggingface.co/Drararara/llama-7b-ggml

@talvasconcelos you could try using the latest version of the one-click installer that I have here, which fixes some bugs, but this is the first time I'm seeing this error...

Anyone who is getting this error does not have enough memory to load the model and needs to choose a smaller model. This model https://huggingface.co/wcde/llama-7b-4bit-gr128 works on most GPUs. If...
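As a rough sketch of why a smaller or more heavily quantized model helps, the weight memory scales with parameter count times bits per weight. This is my own back-of-the-envelope estimate, not a formula from any of these projects, and the fixed overhead term is a guess covering things like the CUDA context and KV cache:

```python
def estimate_vram_gb(n_params_billion, bits_per_weight, overhead_gb=1.0):
    """Rough VRAM estimate for loading quantized model weights.

    overhead_gb is an assumed fudge factor for the CUDA context,
    activations, and KV cache; real usage varies by backend.
    """
    weight_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weight_gb + overhead_gb

# A 7B model quantized to 4 bits needs roughly 3.3 GB for weights
# alone, which is why it fits on most consumer GPUs.
print(round(estimate_vram_gb(7, 4), 1))
```

The same arithmetic explains why an fp16 13B model (~24 GB of weights) fails to load on cards that handle a 4-bit 7B model comfortably.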

@Primary-Ad2848 oh, your error is different. Windows has a limit on how long a path can be (i.e., how deeply you can nest folders), and it looks like...
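For anyone wanting to check whether they're hitting this, here is a small sketch that walks a directory and flags paths exceeding the classic Windows 260-character MAX_PATH limit (the helper name and the 260 constant as a hard cutoff are my assumptions; Windows can be configured to allow longer paths):

```python
import os

MAX_PATH = 260  # classic Windows path-length limit, unless long paths are enabled


def paths_over_limit(root):
    """Yield absolute file paths under root that exceed MAX_PATH."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.abspath(os.path.join(dirpath, name))
            if len(full) >= MAX_PATH:
                yield full
```

Moving the install closer to the drive root (e.g. `C:\webui`) is the usual workaround, since it shortens every path under it.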

@haindmade I don't think you have the path issue. That was a response to @Primary-Ad2848. In your case, with 16 GB of VRAM, you should definitely be able to run the...

@FieldMarshallVague I'm able to load 13b with 24GB of VRAM using the `--gpu-memory` flag. I append `--gpu-memory 21` and that fixes all of my memory allocation errors without reducing model...