Matt Williams

Results: 75 comments of Matt Williams

Hi @djcopley, that looks like a fun integration. Thanks for adding it to our list of community integrations and for being an amazing part of this incredible community.

Wow, that's a great find. Thanks for reporting it.

Hey there, this is really cool. I am not having the issue @pdevine mentioned. The one change I think is needed is to not autocomplete the models on pull. Since...

I don't know if this is a fish vs. bash issue. It works great in fish.

And llama2-uncensored:latest is on your machine, and the adapter file is in the same directory you are running that command from?

Are folks still experiencing this issue? We are now on 0.1.17 so wondering if it has been solved. If not, perhaps we can get a copy of an adapter that...

This model appears to be no longer supported by llama.cpp:

```
matt@matt:~/llama.cpp$ python3 convert.py ~/chavinlo_gpt4-x-alpaca
Loading model file /home/matt/chavinlo_gpt4-x-alpaca/pytorch_model-00001-of-00006.bin
Loading model file /home/matt/chavinlo_gpt4-x-alpaca/pytorch_model-00001-of-00006.bin
Loading model file /home/matt/chavinlo_gpt4-x-alpaca/pytorch_model-00002-of-00006.bin
Loading model file...
```

The model gets automatically unloaded after 5 minutes. It sounds like you want it unloaded sooner than that? Or are you saying it's taking longer than 5 minutes?
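If it helps, newer Ollama versions expose a `keep_alive` field on the generate API that overrides the 5-minute default. A minimal sketch (the model name and duration here are just placeholders, and the curl call is shown commented out since it needs a running server):

```shell
# keep_alive controls how long the model stays loaded after a request:
# a duration like "10m", "0" to unload immediately, or "-1" to keep it loaded.
payload='{"model": "llama2-uncensored", "prompt": "hi", "keep_alive": "10m"}'
# Would be sent to a local Ollama server like so:
# curl http://localhost:11434/api/generate -d "$payload"
echo "$payload"
```

Setting `keep_alive` per request avoids changing server-wide behavior.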

Hi @gerroon, thanks for submitting the issue. If the models are in the correct location, is it working as expected? Have you tried using the OLLAMA_MODELS environment variable? Take...
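For reference, a minimal sketch of using `OLLAMA_MODELS` to point Ollama at a custom model directory (the path below is hypothetical; the server must be restarted to pick the variable up):

```shell
# Point Ollama at a non-default model storage directory.
# /data/ollama/models is an example path, not a required location.
export OLLAMA_MODELS=/data/ollama/models
# The server reads the variable at startup:
# ollama serve
echo "$OLLAMA_MODELS"
```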

Did that solve your issue @gerroon ?