Mark Ward

Results: 39 comments by Mark Ward

https://github.com/microsoft/WSL/issues/8502#issuecomment-1153301518 If you get `Failed to send reload request: No such file or directory`, it means your udev service is not running. Check with `sudo service udev status` before reloading,...
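For reference, a minimal sketch of the recovery steps in a WSL shell, assuming the stock `udev` init script and `udevadm` are available:

```
# Check whether udev is running before reloading rules
sudo service udev status

# If it is stopped, start it, then reload the rules and re-trigger events
sudo service udev start
sudo udevadm control --reload-rules
sudo udevadm trigger
```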

I'm getting the same error with the new models.

Intel Core i9 14900K, DDR5-6400 2x48GB (96GB), Nvidia RTX 4070 Ti Super 16GB

```
Apr 18 18:57:54 quorra ollama[1170]: time=2024-04-18T18:57:54.713Z...
```

It has happened again; this never happened with previous versions of Ollama. I have a program that will fetch the list of available models, sort the list randomly, execute...
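Roughly how such a loop might look against Ollama's HTTP API, assuming `jq` and `shuf` are installed and the server is on the default port; the prompt is a placeholder:

```
# Fetch the installed models, shuffle the list, and run a short prompt on each
curl -s http://localhost:11434/api/tags | jq -r '.models[].name' | shuf |
while read -r model; do
  echo "== $model =="
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\":\"$model\",\"prompt\":\"Say hello.\",\"stream\":false}" |
    jq -r '.response'
done
```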

@taozhiyuai

> happen when v-ram is not enough to run on GPU+V-RAM, so ollama runs it on CPU+HD

Ollama v0.1.32 is running models that have always run on GPU sometimes...

On another run, the model `deepseek-coder:6.7b-instruct` is running on the CPU when it otherwise runs on GPU.

```
Apr 19 15:43:19 quorra ollama[1180]: time=2024-04-19T15:43:19.043Z level=INFO source=routes.go:97 msg="changing loaded model" Apr...
```

@dhiltgen I have attached a zip of my log file. [ollama_log.zip](https://github.com/ollama/ollama/files/15046721/ollama_log.zip)

When I run `OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 ollama serve 2>&1 | tee server.log` from the command line, it does not load the models that were downloaded while it was running as a service.
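One likely cause, assuming a standard Linux install: the service runs as the `ollama` user and stores models under `/usr/share/ollama/.ollama/models`, while a manual `ollama serve` looks in the current user's `~/.ollama/models`. A sketch of pointing a manual debug run at the service's directory:

```
# Stop the service first so the two instances do not fight over the port
sudo systemctl stop ollama

# Reuse the service's model directory (default path; adjust if yours differs)
OLLAMA_MODELS=/usr/share/ollama/.ollama/models \
OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 ollama serve 2>&1 | tee server.log
```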

As I am testing, no other process on my server would be using the GPU; it is only Ollama. I will observe the GPU processes and see if there is...
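One way to watch for the fallback, assuming an NVIDIA driver with `nvidia-smi` available:

```
# Log the compute processes and their GPU memory once per second
nvidia-smi --query-compute-apps=pid,process_name,used_memory \
  --format=csv -l 1 | tee gpu_procs.log
```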

During a test I noticed that the GPU process stopped: when Ollama started a new model, there were no Ollama GPU processes and the model loaded as all CPU. [server.log](https://github.com/ollama/ollama/files/15070605/server.log)

@dhiltgen, I could try out your PR. Is it pretty easy to get the source up and running?
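For anyone in the same spot, a sketch of how testing a PR build might go, assuming a Go toolchain and the build steps from the repo's developer docs at the time; the PR number is a placeholder:

```
git clone https://github.com/ollama/ollama.git
cd ollama

# Fetch the PR under test into a local branch (fill in the real number)
git fetch origin pull/<PR_NUMBER>/head:pr-test
git checkout pr-test

# Generate the llama.cpp bindings, then build and run
go generate ./...
go build .
./ollama serve
```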