It's just a warning; you can ignore it. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) would confirm it, but you are probably spilling from VRAM to system RAM.
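If you want to check programmatically rather than read logs, here is a minimal sketch that queries Ollama's `/api/ps` endpoint and compares each loaded model's total size against the portion resident in VRAM; it assumes a default local install listening on 127.0.0.1:11434.

```python
# Minimal sketch: ask the local Ollama server which models are loaded and
# how much of each is resident in VRAM. Assumes the default bind address.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/ps") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    size = model.get("size", 0)            # total bytes for the loaded model
    size_vram = model.get("size_vram", 0)  # bytes resident in VRAM
    if size_vram < size:
        print(f"{model['name']}: {size - size_vram} of {size} bytes spilled to system RAM")
    else:
        print(f"{model['name']}: fully in VRAM")
```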
Please post your [server log](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx).
It's difficult to debug a problem when the code is incomplete. Please post the full content of the test script. Note that 10.0.0.0/8 is private IP space.
How are you routing packets from your local machine to the server in 10.0.0.0/8 space?
Your server is in private IP space, so packets from the public internet cannot be routed to it. If the server is not on the same LAN...
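As a quick first check, a plain TCP connect tells you whether the Ollama port is reachable from your machine at all. A minimal sketch; the address below is a placeholder for your server's private IP:

```python
# Minimal reachability check: can this machine open a TCP connection to the
# Ollama port on the server? The address is a placeholder.
import socket

HOST = "10.4.0.10"  # placeholder: replace with your server's private IP
PORT = 11434        # Ollama's default port

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"TCP connect to {HOST}:{PORT} succeeded; routing and firewall look OK")
except OSError as exc:
    print(f"TCP connect to {HOST}:{PORT} failed: {exc}")
```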
```diff
--- 7286.py.orig	2024-11-05 02:30:01.765323596 +0100
+++ 7286.py	2024-11-05 02:30:12.158985343 +0100
@@ -1,7 +1,7 @@
 import os
 os.environ["USER_AGENT"] = "MyCustomUserAgent/1.0"
 os.environ['OLLAMA_API_KEY'] = 'none'
-os.environ['OLLAMA_BASE_URL'] = 'http://10.4.(my_server_ip):11434/'
+os.environ['OLLAMA_HOST'] = 'http://10.4.(my_server_ip):11434/'
 from...
```
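For reference, the `ollama` Python package also accepts the host explicitly instead of relying on the `OLLAMA_HOST` environment variable; a minimal sketch, with the address again a placeholder:

```python
# Minimal sketch using the ollama Python client with an explicit host
# instead of the OLLAMA_HOST environment variable. Placeholder address.
from ollama import Client

client = Client(host="http://10.4.0.10:11434")  # placeholder: your server's IP
print(client.list())  # lists the models available on the remote server
```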
Confirm that `pip install cryptography` fixes it for litellm-1.33.4.
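If it helps, a quick way to confirm is to check that both packages import cleanly after installing; a minimal sanity check, assuming the original failure was an import-time error:

```python
# Quick sanity check after `pip install cryptography`: both imports should
# succeed if the missing dependency was the problem.
import cryptography
print("cryptography", cryptography.__version__)

import litellm  # assumed to have failed before the fix; should import cleanly now
print("litellm imported OK")
```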
Needs support in llama.cpp first.
What did you do to update? Did you restart ollama?
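One way to verify that the running server actually picked up the update is to query its version endpoint; a minimal sketch, assuming a default local install:

```python
# Check the version the running server reports; if it still shows the old
# version, the server process was not restarted after the update.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/version") as resp:
    print(json.load(resp))  # e.g. {"version": "..."}
```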
Server logs will show what `OLLAMA_MODELS` is set to; if you post some logs we can check. If it's set to `F:\Users\danda` but the server is writing to `C:\Users\danda` then...
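As a quick local check, you can see which of the two locations actually contains model data; a minimal sketch, with both paths taken from this thread and the `.ollama\models` suffix assumed from the default Windows layout:

```python
# Check which candidate directory actually holds model data. The paths are
# from this thread; the ".ollama\models" suffix assumes the default layout.
from pathlib import Path

candidates = [
    Path(r"F:\Users\danda"),                         # the OLLAMA_MODELS value
    Path(r"C:\Users\danda") / ".ollama" / "models",  # assumed default location
]
for models in candidates:
    if models.exists():
        n_files = sum(1 for p in models.rglob("*") if p.is_file())
        print(f"{models}: exists, {n_files} files")
    else:
        print(f"{models}: not found")
```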