kha84

Results 59 comments of kha84

Yeah, I was thinking about that placement as well, thanks! Answering your "why" question: 1) having this crucial info hidden somewhere in logs doesn't seem to be very handy. In...

There are so many good pull requests for ollama that have been hanging for long weeks and even months (this one included). Looks like they have been stacking up for quite...

Is it just a one-off thing? Have you tried restarting it? To me it looks like you might have some network issues. Or the ollama "model registry" might have them....

By any chance, are you behind a proxy or VPN?

@wgong If your ollama is running as a systemd service, you'll need to inject those variables into the unit file. If you're starting it manually with "ollama serve", then yes - whatever...
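For the systemd case, a minimal sketch of injecting an environment variable via a unit override, assuming the service is installed as `ollama.service` (the variable value here is just an illustration):

```shell
# Open an override file for the service (creates
# /etc/systemd/system/ollama.service.d/override.conf):
sudo systemctl edit ollama.service

# In the editor, add a [Service] section with the variable, e.g.:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"

# Reload systemd and restart the service so it picks up the change:
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```

This way the variable survives restarts, unlike exporting it in the shell before a manual `ollama serve`.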

To be honest, I don't mind ollama keeping env variables as its main source of configuration - if that was the decision made. Just to have all of...

Just a few greps:
```
$ grep -r EnvironmentVar ./*
./cmd/cmd.go:type EnvironmentVar struct {
./cmd/cmd.go:func appendEnvDocs(cmd *cobra.Command, envs []EnvironmentVar) {
./cmd/cmd.go:	ollamaHostEnv := EnvironmentVar{"OLLAMA_HOST", "The host:port or base URL of...
```
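To illustrate the pattern the grep hints at, here is a hedged sketch of an `EnvironmentVar`-style struct used to document env variables; the field names and the `FormatEnvDoc` helper are my assumptions for illustration, not ollama's actual definitions:

```go
package main

import "fmt"

// EnvironmentVar pairs an env variable name with its human-readable
// description, so the docs can be generated from one source of truth.
// (Field names are assumed; the real struct in cmd/cmd.go may differ.)
type EnvironmentVar struct {
	Name        string
	Description string
}

// FormatEnvDoc renders one variable as a "NAME: description" line,
// the kind of text appendEnvDocs might attach to a command's help.
func FormatEnvDoc(e EnvironmentVar) string {
	return fmt.Sprintf("%s: %s", e.Name, e.Description)
}

func main() {
	host := EnvironmentVar{"OLLAMA_HOST", "The host:port or base URL of the ollama server"}
	fmt.Println(FormatEnvDoc(host))
}
```

Centralizing the variables in a slice of such structs is what would make a generated "environment variables" documentation page straightforward to keep in sync.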

Anyway, I'm happy to help and prepare a pull request for the documentation, if you don't mind.

Have you checked whether llama.cpp supports the integrated GPU in recent AMD APUs? I was wondering about such support about a year ago, but it wasn't there yet. If...

Well, it seems that llama.cpp does support that: https://github.com/ggerganov/llama.cpp/pull/4449