Michael Yang

Results 84 comments of Michael Yang

It's missing a build target for FreeBSD. See [gpu.go](https://github.com/ollama/ollama/blob/main/gpu/gpu.go) and [gpu_darwin.go](https://github.com/ollama/ollama/blob/main/gpu/gpu_darwin.go).

> read udp 172.25.107.139:59735->172.25.96.1:53

This suggests a DNS error (port 53). Is DNS set up correctly?
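Errors like this surface in Go as a `*net.DNSError` somewhere in the error chain. A minimal sketch of detecting that case with `errors.As` (the wrapped message and the registry hostname here are illustrative, not taken from Ollama's code):

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// isDNSError reports whether err (possibly wrapped) is a DNS resolution
// failure -- the kind of error behind "read udp ...->...:53" messages.
func isDNSError(err error) bool {
	var dnsErr *net.DNSError
	return errors.As(err, &dnsErr)
}

func main() {
	// Simulated failure: wrap a DNSError the way a dialer might.
	err := fmt.Errorf("pulling manifest: %w", &net.DNSError{
		Err:  "no such host",
		Name: "registry.ollama.ai",
	})
	fmt.Println(isDNSError(err)) // true
}
```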

Closing this since Ollama requires some form of write access in order to download and run models.

As the other commenters have already mentioned, `--verbose` is probably what you're looking for.

@slychief @jtoy can you confirm this is still an issue? I'm not able to reproduce it with the latest (v0.1.20) Ollama. Testing with 2 T4 GPUs, I get the following results:...

> I tried Ollama rm command, but it only deletes the file in the manifests folder which is KBs. I also tried to delete those files manually, but again those...

Can you attach the ollama server logs from when you load this model? Also, what platform are you using (system architecture, operating system, and accelerator/GPU, if any)?

Recent versions of Ollama accept Modelfile content in create requests, so you could do something like this:

```
curl -X POST http://127.0.0.1:11434/api/create -d '{ "name": "new-model", "modelfile": "FROM llama2\nPARAMETER...
```
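The same request can be built programmatically. A minimal Go sketch, assuming the `/api/create` endpoint and the `"name"`/`"modelfile"` fields shown in the truncated snippet above (the model name and Modelfile content are illustrative):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// createModel POSTs a Modelfile to the create endpoint. Field names follow
// the curl example above; this is a sketch, not Ollama's client library.
func createModel(baseURL, name, modelfile string) error {
	body, err := json.Marshal(map[string]string{
		"name":      name,
		"modelfile": modelfile,
	})
	if err != nil {
		return err
	}
	resp, err := http.Post(baseURL+"/api/create", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("create failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Print the request body that would be sent (sending it requires a
	// running Ollama server, so the POST itself is left to createModel).
	body, _ := json.Marshal(map[string]string{
		"name":      "new-model",
		"modelfile": "FROM llama2\nPARAMETER temperature 0.7",
	})
	fmt.Println(string(body))
}
```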

There are a few other places where os.Setenv is being used in tests:

```
server/envconfig/config_test.go|11 col 2| os.Setenv("OLLAMA_DEBUG", "")
server/envconfig/config_test.go|14 col 2| os.Setenv("OLLAMA_DEBUG", "false")
server/envconfig/config_test.go|17 col 2| os.Setenv("OLLAMA_DEBUG", "1")
server/sched_test.go|24...
```
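Bare `os.Setenv` calls leak state between tests; in test code, `t.Setenv` (Go 1.17+) restores the previous value automatically. A runnable sketch of the save-and-restore pattern that `t.Setenv` performs under the hood (the helper name `setenvTemp` is made up for illustration):

```go
package main

import (
	"fmt"
	"os"
)

// setenvTemp sets an environment variable and returns a restore function
// that puts the old value (or its absence) back -- the same bookkeeping
// testing.T's Setenv does for you inside a test.
func setenvTemp(key, value string) func() {
	old, existed := os.LookupEnv(key)
	os.Setenv(key, value)
	return func() {
		if existed {
			os.Setenv(key, old)
		} else {
			os.Unsetenv(key)
		}
	}
}

func main() {
	os.Unsetenv("OLLAMA_DEBUG") // start from a clean slate for the demo

	restore := setenvTemp("OLLAMA_DEBUG", "1")
	fmt.Println(os.Getenv("OLLAMA_DEBUG")) // 1

	restore()
	_, exists := os.LookupEnv("OLLAMA_DEBUG")
	fmt.Println(exists) // false: the variable is unset again
}
```

In actual test files the idiomatic fix is simply `t.Setenv("OLLAMA_DEBUG", "1")`, which registers the restore as a cleanup automatically.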

Hi, can you provide the server logs from when you run `ollama list`?