I built llama.cpp from the same commit (https://github.com/ggml-org/llama.cpp/commit/e1e8e099) that ollama is using, and used the GGUF file from the ollama model:

```console
$ for i in {1..5} ; do ./build/bin/llama-embedding...
```
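For anyone wanting to reproduce this, a minimal sketch of building that same commit (the CMake invocation is the standard llama.cpp build; only the repo and commit hash come from the link above):

```console
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git checkout e1e8e099
cmake -B build
cmake --build build --config Release
# llama-embedding then lands in build/bin/
```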
This appears fixed as of 0.7.0-rc1:

```console
$ for i in {1..5} ; do curl -s localhost:11434/api/embed -d '{"model":"hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf","input":["why is the sky blue?"]}' | jq -c '.embeddings[]|.[0:3] + ["..."] +...
```
```
ollama pull hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf
```
> A workaround would be to create a modelfile to rename it / give it an alias.

A Modelfile isn't required for that; `ollama cp` creates the alias directly:

```
ollama cp hf.co/nomic-ai/nomic-embed-text-v2-moe-gguf nomic-embed-text-v2
```
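Once copied, the alias should work anywhere the full name did, e.g. against the embed endpoint shown above (a sketch, assuming the default port):

```console
curl -s localhost:11434/api/embed -d '{"model":"nomic-embed-text-v2","input":["why is the sky blue?"]}'
```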
You need to pull it before you can create an alias for it.
Set `OLLAMA_LLM_LIBRARY=cpu` in the server environment.
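On a Linux systemd install, one way to do that is via a service override (a sketch; `ollama.service` is the default unit name from the installer, adjust if yours differs):

```console
sudo systemctl edit ollama.service
# in the override that opens, add:
#   [Service]
#   Environment="OLLAMA_LLM_LIBRARY=cpu"
sudo systemctl restart ollama
```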
What does the following return:

```
curl https://registry.ollama.ai/v2/library/gemma2/manifests/9b
```
https://github.com/ollama/ollama/issues/7820
Set `OLLAMA_HOST` and restart the app.
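For example, on Windows (a sketch; the value `0.0.0.0`, which exposes the server on all interfaces, is an assumption, not something stated above):

```text
setx OLLAMA_HOST 0.0.0.0
```

Then quit Ollama from the taskbar and start it again so the new variable is picked up.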
Open a CMD window, run the following, and post the output to this issue:

```text
set | findstr OLLAMA
```