Tried directly with `ollama` using `gemma`, same error. `debug-on-error` is set but not catching any errors.
```
#s(llm-ollama "http" "localhost" 11434 "gemma:latest" "nomic-embed-text:latest")
```
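For what it's worth, a quick way to rule out the Emacs side is to hit the Ollama HTTP API directly with the same host, port, and models as the struct above (a minimal sketch, assuming a stock Ollama install on the default port):

```
# Chat model: should return a JSON response rather than an error
curl http://localhost:11434/api/generate \
  -d '{"model": "gemma:latest", "prompt": "hello", "stream": false}'

# Embedding model: should return an "embedding" array
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text:latest", "prompt": "hello"}'
```

If both respond, the problem is more likely in the `llm-ollama` configuration than in the server itself.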
I have ollama installed locally, but I always want to use the remote. Even with `OPENAI_BASE_URL=http://x.x.x.x:11434/v1`, fabric only lists the local models, though it would call /models on...
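For reference, this is roughly what I'm testing with (x.x.x.x standing in for the remote host; assumes the remote exposes Ollama's OpenAI-compatible API):

```
# Environment fabric is launched with
export OPENAI_BASE_URL=http://x.x.x.x:11434/v1

# Check what the remote itself returns for the models listing
curl http://x.x.x.x:11434/v1/models
```

That at least shows whether /v1/models on the remote returns the expected list, independently of fabric.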
Just seen this:
```
cortex pull BAAI/bge-m3
✔ Dependencies loaded in 274ms
✔ API server is online
Downloading model...
✔ Model downloaded
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0% | ETA: 0s | 0/100
TypeError: terminated...
```
Missed the point about GGUF, but it seems to be an issue with some GGUF models as well:
```
cortex pull pervll/bge-reranker-v2-gemma-Q4_K_M-GGUF
✔ Dependencies loaded in 438ms
✔ API server is online...
```
I think qa pilot does something similar with either a GitHub repo or a website. I was never able to get it to work and haven't checked since. But especially for...
I was going to test. I see it's calling the remote for embeddings, but all calls return 403... Not sure what's happening, since the server's origin is set and other client...
BTW, localhost with page_assist's exact settings (unchanged) works.
I'm pretty sure I did, but anyway, it works now. Before setting origins, only the embeddings API was failing, BTW...
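For anyone else hitting the 403: the fix on my side was the server-side origins allow-list. A minimal sketch of the kind of thing that works (assuming plain `ollama serve` on the remote and that a wildcard origin is acceptable):

```
# On the machine running Ollama: listen on all interfaces and allow
# cross-origin clients (OLLAMA_ORIGINS is Ollama's CORS allow-list)
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS="*" ollama serve
```

With a stricter setup, `OLLAMA_ORIGINS` can list the specific client origins instead of `*`.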
Yes, I think more projects should pick fabric as a starting / default option. At the very least, it's better than nothing :)
Should it navigate there automatically?