Hansson0728
The model is unloaded after 5 minutes of inactivity when using the API; it would be nice to be able to prevent this.
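A possible workaround, assuming an Ollama version that supports the keep_alive request parameter (the model name here is just an example): keep_alive accepts a duration string such as "10m" or "24h", or -1 to keep the model loaded indefinitely.

```shell
# Keep the model resident instead of the 5-minute default unload.
# Assumes keep_alive is supported by your Ollama build; -1 means never unload.
curl http://localhost:11434/api/generate -d '{
  "model": "nomic-embed-text",
  "keep_alive": -1
}'
```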
Please, someone who knows Go: expose the internal llama.cpp encode endpoint through the Ollama API, so we can use the LLM's tokenizer to measure how much...
I don't even get a response when I curl /embeddings: curl -X POST http://localhost:11434/api/embeddings -d '{"model":"nomic-embed-text", "prompt": "hello"}' Nothing in the logs, no answer, no 404, nothing. I'm pretty sure...
Suggestion: when a request is sent to the show endpoint without a model name, the response should return the model information for the currently loaded model, with a status property stating...
As the title says, it would be nice to have that information so we can filter out embedding models if we want to allow model switching in a frontend.
How can I slow down or remove the animation on the lines? I still want the destination circle animation, but not the line animation.
Running Pet.remove() will remove sheep most of the time, but the "black" ones won't be removed.