Alok Saboo
Yeah, looks like llama3 does not work very well. Try other models.
This may be a moot point for many models (e.g., Gemini), since their context windows are large enough to accommodate it. Before implementing any of the above, we should let the...
I am seeing the same... I tried different local models (`qwen2.5` and `llama3.2`) with the default prompt. I just see the full prompt in the title.
We already have a [fromfilename](https://beets.readthedocs.io/en/v2.0.0/plugins/fromfilename.html) plugin. Does this add any additional features?
@suchintan apologies for commenting on a closed PR, but I'm wondering how we can use this with local (Ollama) models? Could you provide a simple example... thanks!
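For context, here's roughly what I have in mind — a minimal sketch assuming the tool accepts an OpenAI-compatible base URL, since Ollama exposes one at `http://localhost:11434/v1` (the model name and endpoint below are my local assumptions, not anything from this PR):

```python
import json

# Ollama's OpenAI-compatible endpoint on a default local install.
# The API key is ignored by Ollama but most OpenAI clients require one.
OLLAMA_BASE_URL = "http://localhost:11434/v1"
OLLAMA_API_KEY = "ollama"  # placeholder; any non-empty string works

def build_chat_request(model, prompt):
    """Build a standard OpenAI-style chat-completions payload.

    This is the same payload shape an OpenAI client would POST to
    {OLLAMA_BASE_URL}/chat/completions.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("llama3.2", "Suggest a short title for this note")
print(json.dumps(payload, indent=2))
```

So in principle, any client that lets me override the base URL and model name should be able to talk to a local Ollama instance — the question is whether this project exposes those two knobs.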
Thanks... will keep an eye on that. Hopefully, we can get it soon.
@suchintan I will take a look at this, but I think it is above my pay grade 😜 I would love to help you implement this (testing or docs). Given...
May be out of scope for this PR, but can we include support for local LLMs (e.g., Ollama)?
Any guidance on how to use this with Ollama? I did not find any documentation @zaidmukaddam
So it doesn't close #5356, right? We won't be able to set `num_ctx` using the OpenAI API call 😞
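To spell out why: Ollama's *native* API accepts `num_ctx` inside an `options` object, but its OpenAI-compatible `/v1` endpoint has no equivalent field, so a plain OpenAI chat call can't change the context window. A sketch of the native payload shape (model name is just an example from my setup):

```python
import json

def build_native_request(model, prompt, num_ctx):
    """Payload shape for Ollama's native POST /api/generate endpoint.

    The "options" object is where runtime parameters like num_ctx go;
    the OpenAI-compatible /v1/chat/completions request has no such field.
    """
    return {
        "model": model,
        "prompt": prompt,
        "options": {"num_ctx": num_ctx},
    }

req = build_native_request("qwen2.5", "hello", 8192)
print(json.dumps(req, indent=2))
```

The other workaround I'm aware of is baking the value into a Modelfile (`PARAMETER num_ctx 8192`) and creating a derived model, which then applies regardless of which API the caller uses.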