Kim Hallberg
Have you tried `convert-hf-to-gguf-update.py`[^1] instead of `convert.py` to see if that works?

[^1]: https://github.com/ollama/ollama/blob/main/docs/import.md#convert-the-model
The model already exists; it's under [CodeQwen](https://ollama.com/library/codeqwen:latest), not Qwen.
The ilsp team has already pushed the model to Ollama[^1]: https://ollama.com/ilsp/meltemi-instruct

[^1]: https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1-GGUF/discussions/2
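If it helps, here's a rough sketch of querying it through Ollama's local REST API (untested; assumes the server is running on the default port and the model has been pulled first):

```python
# Minimal sketch: query the Meltemi model through Ollama's local REST API.
# Assumes `ollama pull ilsp/meltemi-instruct` has been run and the Ollama
# server is listening on its default port (11434).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "ilsp/meltemi-instruct",
        "prompt": "Hello, who are you?",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```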
> In general is there a any link what is the best way to finetune a model and run it with ollama?

Unfortunately, I cannot help with that. I've never...
Should the model architecture be supported by llama.cpp, the backend Ollama uses, then you can follow the [import documentation](https://github.com/ollama/ollama/blob/main/docs/import.md) to get the model running in Ollama. You can push a...
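Roughly, the flow looks like this (untested sketch; the GGUF path and model name are placeholders, and it assumes the conversion step has already produced a GGUF file and the `ollama` CLI is installed):

```python
# Minimal sketch of the import flow from the docs above, driven from Python
# for convenience. Paths and the model name are placeholders.
import subprocess
from pathlib import Path

gguf_path = Path("./my-model.Q4_K_M.gguf")   # hypothetical converted weights
modelfile = Path("Modelfile")
modelfile.write_text(f"FROM {gguf_path}\n")  # simplest possible Modelfile

# Register the model locally, try it, then (optionally) push it to ollama.com.
subprocess.run(["ollama", "create", "my-model", "-f", str(modelfile)], check=True)
subprocess.run(["ollama", "run", "my-model", "Hello!"], check=True)
# subprocess.run(["ollama", "push", "<username>/my-model"], check=True)
```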
@stnguyen90 I see some older examples are missing from some languages. Is it OK to submit PRs for them as well? 🤔
@christyjacob4 conflict resolved. 🙂
What @javivelasco showcased as the "framework" is not something built into Micro. I'm guessing that's an internal package built at Vercel to help them quickly build out their API. Micro...
> Open to contributions 🙏

Uno-reverse huh? 👀 Guess I'm tinkering with this then, feel free to assign me. 👍
> it would be great if paraphrase-multilingual-MiniLM-L12-v2 will be supported

One user has already uploaded it to the registry: https://ollama.com/nextfire/paraphrase-multilingual-minilm. I haven't tested it, so I can't speak to its performance.
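For anyone who wants to try it, a rough sketch against Ollama's local REST API (untested with this particular model; assumes it has been pulled and the server is on the default port):

```python
# Minimal sketch: get sentence embeddings from that community upload through
# Ollama's local REST API. Assumes `ollama pull nextfire/paraphrase-multilingual-minilm`
# has been run and the server is listening on port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={
        "model": "nextfire/paraphrase-multilingual-minilm",
        "prompt": "Ollama makes it easy to run models locally.",
    },
    timeout=60,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding), embedding[:5])  # vector length and a few values
```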