Kim Hallberg
> Then should I post in ollama-js? But I only installed ollama (`npm install ollama`) in my project, not ollama-js.... `ollama/ollama-js` is the repository for the `ollama` npm package....
There's a PR for it - https://github.com/ollama/ollama/pull/2506, but it hasn't been updated since it was opened in February.
Ollama does support LoRA; add it as an adapter in a Modelfile. Read more about it here: https://github.com/ollama/ollama/blob/798b107f19ed832d33a6816f11363b42888aaed3/docs/modelfile.md#adapter
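As a minimal sketch, a Modelfile with an adapter looks something like the following (the base model and the adapter path here are placeholders; the adapter must have been trained against the same base model named in `FROM`):

```modelfile
# Base model the LoRA adapter was trained against (placeholder).
FROM llama2
# Path to the (Q)LoRA adapter file or directory (hypothetical path).
ADAPTER ./my-lora-adapter.bin
```

You'd then build it with something like `ollama create my-lora-model -f Modelfile`.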
The Ollama registry is: `registry.ollama.ai`. https://github.com/ollama/ollama/blob/c02db93243353855b983db2a1562a02b57e66db1/types/model/name.go#L38-L42
Maybe try the version the Ollama team released a few hours ago - https://ollama.com/library/llama3-chatqa, as part of the [0.1.35 release](https://github.com/ollama/ollama/releases/tag/v0.1.35).
> Can you even upload new models from huggingface to ollama? Yes. You need to log in or create an Ollama account on ollama.com, copy the Ollama public key to...
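A rough sketch of that workflow, assuming the default key location and using placeholder names for the account and model:

```shell
# Print the local Ollama public key (default location) so it can be
# pasted into the account settings on ollama.com.
cat ~/.ollama/id_ed25519.pub

# Create a model from a local Modelfile (which can reference a GGUF
# downloaded from Hugging Face), then push it under your namespace.
# <username> and my-model are placeholders.
ollama create <username>/my-model -f Modelfile
ollama push <username>/my-model
```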
> How can I upload multiple .gguf files with different quantization levels? You push a model with a tag just like you'd pull a different model with a tag. ```shell...
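In other words, give each quantization its own tag when creating and pushing, mirroring how you'd pull a tagged variant. A hedged sketch, with placeholder names and assuming one Modelfile per quantization:

```shell
# Create and push each quantization level under its own tag.
# <username>/my-model is a placeholder; q4_0 / q8_0 name the quant levels.
ollama create <username>/my-model:q4_0 -f Modelfile.q4_0
ollama push <username>/my-model:q4_0

ollama create <username>/my-model:q8_0 -f Modelfile.q8_0
ollama push <username>/my-model:q8_0
```

Consumers can then pull whichever quantization fits their hardware, e.g. `ollama pull <username>/my-model:q4_0`.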
> Nope, waiting for more interest on this topic Did you catch this discussion? https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B/discussions/5. There seems to be an issue with the `tokenizer_config.json` and the chat template, maybe that...
> You need to do `git lfs install` when downloading large files using git. I have LFS installed; I have cloned weights from HF before. 👍
The Ollama team released a ChatQA version to the registry a few hours ago - https://ollama.com/library/llama3-chatqa, as part of the [0.1.35 release](https://github.com/ollama/ollama/releases/tag/v0.1.35).