Michael Engel
> [@olliewalsh](https://github.com/olliewalsh) [@engelmi](https://github.com/engelmi) Where are we on this support? Do we currently support mounting safetensors into containers now? The model store allows managing an arbitrary number of files - which...
> [@engelmi](https://github.com/engelmi) what is the current state of this issue? @rhatdan I think these PRs: - https://github.com/containers/ramalama/pull/1976 - https://github.com/containers/ramalama/pull/2009 (re-)added support to run safetensors with ramalama, provided the runtime supports...
Yes, Q4_K_M could probably be used as a default. I created a PR for this here: https://github.com/containers/ramalama/pull/2050 If you want to tweak for certain characteristics, then it's probably best to search...
One possible solution could be to extend `llama-run` so that it implicitly searches for a `.ollama.template` and applies it via `common_chat_apply_template` - or directly downloads and saves it together with...
I think this has been implemented. Closing.
Yes, this broke while refactoring the parts that mount all model files into the container - since converted OCI models don't have a reffile, it fails (previously it was reusing...
#1802 should have fixed this issue, so I am closing it. @dwrobel Please reopen if the issue persists.
> $ ramalama version > ramalama version 0.11.2 @dwrobel It seems you are using v0.11.2 of ramalama. The fix, PR #1802, is not yet included in any release. I think...
You are right, @rhatdan. It is fixed in the main branch. Closing again. @dwrobel Please ping here or create a new issue.
@bentito Do you mean [these Jinja built-in tests](https://jinja.palletsprojects.com/en/stable/templates/#list-of-builtin-tests), for example? Could you provide an example template? In #917, support for using the respective chat template from the model (e.g. extracted...
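For reference, a minimal sketch of what those Jinja built-in tests look like inside a chat template. The template string and variable names (`system_prompt`, `user_message`) are made up for illustration; they are not taken from any particular model's template.

```python
# Minimal illustration of Jinja built-in tests ("defined", "string")
# inside a chat-template-style string. Variable names are invented.
from jinja2 import Environment

env = Environment()
template = env.from_string(
    "{% if system_prompt is defined and system_prompt is string %}"
    "<<SYS>>{{ system_prompt }}<</SYS>>"
    "{% endif %}"
    "{{ user_message }}"
)

# With a system prompt, the <<SYS>> block is emitted; without one,
# the "is defined" test fails and only the user message remains.
print(template.render(system_prompt="Be concise.", user_message="Hi"))
# → <<SYS>>Be concise.<</SYS>>Hi
print(template.render(user_message="Hi"))
# → Hi
```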