Integrating the model with LM Studio and Llamafile would make it more accessible for users.
Opened an issue at https://github.com/ollama/ollama/issues/2462 and am investigating.

Error output below:
```
Loading model: model
Traceback (most recent call last):
  File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 1612, in <module>
    main()
  File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 1593, in main
    model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 57, in __init__
    self.model_arch = self._get_model_architecture()
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 262, in _get_model_architecture
    raise NotImplementedError(f'Architecture "{arch}" not supported!')
NotImplementedError: Architecture "Moondream" not supported!
```
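For context on why this fails: `convert-hf-to-gguf.py` dispatches on the architecture name declared in the model's `config.json`, and "Moondream" is not in its supported set. The sketch below is illustrative only (the architecture table and function are my simplification, not the actual llama.cpp code), but it shows the dispatch pattern that produces this exact error:

```python
# Illustrative sketch of the architecture dispatch in a HF-to-GGUF
# converter. The mapping below is hypothetical; llama.cpp maintains
# its own (much larger) registry of supported architectures.
SUPPORTED_ARCHES = {
    "LlamaForCausalLM": "llama",
    "MistralForCausalLM": "llama",
    "GPT2LMHeadModel": "gpt2",
}

def get_model_architecture(config: dict) -> str:
    # config.json carries an "architectures" list, e.g. ["Moondream"];
    # the first entry selects the conversion handler.
    arch = config["architectures"][0]
    if arch not in SUPPORTED_ARCHES:
        # This is the branch the traceback above hits for "Moondream".
        raise NotImplementedError(f'Architecture "{arch}" not supported!')
    return SUPPORTED_ARCHES[arch]
```

So supporting Moondream would mean teaching the converter (and the runtime) about that architecture, not just re-running the script.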