arkohut
Which version are you using? Try `pip show memos` to get the version.
> > Which version are you using? Try `pip show memos` to get the version.
>
> Tried installing locally; got an error

Python 3.13 is such a new version. Some dependencies...
Please do not use Python 3.13; most of the dependencies do not support it yet. Using Python 3.12 should solve this.
I think this is the same issue as https://github.com/arkohut/pensieve/issues/17. Reopen it if necessary.
Thanks for the reply. Issue #90 is just about customizing the endpoint, which I think is quite necessary. For example, I need an OpenAI-compatible embedding endpoint which is...
> hi @arkohut, would you be available to chat more about this issue? we are doing some research to enable this feature to the users in the best manner.

OK,...
> is there any ollama model that works with this?

Yes, the official model in ollama `minicpm-v:8b-2.6-q4_K_M` works! But following the instructions in openbmb/minicpm-v I cannot build the quantized...
The approach in https://github.com/ollama/ollama/issues/6272 solves this problem. To summarize: the `gguf` file produced by quantizing with llama.cpp somehow ends up 8 bytes too long. There are two fixes:

1. Delete those 8 extra bytes
2. Quantize directly from the fp16 `gguf` using `ollama create -q xxx`, instead of quantizing with `llama.cpp`
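For option 1, a minimal sketch of stripping the trailing bytes in place (the function name and the example filename are my own, hypothetical choices; back up the file first):

```python
import os

def strip_trailing_bytes(path: str, n: int = 8) -> None:
    """Remove the last n bytes of a file in place.

    Workaround for the llama.cpp-quantized gguf that comes out
    8 bytes too long (see ollama/ollama#6272).
    """
    size = os.path.getsize(path)
    if size <= n:
        raise ValueError(f"{path} is only {size} bytes; refusing to truncate")
    os.truncate(path, size - n)

# strip_trailing_bytes("minicpm-v-q4_K_M.gguf")  # hypothetical filename
```

`os.truncate` avoids rewriting the whole multi-gigabyte file, which a read-then-write approach would do.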
After changing the configuration, restart with `memos stop && memos start` for it to take effect.
Could you post the error message?