arkohut
Could you also share the output of `memos config`?
> I'm hitting the same issue, on macOS. Changing the port didn't help either. Everything is at the default configuration.

Hi, thanks for trying out this project. It looks like the model failed to download. Please set `embedding.use_modelscope` to `true` in `~/.memos/config.yaml`, restart, and see if that helps.
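For reference, this is roughly the relevant part of `~/.memos/config.yaml`; the nesting is inferred from the dotted key `embedding.use_modelscope`, and any sibling keys are omitted:

```yaml
embedding:
  # fetch embedding models from ModelScope instead of Hugging Face
  # (useful when Hugging Face is slow or unreachable from your network)
  use_modelscope: true
```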
> After completely turning off the proxy, all three processes are running normally:
>
> ```
> memos ps
> Name   Status   PID    Started At           Running For
> serve  Running  25272  2024-11-20 14:02:01  0:02:20
> watch  Running  27932  2024-11-20 14:02:16  0:02:05...
> ```
> The files are all there and the db keeps growing, but I just can't get into the webui...
>
> What do I do now? Getting stuck one step from the finish line feels worse than only making it halfway.

Run `curl localhost:23323` from the command line and check whether it returns anything. If it does, try a different browser~
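A slightly more informative variant of that check (assuming the default port 23323 mentioned in this thread) also prints the status line and response headers, which is useful for the content-type question that comes up next:

```
curl -i http://localhost:23323/
```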
> The problem persists. I debugged it in Firefox; this output might be more useful.

I suspect the content type of the files returned by fastapi is wrong. Could you try upgrading:

```
pip install -U fastapi
```
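If upgrading doesn't help, one way to probe the content-type theory directly (an assumption on my part: FastAPI serves static files via Starlette, which picks types through the stdlib `mimetypes` module) is to check what your local Python guesses for a JS asset; `app.js` here is just a placeholder name:

```
python -c "import mimetypes; print(mimetypes.guess_type('app.js'))"
# a healthy result looks like ('text/javascript', None) or ('application/javascript', None)
```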
The official minicpm-v model is already there, so I assume it is supported now? But I still get the error message "Error: invalid file magic" when I try to create...
BTW, performing the quantization with Ollama itself via `ollama create xxxx -f Modelfile -q Q4_K_M` does not run into this issue.
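For context, a minimal `Modelfile` sketch for that command; the GGUF filename is a placeholder, and the `-q Q4_K_M` flag asks `ollama create` to quantize the base weights while building the model:

```
# Modelfile: point FROM at the unquantized (e.g. f16) GGUF weights
FROM ./minicpm-v-f16.gguf
```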
Are you using ollama with the VLM model?
The VLM can be slow. Even though you are on an M1 Mac, performance may still be quite poor compared with an Nvidia GPU. I will share more performance benchmarks...