Ronky
> > Get the HTML from the v4 version
> > Paste that HTML code here and I'll try it.

v4 code: `asfasdfasf`

Effect in the v4 editor: (screenshot omitted)

Appearance in v5: (screenshot omitted)

In v4, the `font` tag is applied to the text.
> It is not caused by `font` itself, but by `font` and `span` being nested.

Is there any way to solve this? I would be happy to implement it myself. On my side, many styles are lost when v4 data is loaded into v5. My idea is to rewrite the `font` tags before pasting them into the editor. Is there a better way?
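The rewrite idea above can be sketched as a small pre-paste normalization step: convert each legacy `<font>` tag into a `<span>` with equivalent inline styles, so the nested `font`/`span` markup no longer confuses the v5 paste pipeline. This is only an illustrative sketch, not the editor's own API; the attribute mapping below (color, face, and the legacy 1-7 `size` values) and the function name `font_to_span` are my own assumptions.

```python
import re

# Rough mapping from legacy <font size="1..7"> values to CSS keyword sizes.
# (Assumed mapping for illustration; adjust to match your v4 styling.)
FONT_SIZE_MAP = {
    "1": "x-small", "2": "small", "3": "medium", "4": "large",
    "5": "x-large", "6": "xx-large", "7": "xxx-large",
}

def font_to_span(html: str) -> str:
    """Rewrite <font ...> tags as <span style="..."> so the styles
    survive pasting v4 HTML into v5."""
    def replace_open(match: re.Match) -> str:
        # Parse attributes like color="#ff0000" face="Arial" size="3".
        attrs = dict(re.findall(r'(\w+)\s*=\s*["\']([^"\']*)["\']',
                                match.group(1)))
        styles = []
        if "color" in attrs:
            styles.append(f"color: {attrs['color']}")
        if "face" in attrs:
            styles.append(f"font-family: {attrs['face']}")
        if "size" in attrs:
            styles.append(f"font-size: {FONT_SIZE_MAP.get(attrs['size'], 'medium')}")
        style = "; ".join(styles)
        return f'<span style="{style}">' if style else "<span>"

    html = re.sub(r"<font([^>]*)>", replace_open, html, flags=re.IGNORECASE)
    return re.sub(r"</font\s*>", "</span>", html, flags=re.IGNORECASE)

print(font_to_span('<font color="#ff0000" size="3"><span>hello</span></font>'))
# → <span style="color: #ff0000; font-size: medium"><span>hello</span></span>
```

In a real editor integration this would run on the clipboard HTML (e.g. in a paste hook) before handing the content to v5; a DOM-based rewrite would be more robust than regexes for deeply malformed markup, but the idea is the same.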
> In the meantime, check out https://github.com/Pelochus/ezrknn-llm

I hope to use a single Ollama service to run the DeepSeek model.
> [@RonkyTang](https://github.com/RonkyTang) see https://github.com/airockchip/rknn-llm

Thank you! But how can this be merged into the Ollama service? I want to use only Ollama, because I also need to run other models in Ollama.
> Hi [@RonkyTang](https://github.com/RonkyTang), we are working on upgrading ipex-llm Ollama to a new version, and these two GLM models will be supported then. Thanks!
Several issues were identified when using the glm-edge-v-2b-gguf (https://www.modelscope.cn/models/ZhipuAI/glm-edge-v-2b-gguf) model:

1. Long reasoning time
2. The returned content is entirely incorrect
3. If using the official version of Ollama,...
> Hi [@RonkyTang](https://github.com/RonkyTang), I have found the cause, and it will be fixed in tomorrow's version. Thanks!
Hi @sgwhat, could you share the current progress? Thank you.
> Hi [@RonkyTang](https://github.com/RonkyTang), we have released the new version of Ollama at https://github.com/intel/ipex-llm/releases/tag/v2.3.0-nightly. We have optimized the CLIP model to run on the GPU on Windows.

Hi @sgwhat, thank you for...