sweetcard

Results: 30 comments of sweetcard

> > You can try ollama; a single command is enough to get a feel for it.
>
> How did your tests go? I tested qwen1.5-0.5b with ollama and the results were very poor.

0.5b is too small. Getting good results with a model that size is hard at the moment.

It would probably take more than 200 GB of RAM otherwise. llama.cpp already supports this: the quantization can be done through a temp file instead of holding everything in memory.
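For reference, a minimal sketch of that flow, assuming a llama.cpp checkout and a Hugging Face checkpoint at a hypothetical path (script and binary names vary between llama.cpp revisions; `--use-temp-file` is the conversion option that stages tensors on disk rather than in RAM):

```bash
# Sketch only: adjust paths and names to your llama.cpp revision.

# 1. Convert the Hugging Face checkpoint to GGUF.
#    --use-temp-file streams tensors through a temporary file so the whole
#    model does not have to sit in RAM during conversion.
python convert_hf_to_gguf.py /path/to/hf-model \
    --outfile model-f16.gguf --outtype f16 --use-temp-file

# 2. Quantize the GGUF file (the binary is called `llama-quantize` in recent
#    builds, `quantize` in older ones).
./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```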

> When will we be able to use it?

Just download the model and quantize it yourself.

> > Just download the model and quantize it yourself.
>
> Oh? Is it compatible? I'll try it right away.

Quantizing it directly with llama.cpp is all it takes.

> > > > Just download the model and quantize it yourself.
> > >
> > > Oh? Is it compatible? I'll try it right away.
> >
> > Quantizing it directly with llama.cpp is all it takes.
>
> A question: when quantizing with llama.cpp I get an error:
> Traceback (most recent...

> ```python
> verbose = False
> ```

Remove this line: `verbose = False`. It works again. 😄

Check your protoc version. I found two versions installed on my M3. The newer version should come before the old one in PATH. Use the following command to find...
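A minimal sketch of the kind of check and PATH fix meant here (the shell syntax and the Homebrew path are assumptions; adjust to your own setup):

```bash
# List every protoc on the PATH and see which one is picked up first.
which -a protoc
protoc --version

# If the older copy (e.g. the Anaconda one) wins, put the newer install first.
# Example for a Homebrew protoc on Apple Silicon; add to ~/.zshrc to persist.
export PATH="/opt/homebrew/bin:$PATH"
protoc --version   # should now report the newer version
```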

> it truly has two versions of protoc on my PC
>
> ```
> ╰─ which -a protoc ─╯
> /Users/kevin/anaconda3/bin/protoc
> /opt/homebrew/bin/protoc
> ```
>
> and I rename...

> > > it truly has two versions of protoc on my PC
> > >
> > > ```
> > > ╰─ which -a protoc ─╯
> > > /Users/kevin/anaconda3/bin/protoc
> >...

> Finally, it works ~ thank you so much for your kind help! @sweetcard
>
> =========================
> build scripts as below:
>
> 1. install build dependencies
>
> ...