书上的蜗牛

Results: 12 comments of 书上的蜗牛

You can run llamafile (GGUF) on your Windows PC or another OS. It supports CPU inference and provides an OpenAI-compatible API. You should modify this file (QAnything/tree/master/qanything_kernel/connector/llm/llm_for_online.py) to your...
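For context, here is a minimal sketch of what a request to that local OpenAI-compatible endpoint looks like. The URL, port, and model name are assumptions (llamafile's server defaults to port 8080), not values taken from QAnything:

```shell
# Example endpoint exposed by a locally running llamafile server
# (host, port, and model name are assumptions; adjust to your setup).
ENDPOINT="http://localhost:8080/v1/chat/completions"
PAYLOAD='{"model": "local-gguf", "messages": [{"role": "user", "content": "hello"}]}'

# Uncomment to actually call the server once it is running:
# curl -s "$ENDPOINT" -H "Content-Type: application/json" -d "$PAYLOAD"

echo "POST $ENDPOINT"
```

Pointing llm_for_online.py at this URL instead of the online provider is the whole change the comment suggests.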

Couldn't you first run its image, mount the content into your own directory, and then export it?
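The workflow proposed above might be sketched as follows. The image name, container name, and in-container paths are placeholders, not the project's real ones:

```shell
# Sketch: run the image with a host directory mounted, then copy content out.
# IMAGE and the container-side paths are placeholders; substitute the real ones.
IMAGE=example/image:latest
mkdir -p ./export

# docker run -d --name tmp-ctr -v "$PWD/export:/data/export" "$IMAGE"
# docker cp tmp-ctr:/data/models ./export/   # export content from the container

echo "mount $PWD/export into the container, then docker cp what you need"
```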

If not, I will be ashamed of it.

Maybe the star count is a KPI metric at Alibaba; they use it for promotions.

Copy the C drive's .cache directory to the xxxx path, then set the environment variable HF_HOME=xxxx; then you don't need to set your absolute path.
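A minimal sketch of the steps above; the target path is an example, and on Windows you would use copy/robocopy and `setx` instead:

```shell
# Relocate the Hugging Face cache off the C: drive and point HF_HOME at it.
# /tmp/hf-cache is an example target; substitute your own path.
mkdir -p /tmp/hf-cache

# The actual copy (source path is an example for a Windows user profile):
# cp -r /c/Users/<you>/.cache/huggingface/* /tmp/hf-cache/

export HF_HOME=/tmp/hf-cache   # libraries now resolve the cache here
echo "HF_HOME=$HF_HOME"
```

Any tool that honors `HF_HOME` will then read and write the cache at the new location without needing per-model absolute paths.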

I saw your blog, very nice job! The preprocessing takes too long, and the low resolution of the teeth is a big problem. Can you show more details on how to solve these cons?

You can run llamafile (GGUF) on your PC; it supports CPU inference and provides an OpenAI-compatible API. Modify this file (QAnything/tree/master/qanything_kernel/connector/llm/llm_for_online.py) to point at your local LLM server endpoint.
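On the server side, launching llamafile might look roughly like this. The model filename is an example, and the flags are assumptions based on llamafile's llama.cpp-derived server; verify them against the llamafile README for your version:

```shell
# Sketch: serve a GGUF model locally with llamafile.
# MODEL is an example filename; PORT matches the endpoint used by the client.
MODEL=./mistral-7b-instruct.llamafile
PORT=8080

# chmod +x "$MODEL"                              # make the downloaded file executable
# "$MODEL" --server --nobrowser --port "$PORT"   # OpenAI-compatible API under /v1

echo "would serve $MODEL on port $PORT"
```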

Hi, I want to know what kind of hardware can run this model. Also, the inference speed seems too slow on their website; is there any solution?