hyperbolic-c
When I run the llama3 MNN model: ``` (py_llama) st@server03:~/mnn-llm$ ./build/cli_demo ./models/llama3/ model path is ./models/llama3/ ### model name : Llama3_8b The device support i8sdot:0, support fp16:0, support i8mm: 0...
Is it possible to use MPI to distribute the computation over a cluster of machines? This feature would let us deploy larger models than would otherwise fit into RAM on...
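A minimal sketch of the layer-partitioning idea behind such a cluster split, assuming a simple pipeline-parallel scheme where each machine holds a contiguous slice of the transformer layers (the function name and scheme are illustrative, not an existing API in this project):

```python
# Hypothetical sketch: assign transformer layers to cluster nodes so that
# each machine only needs enough RAM for its own contiguous slice of the
# model. This is an assumption about how a pipeline split could work, not
# the project's actual implementation.
def partition_layers(n_layers: int, n_nodes: int) -> list[range]:
    """Split n_layers as evenly as possible into n_nodes contiguous slices."""
    base, extra = divmod(n_layers, n_nodes)
    slices, start = [], 0
    for rank in range(n_nodes):
        # The first `extra` ranks take one additional layer each.
        size = base + (1 if rank < extra else 0)
        slices.append(range(start, start + size))
        start += size
    return slices

print(partition_layers(32, 3))  # [range(0, 11), range(11, 22), range(22, 32)]
```

At inference time each node would run only its own slice and forward the hidden states to the next rank (e.g. via MPI point-to-point sends), which is what keeps the per-machine memory footprint bounded.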
As shown in the README, it supports the deepseek-chat model. Does it also support deepseek-coder? Thanks for your meaningful work!
Is it possible to add a server deploy mode like [llama.cpp server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md)? Thanks for your work!
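For illustration, a minimal sketch of what such a server mode could look like: a small HTTP endpoint wrapping a local generation function, similar in spirit to llama.cpp's `/completion` route. The `generate_reply` placeholder and the route are assumptions; the real model call would go where the placeholder is.

```python
# Hypothetical sketch: a tiny HTTP completion server wrapping a local
# chat/generation function. Not the project's actual server API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_reply(prompt: str) -> str:
    # Placeholder for the real model call (e.g. the CLI demo's chat loop).
    return f"echo: {prompt}"

class CompletionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/completion":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = generate_reply(body.get("prompt", ""))
        payload = json.dumps({"content": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def run_server(port: int = 8080) -> None:
    # Blocks; serve POST /completion until interrupted.
    HTTPServer(("127.0.0.1", port), CompletionHandler).serve_forever()
```

A client would then POST `{"prompt": "..."}` to `/completion` and read back `{"content": "..."}`, which is roughly the request shape llama.cpp's server example uses.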
@b4rtaz Hey, thank you for your wonderful work. Could you please offer some details about how to add a supported model? For example, how to split the network according to its structure...
Hello, recently fetching fails consistently even with a proxy enabled. I suspect it is a problem on the Scholarly side? I checked the issues over there and others have reported it too. Are you able to use it normally? Thanks. Error message: **Exception ConnectError while fetching page: ('[Errno 11001] getaddrinfo failed',)** ``` 2024-03-18 21:19:21,365 - scholarly - INFO - Enabling proxies: http=http://127.0.0.1:10809/ https=http://127.0.0.1:10809/ 2024-03-18 21:19:22,633 - scholarly - INFO -...
Would it be possible to provide a wiki or md file on how to add supported models?