
Create offline model loader to serve Llama models from a local repository

Open · intelligentnode opened this issue on Sep 4, 2023 · 0 comments

Add a service to load models from the Llama family and serve them through an API. The official Python code to serve the models is available here:

https://github.com/facebookresearch/llama
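
As a rough sketch of the kind of HTTP surface this service could expose, here is a minimal TypeScript/Express endpoint wrapping a local model. The endpoint path, the `LocalLlamaModel` interface, and the `loadLocalModel` loader are illustrative assumptions, not existing IntelliServer APIs; actual inference is stubbed out:

```typescript
import express from "express";

// Assumed interface for the offline loader this issue proposes (not an existing API).
interface LocalLlamaModel {
  generate(prompt: string, maxTokens: number): Promise<string>;
}

// Placeholder loader: real weight/tokenizer loading would go here.
async function loadLocalModel(modelDir: string): Promise<LocalLlamaModel> {
  return {
    async generate(_prompt, _maxTokens) {
      throw new Error(`inference not implemented (model dir: ${modelDir})`);
    },
  };
}

const app = express();
app.use(express.json());
let model: LocalLlamaModel;

// Hypothetical endpoint name; the real route would follow IntelliServer's conventions.
app.post("/llama/generate", async (req, res) => {
  const { prompt, maxTokens = 128 } = req.body;
  if (!prompt) return res.status(400).json({ error: "prompt is required" });
  try {
    const output = await model.generate(prompt, maxTokens);
    res.json({ output });
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

loadLocalModel(process.env.LLAMA_MODEL_DIR ?? "./models/llama-2-7b").then((m) => {
  model = m;
  app.listen(3000, () => console.log("Llama service ready on :3000"));
});
```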

Request access to the models: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

Llama 2 ONNX, which is more suitable for serving in a Node.js setup:

https://github.com/microsoft/Llama-2-Onnx
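
For the Node.js route, a minimal sketch of loading an exported Llama 2 ONNX graph with onnxruntime-node follows. The model file path and the `input_ids` tensor name are assumptions that depend on the specific export (the Llama-2-Onnx repo ships models in submodules), and the tokenizer plus the autoregressive decoding loop are omitted entirely:

```typescript
import * as ort from "onnxruntime-node"; // npm install onnxruntime-node

async function main() {
  // Assumed local path to the exported graph, downloaded per the repo's instructions.
  const session = await ort.InferenceSession.create("./models/llama2-7b.onnx");

  // Inspect the graph's actual input/output names; they vary between exports.
  console.log("inputs:", session.inputNames);
  console.log("outputs:", session.outputNames);

  // A single forward pass over already-tokenized input ids.
  // Placeholder ids, not real tokenizer output; "input_ids" is an assumed name.
  const tokenIds = BigInt64Array.from([1n, 15043n]);
  const inputIds = new ort.Tensor("int64", tokenIds, [1, tokenIds.length]);
  const results = await session.run({ input_ids: inputIds });
  console.log("result tensors:", Object.keys(results));
}

main().catch(console.error);
```

A full generation loop would repeatedly feed the sampled token (and past key/value caches, if the export exposes them) back into `session.run`.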
