dalai
The simplest way to run LLaMA on your local machine
I've been playing around with this repository and dockerized the app. In the process I refactored a few things, always respecting the original values to avoid breaking changes. If you...
Installed Dalai, converted 7B to 4-bit, and connected to localhost, but how do I start LLaMA? I'm entering inputs on the webpage, but LLaMA doesn't load. I'm using Windows.
Nothing is printed in the console when this happens, but when I use an IP address instead of `localhost`, the submitted prompt disappears and nothing happens. https://user-images.githubusercontent.com/56323389/224729400-59131a85-a4f3-4f59-bd54-c19e34949dc8.mp4
Installed Dalai via `npx dalai llama 65B` to download only the 65B parameter model. This downloaded and quantized successfully. Once this completed, I ran `npx dalai serve` to run the...
Can't run models other than 7B, so I tried to run this function, but it is not really a function.
With the previous version I successfully downloaded the 65B model and quantized it. With the latest version I get this error with every model but the 7B one: `bash-3.2$ ./quantize...
I would like to use this with my own React frontend, but unfortunately Chrome always complains about a CORS error. Could you integrate this into a settings object? https://socket.io/docs/v4/handling-cors/
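For context, the request above could be satisfied by passing a CORS configuration through to Socket.IO's `Server` constructor. A minimal sketch, assuming dalai exposed (or were extended to expose) a settings object for this purpose — the `origin` value and the settings shape here are assumptions for illustration, not dalai's actual API:

```javascript
// Hypothetical settings object that a dalai server could forward to
// Socket.IO's Server constructor to allow cross-origin requests.
const corsSettings = {
  cors: {
    origin: "http://localhost:3000", // assumed React dev-server origin
    methods: ["GET", "POST"],
  },
};

// With socket.io installed, the server would use it roughly like this
// (per Socket.IO's documented `cors` server option):
//   const { Server } = require("socket.io");
//   const io = new Server(httpServer, corsSettings);
```

This mirrors the `cors` server option documented at the linked Socket.IO page; the fix would be to let users supply that object rather than hard-coding it.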
Hi, what computational requirements are needed to run this effectively? My system has integrated Intel Iris Xe graphics, so no Nvidia GPU.
I have migrated the codebase from JavaScript to TypeScript while maintaining compatibility with the current scripts. Personally, I think that if the project grows, it will be easier to manage and maintain the code if...
# Description I'm trying to install LLaMA on my Windows 11 machine using Node.js 18.15.0 and npm 9.6.1, installed through Volta. However, when I run the command `npx dalai llama`, ...