Offline local LLM inference
Is your feature request related to a problem? Please describe.
I'd like to run a local LLM fully offline with Chainlit, using e.g. Transformers.js, MediaPipe, or WebLLM.
Describe the solution you'd like
I'd like Chainlit to support invoking local LLM inference directly in the web browser, similar to what Gradio Lite and Transformers.js enable.
Describe alternatives you've considered
An alternative is to use Gradio Lite or WebLLM chat.
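
For context, here is a minimal sketch (not Chainlit code) of what fully in-browser inference looks like with Transformers.js today; the package name targets v3 and the model id is only an example. The open part of this request is how such a browser-side generator could back a Chainlit chat session instead of a server-side Python callback.

```ts
// Sketch only: client-side text generation with Transformers.js.
// "@huggingface/transformers" is the v3 package name; earlier releases
// shipped as "@xenova/transformers". The model id below is illustrative.
import { pipeline } from "@huggingface/transformers";

async function main() {
  // Downloads and caches the model weights in the browser on first use;
  // after that, generation can run fully offline.
  const generator = await pipeline("text-generation", "Xenova/distilgpt2");

  const output = await generator("Hello, my name is", { max_new_tokens: 32 });
  console.log(output); // [{ generated_text: "Hello, my name is ..." }]
}

main();
```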