ChatLLM-Web
Does it support custom models?
I tried to add the DoctorGPT model: I replaced the model files under /public/lib/vicuna-7b and edited config.json so that cacheUrl points to the local model at http://localhost:3000/lib/WebLLM/vicuna-7b/doctorGPT/.
However, the webpage shows an error.
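For reference, this is roughly the config change I made. Only the cacheUrl field is taken from my actual setup; the surrounding structure and other field names here are illustrative and may not match the exact schema your version of ChatLLM-Web expects:

```json
{
  "models": [
    {
      "name": "doctorGPT",
      "cacheUrl": "http://localhost:3000/lib/WebLLM/vicuna-7b/doctorGPT/"
    }
  ]
}
```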
Consider trying https://github.com/mlc-ai/web-llm-chat, and file issues on the main web-llm repo to request support for new models.