Richard-Weiss
Hey @laogou717, deploying PandoraAI is fairly easy, for example with Vercel and little configuration. However, deploying the API server is where I failed. I do know that some people...
I have actually deployed the API server and the Nuxt server now. Deploying to Vercel is extremely easy: you can just connect it to your forked repo and it's literally one...
@anthonyronda I've implemented support for local LLMs using node-chatgpt-api and the LM Studio inference server. You can take a look at my forks: https://github.com/Richard-Weiss/node-chatgpt-api https://github.com/Richard-Weiss/PandoraAI But maybe it's enough for you...
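For anyone curious how talking to LM Studio works in principle: it exposes an OpenAI-compatible HTTP API locally. Here's a minimal sketch, assuming LM Studio's inference server is running on its default base URL `http://localhost:1234/v1` (the base URL, model name, and `chat` helper below are illustrative, not taken from the forks above):

```javascript
// Minimal sketch: calling LM Studio's OpenAI-compatible local server.
// Assumption: LM Studio's server is running on the default port 1234.
const LM_STUDIO_BASE = "http://localhost:1234/v1";

// Build a request body in the OpenAI chat-completions shape,
// which LM Studio's local server mirrors.
function buildChatRequest(userMessage, model = "local-model") {
  return {
    model, // LM Studio serves whichever model is currently loaded
    messages: [{ role: "user", content: userMessage }],
    temperature: 0.7,
  };
}

// Send the request; this only succeeds if the server is actually running.
async function chat(userMessage) {
  const res = await fetch(`${LM_STUDIO_BASE}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(userMessage)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Example (requires a running LM Studio server):
// chat("Hello!").then(console.log);
```

Because the endpoint speaks the OpenAI format, clients like node-chatgpt-api can be pointed at it mostly by swapping the base URL.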