x4080
@dreamofi, I'm using Next.js and my next.config.js doesn't seem to be using the custom webpack config. It goes like this: ``` const withSass = require('@zeit/next-sass'); const withOffline = require('next-offline'); module.exports... ```
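For context, a minimal sketch of how Next.js plugin wrappers compose, each wrapper taking a config object and returning an extended one. The `withSass`/`withOffline` bodies below are toy stand-ins to show the nesting order, not the real packages:

```javascript
// Toy stand-ins for Next.js config plugins (not the real @zeit/next-sass /
// next-offline implementations): each takes a config and returns a new one.
function withSass(config) {
  return { ...config, sass: true };
}
function withOffline(config) {
  return { ...config, offline: true };
}

// Plugins wrap the base config from the inside out, so both plugins'
// settings end up merged with the base options.
const config = withOffline(withSass({ distDir: 'build' }));
```

If one wrapper drops keys instead of spreading them through, an inner plugin's custom webpack config can silently disappear, which matches the symptom described above.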
@dreamofi It's simple, actually: 'window is not defined' happens when the code is rendered on the server.
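A common guard for this is to check for `window` before touching browser-only APIs, sketched below (`isBrowser` and `getViewportWidth` are illustrative names, not from the thread):

```javascript
// Guard browser-only APIs so server-side rendering doesn't throw
// 'window is not defined'.
function isBrowser() {
  return typeof window !== 'undefined';
}

function getViewportWidth() {
  // On the server, `window` does not exist; return a fallback instead of throwing.
  return isBrowser() ? window.innerWidth : 0;
}
```

Alternatively, code that needs `window` can be moved into an effect hook or a dynamically imported component so it only runs on the client.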
Hello, I have the same issue: using the LocalAuth strategy, it still asks for the QR code every time I restart Node. The session is saved automatically, right? Do I need to load the...
Hi, is there a way to use a custom model with WebLLM? And what's the limit for an M2 Pro with 16 GB? Thanks
And there's no "send to img2img", etc. on my side.
Hi, using the server, the stop token is still printed at the end of the conversation, and I can't find the stop tokens in /examples/server/utils.hpp anymore. How do I avoid this in the server output? Thanks
@ggerganov Hi, no, I was using Phi-3-mini-128k-instruct.Q4_K_M.gguf. Forget it; I think that was for the server. For non-server use it already works fine.
Hi, thanks for your tip. Do you know how to make it work for MPS? It seems to work, but there are some error warnings when using torch.compile; if I comment...
@wacky6 thanks
Hi, I think you should also modify the file utils.hpp:
```
llama_params["stop"].push_back("<|im_end|>");    // chatml
llama_params["stop"].push_back("<end_of_turn>"); // gemma
llama_params["stop"].push_back("<|eot_id|>");    // llama 3
```