Athrael Soju

Results: 56 comments of Athrael Soju

@osseonews the use of function (tool) calls is the main driver behind Generative UI; RAG is not related in this case.
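To illustrate the point, here is a minimal, dependency-free sketch of the Generative UI pattern: the model emits a function (tool) call, and the app maps that call to a UI component rather than plain text. The names (`getWeather`, `WeatherCard`, `StockTicker`) are illustrative assumptions, not from the ai-chatbot repo, and the "components" are stand-in strings where a real app would render React.

```typescript
// Sketch only: a tool call as the model would emit it.
type ToolCall = { name: string; args: Record<string, unknown> };

// Stand-in "components" — in a real Generative UI app these would be
// React components streamed back to the client.
const components: Record<string, (args: Record<string, unknown>) => string> = {
  getWeather: (args) => `<WeatherCard city="${args.city}" />`,
  getStock: (args) => `<StockTicker symbol="${args.symbol}" />`,
};

// Map the model's function call to the matching UI component.
function renderToolCall(call: ToolCall): string {
  const component = components[call.name];
  if (!component) return `<Fallback tool="${call.name}" />`;
  return component(call.args);
}

console.log(renderToolCall({ name: "getWeather", args: { city: "Berlin" } }));
// -> <WeatherCard city="Berlin" />
```

In the Vercel AI SDK this dispatch is handled for you (e.g. by `streamUI` with per-tool `generate` handlers); the sketch above only shows why function calling, not retrieval, is what drives the UI.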

If your model is already quantized, this did the trick for me, using the `light` image:

```
docker run -v /E/Projects/llama.cpp/models:/models ghcr.io/ggerganov/llama.cpp:light -m models/7B/llama-2-7b-chat.ggmlv3.q4_0.bin -p "hello" -n 512
```

> Yes, there seems to be an error on the `onSuccess` callback hook in Contentlayer when running on Windows: [contentlayerdev/contentlayer#477](https://github.com/contentlayerdev/contentlayer/issues/477)
>
> As a workaround, I suggest commenting out the...

Same issue on Windows 11, but I was never able to get it to work.

Although this is a great idea, it's worth considering that the rest of the app is using KV to manage sessions and chat history. So it would make sense to...

> Because the new SDK uses `streamUI`
>
> https://github.com/vercel/ai-chatbot/blob/095550d4dca22dc506cdbda815cab94cfe8fbe74/lib/chat/actions.tsx#L140
>
> And old one uses a `render`
>
> https://github.com/vercel/ai-chatbot/blob/d5f736128dca6efff963fa3705f728b06f8d7927/lib/chat/actions.tsx#L144

The bug appears when using `streamUI`, not `render`. If...

Honestly, with all the bugs combined, you're better off using the vercel-ai-src example.

Did you set your env vars? When exactly does this happen?

> Do I need to do anything else?

You'd need to share more details: when it happens, what OS you're using, whether you're behind a VPN, and so on.