flatsiedatsie
I downloaded the github repo and placed it on a localhost server. I opened the page, and clicked on the "Load GPT2 117Mb" model. I've been waiting for a few...
Very interesting project. Could you share more on the About page? How far along is the implementation? Does it work with any devices already?
Just curious when the WASM version might be released. I'd love to try that one.
I cloned the github repo and ran `npm i` to install it. This resulted in the following: ``` npm ERR! code ERESOLVE npm ERR! ERESOLVE could not resolve npm ERR!...
Wllama is a browser-based version of Llama.cpp with low-level capabilities, and it has a built-in embedding option too. https://github.com/ngxson/wllama While WebLLM only runs on WebGPU-enabled browsers, Wllama can run on all...
I spotted this error when trying to use the web demo in pure browser mode. This is in the Brave browser, with protections disabled. ``` File {name: 'example.pdf', lastModifi..etc {type:...
### Summary My code is literally this inside some boilerplate HTML: ``` import { ink } from 'https://esm.sh/[email protected]'; ink(document.getElementById('editor')!); ``` This results in the following error:
Currently a model can fail to load for a number of different reasons. However, the error raised seems to always be a general "failed to load" error. It would be...
Because I [made a typo](https://github.com/ngxson/wllama/issues/56) in the URL of a local model file, I noticed something strange. It seems that the invalid URL ended up in the `wllama_cache` anyway. I checked...
In your readme you mention: > Maybe doing a full RAG-in-browser example using tinyllama? I've been looking into a way to allow users to 'chat with their documents'. A popular...
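The retrieval half of such a 'chat with your documents' flow can be sketched in plain JavaScript: embed each document chunk, rank chunks by cosine similarity to the query, and hand the top hits to the model as context. In this sketch `embed()` is a hypothetical bag-of-words hash embedder used only to keep the example self-contained; in a real setup it would be replaced by an actual embedding model (e.g. Wllama's embedding support).

```javascript
// Hypothetical stand-in embedder: hashes words into a fixed-size
// bag-of-words vector. A real RAG setup would use a proper embedding model.
function embed(text, dims = 64) {
  const vec = new Array(dims).fill(0);
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let h = 0;
    for (let i = 0; i < word.length; i++) h = (h * 31 + word.charCodeAt(i)) >>> 0;
    vec[h % dims] += 1;
  }
  return vec;
}

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank document chunks against the query and return the top-k matches.
// These would then be pasted into the prompt as context for the LLM.
function retrieve(chunks, query, k = 2) {
  const q = embed(query);
  return chunks
    .map(text => ({ text, score: cosine(embed(text), q) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(r => r.text);
}
```

Everything here runs in the browser with no server round-trips, which is the appeal of pairing it with a WASM model like tinyllama.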