DanielusG
I created a pull request to apply the extension porting. While waiting for it to be approved, you can use the extension from my repository. The fix is very simple, it...
Glad to have been able to make a small contribution to the community :)
> It might be interesting to see LLama.cpp for local model

I manually modified the code (I haven't forked or committed yet) and managed to make it work with llama.cpp,...
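For anyone curious what the llama.cpp route roughly looks like, here is a minimal sketch using the llama-cpp-python bindings; the model path, prompt, and parameters are placeholders for illustration, not the project's actual code:

```python
# Minimal sketch: local text generation with the llama-cpp-python bindings.
from llama_cpp import Llama

# Placeholder model path; point this at any local GGUF model file.
llm = Llama(model_path="./models/ggml-model-q4_0.gguf", n_ctx=2048)

out = llm("Q: What is llama.cpp? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```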
I have this problem too, and I found a fix: in server.py, add at the very top `import matplotlib` and `matplotlib.use('Agg')`. It works for me :)
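As a sketch, the top of server.py would look like this (assuming the rest of the file imports matplotlib.pyplot somewhere later):

```python
# Select the non-interactive Agg backend before anything imports
# matplotlib.pyplot; once pyplot is imported, the backend is fixed.
import matplotlib
matplotlib.use('Agg')

# ... the rest of server.py's imports (including matplotlib.pyplot) follow.
```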
I confirm, the problem persists. `flutter doctor`:

```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.13.6, on Microsoft Windows [Versione 10.0.22631.2428], locale it-IT)...
```
I also have the same problem on Linux. I have 32 GB of RAM, and after changing models 2-3 times I run out of all my swap as well.
> > Have been looking into this, but might not be an issue anymore? On linux, at least on the latest commit and with either PyTorch 1.13 or PyTorch 2.0...
> > > Disable caching of models: Settings > Stable Diffusion > Checkpoints to cache in RAM - 0
> > >
> > > I find even 16 GB isn't enough when...
> If llama-cpp python bindings have an option to control the visibility of the processing, you can play with that. Shouldn't be that hard to add an .env variable to...
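If it helps, here is a hedged sketch of what that could look like. The `LLAMA_VERBOSE` variable name is made up for illustration, but the `verbose` flag on `llama_cpp.Llama` is a real parameter that suppresses its console logging:

```python
# Sketch: toggle llama.cpp's console output via an environment variable.
import os
from llama_cpp import Llama

# LLAMA_VERBOSE is a hypothetical name, not an existing project setting.
verbose = os.getenv("LLAMA_VERBOSE", "0") == "1"

llm = Llama(
    model_path="./models/ggml-model-q4_0.gguf",  # placeholder path
    verbose=verbose,  # False hides llama.cpp's model-load and eval logging
)
```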