vonhex

23 comments by vonhex

Thanks for the tip! On Fri, May 10, 2024, 8:19 PM l33tkr3w ***@***.***> wrote: > By default it's using llama.cpp as the LLM backend. You can adjust what > model...

![image](https://user-images.githubusercontent.com/69470840/222590026-ce62eec9-18e9-4753-adda-40367069970d.png) I'm now getting two generations regardless of settings: one via batch size and one via batch count. If I change it back to the original settings, it does batch count...

```
2023/03/02 16:40:54 Processing imagine #1080998197389959179: test [icture
2023/03/02 16:41:03 Seeds: [3414115480 3414115481 3414115482 3414115483] Subseeds: [1158651966 1158651967 1158651968 1158651969]
2023/03/02 16:41:28 Error responding to interaction: HTTP 404 Not Found, {"message":...
```