vonhex
Thanks for the tip!

On Fri, May 10, 2024, 8:19 PM l33tkr3w wrote:

> By default its using llama.cpp as the LLM backend. You can adjust what model...
I'm now getting two generations regardless of settings: one via batch size and one via batch count. If I change it back to the original settings it does batch count...
2023/03/02 16:40:54 Processing imagine #1080998197389959179: test [icture
2023/03/02 16:41:03 Seeds: [3414115480 3414115481 3414115482 3414115483] Subseeds: [1158651966 1158651967 1158651968 1158651969]
2023/03/02 16:41:28 Error responding to interaction: HTTP 404 Not Found, {"message":...