jaredmontoya

83 comments by jaredmontoya

> Actually, taking a look again and thinking about it more, I've made a realization: you definitely can't upscale this, as your integrated graphics card uses your system RAM, not...

> Also, you still get this error using the RealESRGAN CLI -- does it actually finish processing?

realesrgan-ncnn-vulkan has 0 errors and finishes processing when using the x4plus-anime model, and as...

Sad news: same errors in the terminal, vkWaitForFences and vkQueueSubmit errors like before. I used an image of similar resolution and type to the one I used before. Now the output...

Also, I noticed that it now crashes after the CPU load reaches 100% and then rapidly drops back to where it was before.

> I'm guessing then that it is still just related to the fact that you are using integrated graphics... I'm afraid there's nothing else I can do at this point...

> Ollama recently added openai compatibility. https://ollama.com/blog/openai-compatibility
>
> I gave it a quick test, setting `$OPENAI_API_HOST` to `localhost:11434`. No luck. Not sure what changes are required to make this...

The config that I provided above is not enough to call it "Local LLM Support". The model name gpt-3.5-turbo is still hardcoded for ChatGPTRun actions, so they are unusable, and...
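For context, here is a minimal sketch of what a non-hardcoded setup looks like against Ollama's OpenAI-compatible endpoint, using the Python `openai` client. The base URL, the dummy API key, and the model name `llama2` are just illustrative values here, not the project's actual config keys:

```python
from openai import OpenAI

# Point an OpenAI-compatible client at a local Ollama server
# instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama2",  # any locally pulled model, instead of a hardcoded "gpt-3.5-turbo"
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

The point is that both the base URL and the model name need to be configurable: overriding only the host still sends requests for gpt-3.5-turbo, which a local server may not serve.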

That's exactly what I was thinking about when I said that the maintainer probably wouldn't like my idea of changing the project's name, and I am fine with any...