Carl-Robert Linnupuu

58 comments by Carl-Robert Linnupuu

Related issue: https://github.com/langchain4j/langchain4j/issues/670. The error `java.lang.IllegalArgumentException: byteCount < 0: -1` can be reproduced by removing the empty newlines from the mocked response: [LocalCallbackServer.java#L111](https://github.com/carlrobertoh/llm-client/blob/master/src/test/java/ee/carlrobert/llm/client/http/LocalCallbackServer.java#L111)
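The blank lines matter because, per the server-sent events spec, each event is terminated by an empty line and parsers split the stream on it. A minimal illustrative splitter (not langchain4j's actual parser) shows how dropping the blank lines collapses separate events into one:

```java
import java.util.ArrayList;
import java.util.List;

public class SseSplit {
    // Split an SSE response body into events on the blank-line delimiter.
    static List<String> events(String body) {
        List<String> out = new ArrayList<>();
        for (String chunk : body.split("\n\n")) {
            if (!chunk.isBlank()) out.add(chunk.trim());
        }
        return out;
    }

    public static void main(String[] args) {
        String wellFormed   = "data: {\"a\":1}\n\ndata: {\"a\":2}\n\n";
        String missingBlank = "data: {\"a\":1}\ndata: {\"a\":2}\n";
        System.out.println(events(wellFormed).size());   // two events
        System.out.println(events(missingBlank).size()); // collapses into one
    }
}
```

A client that assumes well-delimited events can end up computing a negative remaining byte count on such a malformed stream, which would match the exception above.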

Awesome, thank you! We could add a preset template for it, similar to how others are done: https://github.com/carlrobertoh/CodeGPT/blob/master/src/main/kotlin/ee/carlrobert/codegpt/settings/service/custom/template/CustomServiceTemplate.kt

This is most likely related to how we process the response: we simply cut the output at the first newline, which obviously won't work in all situations.
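A sketch of the failure mode described above (illustrative only, not the plugin's actual code): truncating at the first newline silently drops everything after it, which breaks any multi-line completion such as a code block.

```java
public class NewlineCut {
    // Naive post-processing: keep only the text before the first newline.
    static String firstLineOnly(String output) {
        int i = output.indexOf('\n');
        return i == -1 ? output : output.substring(0, i);
    }

    public static void main(String[] args) {
        String completion = "public int add(int a, int b) {\n    return a + b;\n}";
        // Prints only the method signature; the body and closing brace are lost.
        System.out.println(firstLineOnly(completion));
    }
}
```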

Again, I believe this is related to LM Studio. The stream request pretty much "fails successfully": it reports success, but the response won't be saved. I brought it up with them but didn't...

Ollama provides an OpenAI-compatible chat completions endpoint, which can be configured in the Custom OpenAI Service configuration panel.
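Concretely, this means pointing an OpenAI-style client at Ollama's base URL. A minimal sketch of building such a request, assuming Ollama's default port 11434; the model name `llama3` is just an example and depends on what you have pulled locally:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class OllamaOpenAi {
    // Build an OpenAI-style chat completions request against an
    // Ollama base URL (no request is sent here).
    static HttpRequest chatRequest(String baseUrl, String model, String prompt) {
        String body = String.format(
            "{\"model\":\"%s\",\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}",
            model, prompt);
        return HttpRequest.newBuilder()
            .uri(URI.create(baseUrl + "/v1/chat/completions"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = chatRequest("http://localhost:11434", "llama3", "Hello");
        System.out.println(req.uri());
    }
}
```

In the Custom OpenAI Service panel, the equivalent is setting the URL to `http://localhost:11434/v1/chat/completions` and choosing your pulled model name.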

I will release the new stuff sometime early next week.

Hi! Thank you for the feature request. I will definitely consider it to improve the extension.

Sure, it looks good, and contributions are also highly welcome 👍 I couldn't find any documentation for their API, though; I'm not sure they even provide one.

I have noticed the same. The bug is related to how the response is rendered on the screen. Each time a new message is received, it is converted into HTML...

This will be fixed in the next release. I'm not yet entirely sure why it was added in the first place.