Stephan Janssen

Results: 105 comments by Stephan Janssen

ChatHistory is now project-related, please try to break it :) https://github.com/user-attachments/assets/3e83d736-a9f9-429c-9c32-c0a71db115ff

All fixed; a new version has been tagged and deployed to the marketplace.

Claude Sonnet 3.5 suggestion: We'll create extended versions of the Langchain4J classes in the com.devoxx.genie.chatmodel.anthropic package. This approach will let us add the new functionality while maintaining compatibility...

There's also a Gemini implementation, though of course it's implemented in a different way: https://cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-create#create-context-cache-sample-drest

See also https://github.com/devoxx/DevoxxGenieIDEAPlugin/issues/451

Claude Sonnet 3.5 suggestion: To support JLama in your DevoxxGenie IntelliJ plugin, you'll need to make several changes to your existing codebase. Here's a step-by-step guide on how to integrate...

We could also consider trusting all certificates. However, many of the model clients live in the langchain4j package, so they would need to be "extended". https://www.baeldung.com/okhttp-client-trust-all-certificates
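A minimal sketch of the trust-all approach from the Baeldung article, using only the JDK's `javax.net.ssl` (the langchain4j clients would need the equivalent applied to their own underlying HTTP client, e.g. OkHttp). The class name `TrustAllCerts` is illustrative, not from the codebase:

```java
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class TrustAllCerts {

    // WARNING: accepts every certificate; only acceptable for local or
    // self-signed endpoints, never for production traffic.
    private static final TrustManager[] TRUST_ALL = new TrustManager[] {
        new X509TrustManager() {
            @Override public void checkClientTrusted(X509Certificate[] chain, String authType) { }
            @Override public void checkServerTrusted(X509Certificate[] chain, String authType) { }
            @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        }
    };

    // Build an SSLContext that skips certificate validation.
    public static SSLContext trustAllContext() throws Exception {
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, TRUST_ALL, new SecureRandom());
        return context;
    }

    public static void main(String[] args) throws Exception {
        // Install globally for HttpsURLConnection-based clients.
        HttpsURLConnection.setDefaultSSLSocketFactory(trustAllContext().getSocketFactory());
        HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);
        System.out.println("trust-all SSLContext installed: " + trustAllContext().getProtocol());
    }
}
```

For an OkHttp-based client the same `TrustManager` would be passed to `OkHttpClient.Builder.sslSocketFactory(...)` instead of being installed globally.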

These are the new Llama 3.1 models supported by Groq (no pricing details yet; context window of 131,072 tokens):

- llama-3.1-405b-reasoning
- llama-3.1-70b-versatile
- llama-3.1-8b-instant
- llama3-groq-70b-8192-tool-use-preview
- llama3-groq-8b-8192-tool-use-preview

It gives an NPE on response.content() in InternalOpenAiHelper:

```java
static Response removeTokenUsage(Response response) {
    return Response.from(response.content(), null, response.finishReason());
}
```
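A sketch of a null-safe variant, assuming the NPE comes from `response` itself being null (if instead `content()` returns null, the same guard pattern applies to that value). Since langchain4j isn't on the classpath here, `Response` below is a simplified stand-in record, not the real class:

```java
public class RemoveTokenUsageDemo {

    // Simplified stand-in for langchain4j's Response type (hypothetical, for illustration).
    record Response(String content, Object tokenUsage, String finishReason) {
        static Response from(String content, Object tokenUsage, String finishReason) {
            return new Response(content, tokenUsage, finishReason);
        }
    }

    // Null-safe variant: guard the response before dereferencing it,
    // which is the call that triggers the NPE in InternalOpenAiHelper.
    static Response removeTokenUsage(Response response) {
        if (response == null) {
            return null; // or throw a descriptive exception, depending on the caller's contract
        }
        return Response.from(response.content(), null, response.finishReason());
    }

    public static void main(String[] args) {
        Response stripped = removeTokenUsage(Response.from("hello", "usage", "STOP"));
        System.out.println(stripped.content() + " / " + stripped.tokenUsage());
        System.out.println(removeTokenUsage(null)); // no NPE now
    }
}
```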