Issues by Sebastian Lobentanzer (129 results)

Retrieving an example node or edge can better inform the LLM how to format the query:
- are names capitalised?
- distinguish between arbitrary properties

Will require more back-and-forth between...
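A minimal sketch of the idea: serialise one sample node into the prompt so the model can see real capitalisation and property types before writing a query. The function name and node shape are assumptions for illustration, not existing biochatter API.

```python
def format_example_node(node: dict) -> str:
    """Format a sample knowledge-graph node as a prompt hint, so the
    LLM can see naming conventions (e.g. capitalisation) and the
    actual property types before generating a query.

    Hypothetical helper; the node dict shape is an assumption.
    """
    lines = [f"Example node (label: {node.get('label', 'unknown')}):"]
    for key, value in node.get("properties", {}).items():
        # repr() exposes capitalisation and quoting; the type name
        # distinguishes string properties from numeric ones
        lines.append(f"  {key}: {value!r} ({type(value).__name__})")
    return "\n".join(lines)


hint = format_example_node(
    {"label": "Gene", "properties": {"name": "TP53", "taxon": 9606}}
)
```

The resulting hint text would be prepended to the query-generation prompt.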

I recently got API access to Claude; integration requires some adjustments in how messages are framed. Need to check how far LangChain already goes in supporting this, and check https://docs.anthropic.com/claude/docs/configuring-gpt-prompts-for-claude.
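One concrete framing difference: the Anthropic Messages API takes the system prompt as a separate top-level parameter rather than as a message with role "system". A minimal conversion sketch (a full implementation would also enforce strict user/assistant alternation):

```python
def to_anthropic_messages(messages: list[dict]) -> tuple[str, list[dict]]:
    """Split an OpenAI-style message list into the shape the Anthropic
    Messages API expects: system content becomes a separate top-level
    string, and only user/assistant turns remain in the message list."""
    system = "\n".join(
        m["content"] for m in messages if m["role"] == "system"
    )
    chat = [m for m in messages if m["role"] != "system"]
    return system, chat


system, chat = to_anthropic_messages([
    {"role": "system", "content": "You are a biomedical assistant."},
    {"role": "user", "content": "What is TP53?"},
])
```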

Currently, we use ada-002 as the proprietary (OpenAI) embedding model. We could move to one of the new ones, or even offer a choice (text-embedding-3-small is a lot less expensive):...
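Offering a choice could be as simple as a validated selector that keeps ada-002 as the backwards-compatible default. The model names are real OpenAI embedding models; the default and validation logic here are assumptions:

```python
# Real OpenAI embedding model names; the selection logic is a sketch.
SUPPORTED_EMBEDDING_MODELS = (
    "text-embedding-ada-002",
    "text-embedding-3-small",
    "text-embedding-3-large",
)


def select_embedding_model(name: str = "text-embedding-ada-002") -> str:
    """Validate a user-chosen embedding model, keeping ada-002 as the
    backwards-compatible default."""
    if name not in SUPPORTED_EMBEDDING_MODELS:
        raise ValueError(
            f"Unknown embedding model {name!r}; "
            f"choose one of {SUPPORTED_EMBEDDING_MODELS}"
        )
    return name
```

The validated name would then be passed to the embedding client unchanged.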

As it stands, the LangChain SystemMessages appear to be passed to open-source models suboptimally, leading to worse performance in understanding tasks and formatting outputs.

https://github.com/biocypher/biochatter/blob/2982143010e8e268207fa6dc8af8bb50ecfdd108/biochatter/llm_connect.py#L528
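A common workaround for models whose chat templates have no system role is to fold the system content into the first user message. This is a hypothetical sketch, not the code behind the link above:

```python
def merge_system_into_user(messages: list[dict]) -> list[dict]:
    """Workaround for open-source models that ignore or mishandle a
    separate system turn: prepend the system content to the first
    user message instead."""
    system = "\n".join(
        m["content"] for m in messages if m["role"] == "system"
    )
    merged = []
    injected = False
    for m in messages:
        if m["role"] == "system":
            continue  # dropped; its content moves into the user turn
        if m["role"] == "user" and not injected and system:
            m = {"role": "user", "content": f"{system}\n\n{m['content']}"}
            injected = True
        merged.append(m)
    return merged


merged = merge_system_into_user([
    {"role": "system", "content": "Answer concisely."},
    {"role": "user", "content": "What is BioCypher?"},
])
```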

So far, only OpenAI models are available for the `generate_query()` wrapper method. Extend this to all platforms/models.
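One way to avoid hard-coding OpenAI is a registry that maps platform names to conversation factories; `generate_query()` would then only look up the platform. Everything here (registry, decorator, factory return value) is an assumed design sketch:

```python
# Hypothetical platform registry for query generation.
QUERY_MODEL_FACTORIES: dict[str, object] = {}


def register_query_platform(name: str):
    """Decorator registering a conversation factory for a platform."""
    def decorator(factory):
        QUERY_MODEL_FACTORIES[name] = factory
        return factory
    return decorator


def get_query_conversation(platform: str, **kwargs):
    """Look up the factory for a platform instead of hard-coding
    OpenAI inside generate_query()."""
    try:
        factory = QUERY_MODEL_FACTORIES[platform]
    except KeyError:
        raise ValueError(f"No query support for platform {platform!r}")
    return factory(**kwargs)


@register_query_platform("openai")
def _openai_factory(model: str = "gpt-3.5-turbo", **kwargs):
    # stub: real code would construct and return a Conversation object
    return {"platform": "openai", "model": model}


conv = get_query_conversation("openai")
```

New platforms (Anthropic, Xinference, ...) would register their own factories without touching the wrapper.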

It acts more like the primary model, also explaining things.

At the moment, the flexibility of the correcting agent is limited because it is explicitly created and handled inside the primary conversation. It would be easier if it were a separate conversation...
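A sketch of the decoupling: the corrector owns its own conversation handle rather than being instantiated inside the primary one. Here `ask` is any callable that sends a prompt and returns a reply (stubbed below); in a real setup it would be a second Conversation's query method. Class and prompt wording are assumptions:

```python
class CorrectingAgent:
    """Hypothetical corrector that is independent of the primary
    conversation: it only needs a prompt-in, reply-out callable."""

    def __init__(self, ask):
        self.ask = ask  # callable: prompt (str) -> reply (str)

    def correct(self, statement: str) -> str:
        return self.ask(
            "You are an independent fact-checker. "
            f"Point out errors in the following statement:\n{statement}"
        )


# Usage with a stub model in place of a real second conversation.
agent = CorrectingAgent(lambda prompt: f"Checked: {prompt.splitlines()[-1]}")
reply = agent.correct("TP53 is located on chromosome 18.")
```

Because the agent only depends on the callable, it can be swapped, disabled, or given a different model without touching the primary conversation.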

The main problem is that the other classes expect parameters in this method, while XinferenceConversation does not. We should probably separate these concerns into two methods (setting the API key, and simply connecting).