Massimiliano Pippi

Results 162 comments of Massimiliano Pippi

@abdulhuq811 thanks for reporting this! I see the problem; setting `max_connections` looks like an easy fix, but out of curiosity, how would that resolve the stale connection problem?
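
For context, a minimal sketch (a hypothetical pool, not this integration's actual code) of what a `max_connections` cap does: it bounds how many connections can be checked out at once, but a connection that has gone stale while sitting in the pool is still handed out as-is, which is why the cap alone doesn't address staleness.

```python
import queue
import sqlite3


class SimplePool:
    """Tiny illustrative connection pool; hypothetical, for discussion only."""

    def __init__(self, max_connections=5, database=":memory:"):
        # Pre-fill the pool; max_connections only bounds the pool size.
        self._pool = queue.LifoQueue(maxsize=max_connections)
        for _ in range(max_connections):
            self._pool.put(sqlite3.connect(database, check_same_thread=False))

    def acquire(self):
        # Blocks when all max_connections are checked out. Note: a stale
        # connection returned earlier is handed back out without any check.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)


pool = SimplePool(max_connections=2)
conn = pool.acquire()
assert conn.execute("SELECT 1").fetchone() == (1,)
pool.release(conn)
```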

@mbonet no worries and thanks for the review! Unfortunately this integration has no tests, so I'm blindly updating the code as I see fit; I'll figure out how to add...

@nerdai it looks like the SDK changed, but it's hard to tell how and when: the Python code is generated with Fern and the repo seems to be private, release notes...

@mbonet I should've fixed the pydantic errors; the `OctoAI` class instantiates fine on my branch, can you give it a shot?

@mbonet I managed to get an OctoAI account and I could test the actual calls. I think I fixed both of the problems you reported that I was able to reproduce,...

@mbonet I think the latest changes to LlamaIndex would imply your call should be rewritten like this:

```py
from llama_index.core.base.llms.types import ChatMessage, MessageRole

octoai.chat([ChatMessage(role=MessageRole.USER, content="Who is Paul Graham?")])
```

the...

@dmvieira do you have an example in pseudo-code of how you would like to see this feature implemented?

Hi @MarkDirksen thanks for the feature request and the offer to help! To make sure we're all on the same page before investing time into the actual implementation, we would...

I'm assuming you're referring to the `completion` tag:

```jinja2
{% completion model="gpt-4o-2024-08-06" %}
{% chat role="system" %}You are a helpful assistant{% endchat %}
{% chat role="user" %}List 5 important events...
```

> so currently the prompt cannot be constructed as it doesn't know about `response_format` I guess.

Correct, that was pseudo-code just to illustrate a possible user experience. I like this...
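
For reference, this is the general shape of an OpenAI-style structured-output `response_format` payload that such a tag would ultimately need to carry (the schema content here is illustrative, matching the "list 5 events" example above; it is not this project's actual implementation):

```python
import json

# Hypothetical payload following the public OpenAI "json_schema"
# response_format shape; the "events" schema itself is made up.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "events",
        "schema": {
            "type": "object",
            "properties": {
                "events": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["events"],
        },
    },
}

# The payload is plain JSON, so it can be serialized and attached to a request.
print(json.dumps(response_format, indent=2))
```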