philippHorn
The idea sounds good to me 👍 For my use-case it would be nice to have a way to enforce a text response after a certain amount of loops. In...
I had the same error and inspected the `result` where the traceback comes from:
```
result.model_extra
Out[2]: {'error': {'message': 'Provider returned error', 'code': 400, 'metadata': {'raw': '{"type":"error","error":{"type":"invalid_request_error","message":"Requests which include `tool_use`...
```
I looked a bit more. I think the problem is:
- OpenAI allows tool calls to be in the message history, even when the current API call does not include...
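If the mismatch really is that some providers reject a history containing tool-call turns when the request itself carries no tools, one workaround is to sanitize the history before such a call. A minimal sketch, assuming plain OpenAI-style message dicts (`strip_tool_messages` and the sample history are hypothetical, not part of any library):

```python
def strip_tool_messages(history):
    """Drop assistant tool-call turns and their tool results, so a
    follow-up request that omits the tools parameter stays valid for
    providers that reject orphaned tool_use blocks."""
    cleaned = []
    for msg in history:
        if msg.get("role") == "tool":
            continue  # tool result tied to a dropped tool call
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            continue  # assistant turn that only requested tool calls
        cleaned.append(msg)
    return cleaned


history = [
    {"role": "user", "content": "What is the weather?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1", "function": {"name": "get_weather"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "sunny"},
    {"role": "assistant", "content": "It is sunny."},
]
cleaned = strip_tool_messages(history)
```

This keeps the final assistant answer (which already summarizes the tool output) while removing the turns that trigger the 400 error.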
@ravishqureshi You're welcome. By the way, the issue is not specific to Gemini; I have it on Claude as well. I'd be curious to understand this a bit. What seems...
Thanks, for now I use this as a workaround:
```python
for attempt in range(MAX_LLM_CALLS):
    result = asyncio.run(agent.run(task=None))
    if any(isinstance(message, TextMessage) for message in result.messages):
        break
else:
    raise ValueError("Max attempts...
```
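The retry loop above, expanded into a self-contained sketch. `TextMessage`, `ToolCallMessage`, `RunResult`, `FakeAgent`, and `MAX_LLM_CALLS` are all stand-ins for the real autogen types and values, chosen only to make the `for`/`else` pattern runnable:

```python
import asyncio
from dataclasses import dataclass

MAX_LLM_CALLS = 5  # assumed cap; tune to your budget


@dataclass
class TextMessage:      # stand-in for autogen's TextMessage
    content: str


@dataclass
class ToolCallMessage:  # stand-in for a non-text message type
    name: str


@dataclass
class RunResult:
    messages: list


class FakeAgent:
    """Stub agent that emits tool calls twice, then a text answer."""

    def __init__(self):
        self.calls = 0

    async def run(self, task=None):
        self.calls += 1
        if self.calls < 3:
            return RunResult(messages=[ToolCallMessage(name="search")])
        return RunResult(messages=[TextMessage(content="done")])


agent = FakeAgent()
for attempt in range(MAX_LLM_CALLS):
    result = asyncio.run(agent.run(task=None))
    if any(isinstance(m, TextMessage) for m in result.messages):
        break
else:
    # for/else: this branch runs only if the loop never hit `break`
    raise ValueError("Max attempts reached without a text response")
```

The `for`/`else` keeps the failure path out of the happy path: the exception is raised only when every attempt produced tool calls and no text.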
Some feedback after using the approach without `reflect=True`:
- Generally it works pretty well, at least for my use-case
- But here and there I get cases where an agent...
@moonbox3 I adapted the script now to not use autogen. It does happen when using `BedrockChatCompletion`. I'll post the two full tracebacks I get as well:

Using INFERENCE_PROFILE_ID
```
Traceback...
```