Victor Dibia
I replied on #394. One thing to note here is that the work by @ragyabraham is more focused on **streaming completed responses from each agent within an active conversation**,...
@vistaarjuneja, are you using the updated API? For example, see the documentation on how to stream both the agent response AND the LLM response: https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/agents.html#streaming-messages
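In case it helps, here is a minimal sketch of message-level streaming with `run_stream()`, assuming autogen-agentchat 0.4 and the OpenAI client from autogen-ext (the agent name, model, and task below are illustrative):

```python
# Minimal sketch: stream each message from an agent run as it is produced.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.base import TaskResult
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    )
    # run_stream() yields each message/event (including inner LLM and tool
    # events) as it is produced, then a final TaskResult.
    async for item in agent.run_stream(task="Summarize the design doc."):
        if isinstance(item, TaskResult):
            print("stop reason:", item.stop_reason)
        else:
            print(item)

asyncio.run(main())
```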
The 0.2 architecture is not well suited to this. Consider using 0.4, where it's a single line of code to do this: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#streaming-tokens
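For reference, a hedged sketch of that one line: setting `model_client_stream=True` on an `AssistantAgent` enables token-level streaming (the model and task here are illustrative):

```python
# Sketch: token streaming in AgentChat 0.4 via model_client_stream=True.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o"),
        model_client_stream=True,  # the single line that enables token streaming
    )
    # Console renders the streamed token chunks as they arrive.
    await Console(agent.run_stream(task="Write a haiku about streams."))

asyncio.run(main())
```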
There might be some nuance here. There seem to be two behaviors that can be realized with termination conditions after `team.run()` is called:

1. **Single call to `team.run()`**: if...
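For illustration, a hedged sketch of the resume-after-termination behavior, assuming 0.4's `RoundRobinGroupChat` and `MaxMessageTermination` (agent names, model, and tasks are illustrative):

```python
# Sketch: a termination condition stops the first run, resets automatically,
# and a second run() without a task resumes the same conversation.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    team = RoundRobinGroupChat(
        [
            AssistantAgent("writer", model_client=model_client),
            AssistantAgent("critic", model_client=model_client),
        ],
        termination_condition=MaxMessageTermination(max_messages=4),
    )
    result1 = await team.run(task="Draft a short product blurb.")
    print(result1.stop_reason)  # the max-message condition fired
    result2 = await team.run()  # condition has reset; the team resumes from prior context
    print(result2.stop_reason)

asyncio.run(main())
```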
@afourney, @husseinmozannar, @gagb
Interesting idea. This sounds like what `team.load_state()` does in #4100. Also, in the above example:

```python
result = await team.run(task=[TextMessage(...), TextMessage(...)])
```

Does this populate the context for ALL...
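For context, a hedged sketch of the `save_state()`/`load_state()` round trip on a 0.4 team (the helper, agent name, model, and tasks are illustrative):

```python
# Sketch: persist a team's conversation state and restore it into a fresh
# team built with the same participants.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

def make_team() -> RoundRobinGroupChat:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    return RoundRobinGroupChat(
        [AssistantAgent("assistant", model_client=model_client)],
        termination_condition=MaxMessageTermination(max_messages=2),
    )

async def main() -> None:
    team = make_team()
    await team.run(task="Remember that the launch date is Friday.")
    state = await team.save_state()  # serialize the conversation state

    restored = make_team()           # a fresh team, same participants
    await restored.load_state(state)  # restore the saved context
    await restored.run(task="What is the launch date?")

asyncio.run(main())
```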
There is a new version of AutoGen; all behaviors are standardized on the updated AutoGen AgentChat API. See the relevant documentation below.

> See this video for a walkthrough of features...
Closing as stale
@oogetyboogety, the error you are seeing is related to a recent release of autogen-agentchat. We plan a release of autogenstudio within the next two days that will handle this. To...
@oogetyboogety, this is released in v0.4.2.1 (https://pypi.org/project/autogenstudio/0.4.2.1/). Let me know if you have any issues.