Follow up of #665: respond to more run states
@sidhujag added: according to the spec, there are other run states we can handle in the `wait_for_run` function, so I added those. Added a termination message parameter. Kwargs are passed through to `super().register_reply()` with `invoke_assistant` and `check_termination_and_human_reply` in order, so we can check for exit/human reply when `human_input_mode != "NEVER"`. Removed the hardcoded `human_input_mode`.
I added test cases.
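For context, here is a minimal polling sketch of the run states named in the OpenAI Assistants API spec (queued, in_progress, requires_action, cancelling, cancelled, failed, completed, expired). This is an illustration only, not this PR's implementation; the standalone `wait_for_run` signature and the use of the `openai` v1 client are assumptions.

```python
# Illustrative sketch only -- not this PR's code. Assumes the openai v1 client;
# the run statuses are those defined by the Assistants API spec.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def wait_for_run(thread_id: str, run_id: str, poll_interval: float = 1.0):
    """Poll a run until it needs tool outputs or reaches a terminal state."""
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in ("queued", "in_progress", "cancelling"):
            time.sleep(poll_interval)  # still working; keep polling
        elif run.status == "requires_action":
            return run  # caller must submit tool outputs, then resume polling
        elif run.status in ("completed", "cancelled", "failed", "expired"):
            return run  # terminal states
        else:
            raise ValueError(f"Unexpected run status: {run.status}")
```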
Why are these changes needed?
Related issue number
#665
Checks
- [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.
@IANTHEREAL I created a new PR with @sidhujag changes + test cases to speed us up.
OK, let's speed up.
Codecov Report
Attention: 57 lines in your changes are missing coverage. Please review.
Comparison is base (083f522) 26.63% compared to head (2b05fdc) 37.71%.
| Files | Patch % | Lines |
|---|---|---|
| autogen/agentchat/contrib/gpt_assistant_agent.py | 0.00% | 57 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #899 +/- ##
===========================================
+ Coverage 26.63% 37.71% +11.08%
===========================================
Files 28 28
Lines 3777 3802 +25
Branches 858 906 +48
===========================================
+ Hits 1006 1434 +428
+ Misses 2700 2242 -458
- Partials 71 126 +55
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 37.66% <0.00%> (+11.08%) | :arrow_up: |
@gagb Hey, is this pr ready for review and merge?
Yes, just marked it ready for review.
LGTM, and remember to add a method to set `self.cancellation_requested`.
@IANTHEREAL, actually, I started implementing this and realized a mistake!
When would an end user want to request cancellation? I think there is only one case: when they are waiting for an in-progress run to finish.
But if a run is in progress, they will have to set the value of `cancellation_requested` asynchronously. So if we are providing a method to set `cancellation_requested`, e.g., say `assistant.request_cancellation()`, that method ought to be an async function?
Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?
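For illustration, a minimal sketch of the asynchronous flag being discussed here, assuming asyncio; the class and method names (`AssistantRunner`, `request_cancellation`) follow this thread's naming and are hypothetical, not a merged autogen API:

```python
# Hypothetical sketch following the thread's naming -- not a merged autogen API.
import asyncio


class AssistantRunner:
    def __init__(self) -> None:
        self.cancellation_requested = asyncio.Event()

    async def request_cancellation(self) -> None:
        # Called from another task while wait_for_run is polling.
        self.cancellation_requested.set()

    async def wait_for_run(self, poll_interval: float = 1.0) -> None:
        while not self.cancellation_requested.is_set():
            # ... retrieve the run and break on a terminal status here ...
            await asyncio.sleep(poll_interval)
```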
What if another process loads the same assistant and requests cancellation?
I was thinking that if a function call hangs, this cancellation mechanism won't actually be much use, and the only thing users can do is kill the corresponding agent, right? And this is not a problem unique to the GPT assistant.
In what scenarios would users want to cancel?
- Most likely, the run is stuck in 'in_progress' or 'queued' status.
- Less likely, during the execution of some functions, users decide there is no need to continue this round.
My biggest confusion right now is: assuming the user successfully cancels, what happens next? Will the next run process the messages from the cancelled run again, or are they simply discarded? Does OpenAI support that?
Please let me know if my thoughts stray.
@IANTHEREAL, how about we defer figuring out cancellation to a separate issue, and at least proceed with merging the intended functionality of this PR?
> What if another process loads the same assistant and requests cancellation?

Yes, this is feasible if the second process can access the `thread_id`. It's even easier if that process can access the `run_id` (otherwise we'd have to make assumptions, such as that the user is requesting to cancel the last run).
The challenge is ensuring the second process can access the `thread_id`. I am working on a solution, but I suggest separating the cancellation feature (see the comment above) from the intended use case of this PR to unblock it.
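As a sketch of that cross-process scenario (assuming the second process has obtained the `thread_id` and `run_id` out of band), the OpenAI v1 client exposes a cancel endpoint for runs:

```python
# Hypothetical sketch -- not code from this PR. A second process cancels a run,
# assuming it already knows the thread_id and run_id.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def cancel_run(thread_id: str, run_id: str) -> str:
    """Ask the API to cancel an in-progress run and report its new status."""
    run = client.beta.threads.runs.cancel(thread_id=thread_id, run_id=run_id)
    return run.status  # typically "cancelling", then "cancelled" once it stops
```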
Agreed, let's merge it ASAP. LGTM now.
Just removed the unnecessary cancellation code. Please review and approve it for merging.
Rest LGTM
@sidhujag could we get a review?
@gagb this PR is approved, but from the discussion I am not sure if you'd like to merge it. Could you clarify? Thanks.
I don't understand?
There are unresolved conversations, and there are merge conflicts. I'm not sure about the plan for this PR.
Closing because of inactivity.