
Follow up of #665: respond to more run states

Open gagb opened this issue 2 years ago • 18 comments

@sidhujag added: according to the spec there are other states we can handle in the wait_for_run function, so I added those. Added a termination msg param. Passed kwargs through to super() and called register_reply with invoke_assistant and check_termination_and_human_reply, in that order, so we can check for exit/human reply when human_input_mode != "NEVER". Removed the hardcoded human_input_mode.

I added test cases

Why are these changes needed?

Related issue number

#665

Checks

  • [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
  • [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
  • [ ] I've made sure all auto checks have passed.

gagb avatar Dec 06 '23 21:12 gagb
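The extra run-state handling described in the PR summary can be sketched roughly as follows. This is an illustrative assumption, not the PR's actual code: the helper name, the injected `retrieve` callable, and the fake run are all hypothetical stand-ins for the real Assistants API client calls.

```python
import time
from dataclasses import dataclass

# States in which a run is finished and polling should stop (per the
# Assistants API spec the PR references).
TERMINAL_STATES = {"completed", "cancelled", "failed", "expired"}


def wait_for_run(retrieve, poll_interval=0.01):
    """Poll a run until it reaches a terminal or actionable state.

    `retrieve` is any zero-arg callable returning an object with a
    `.status` attribute (e.g. a closure over the real client's
    retrieve-run call). Hypothetical signature for illustration.
    """
    while True:
        run = retrieve()
        if run.status in TERMINAL_STATES:
            return run  # nothing more to do for this run
        if run.status == "requires_action":
            return run  # caller must submit tool outputs
        # "queued" / "in_progress" / "cancelling": keep polling
        time.sleep(poll_interval)


# Demo with a fake run that moves queued -> in_progress -> completed.
@dataclass
class FakeRun:
    status: str


_states = iter(["queued", "in_progress", "completed"])
result = wait_for_run(lambda: FakeRun(next(_states)))
print(result.status)  # completed
```

The point of returning on `requires_action` as well as on terminal states is that function calling needs control handed back to the agent mid-run, which is exactly why the spec defines more than just "completed".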

@IANTHEREAL I created a new PR with @sidhujag changes + test cases to speed us up.

gagb avatar Dec 06 '23 21:12 gagb

@IANTHEREAL I created a new PR with @sidhujag changes + test cases to speed us up.

OK, let's speed up.

IANTHEREAL avatar Dec 07 '23 01:12 IANTHEREAL

Codecov Report

Attention: 57 lines in your changes are missing coverage. Please review.

Comparison is base (083f522) 26.63% compared to head (2b05fdc) 37.71%.

Files Patch % Lines
autogen/agentchat/contrib/gpt_assistant_agent.py 0.00% 57 Missing :warning:
Additional details and impacted files
@@             Coverage Diff             @@
##             main     #899       +/-   ##
===========================================
+ Coverage   26.63%   37.71%   +11.08%     
===========================================
  Files          28       28               
  Lines        3777     3802       +25     
  Branches      858      906       +48     
===========================================
+ Hits         1006     1434      +428     
+ Misses       2700     2242      -458     
- Partials       71      126       +55     
Flag Coverage Δ
unittests 37.66% <0.00%> (+11.08%) :arrow_up:

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

codecov-commenter avatar Dec 08 '23 07:12 codecov-commenter

@gagb Hey, is this pr ready for review and merge?

IANTHEREAL avatar Dec 08 '23 13:12 IANTHEREAL

@gagb Hey, is this pr ready for review and merge?

Yes just marked it ready for review.

gagb avatar Dec 09 '23 22:12 gagb

LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake!

When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish.

But if a run is in-progress, they will have to set the value of cancellation_requested asynchronously? So if we are providing a method to set cancellation_requested e.g., say assistant.request_cancellation() that method ought to be an async function?

gagb avatar Dec 11 '23 02:12 gagb

LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake!

When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish.

But if a run is in-progress, they will have to set the value of cancellation_requested asynchronously? So if we are providing a method to set cancellation_requested e.g., say assistant.request_cancellation() that method ought to be an async function?

Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?

gagb avatar Dec 11 '23 02:12 gagb
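A minimal sketch of the async cancellation hook discussed above, under the assumption that the polling loop and the cancel request run concurrently on the same event loop. The class and method names (`AssistantRunner`, `request_cancellation`, `wait_for_run`) are hypothetical, not AutoGen's actual API.

```python
import asyncio


class AssistantRunner:
    """Hypothetical sketch; names are illustrative, not AutoGen's API."""

    def __init__(self):
        self.cancellation_requested = False

    async def request_cancellation(self):
        # Must be awaitable (or otherwise concurrency-safe) so it can run
        # while wait_for_run is blocked on an in-progress run.
        self.cancellation_requested = True

    async def wait_for_run(self):
        # Simulated polling loop: checks the flag between polls.
        for _ in range(100):
            if self.cancellation_requested:
                return "cancelled"  # real code would call the run-cancel API here
            await asyncio.sleep(0.01)  # yields so request_cancellation can run
        return "completed"


async def main():
    runner = AssistantRunner()
    wait = asyncio.create_task(runner.wait_for_run())
    await asyncio.sleep(0.03)  # let the run get "in progress"
    await runner.request_cancellation()
    return await wait


result = asyncio.run(main())
print(result)  # cancelled
```

This illustrates gagb's point: a plain synchronous setter could never fire while a blocking `wait_for_run` holds the thread, so the setter only makes sense as a coroutine (or as a call from another thread/process).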

What if another process loads the same assistant and requests cancellation?

sonichi avatar Dec 11 '23 04:12 sonichi

LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake! When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish. But if a run is in-progress, they will have to set the value of cancellation_requested asynchronously? So if we are providing a method to set cancellation_requested e.g., say assistant.request_cancellation() that method ought to be an async function?

Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?

I was thinking that if a function call hangs, this cancellation mechanism actually won't be of much use, and the only thing users can do is kill the corresponding agent, right? And this is not a problem unique to the GPT assistant.

In what scenarios would users want to cancel?

  • Most likely, the run is always in 'in_progress' or 'queued' status.
  • It's less likely that, during the execution of some functions, users decide there's no need to continue this round.

My biggest confusion right now is: assuming the user successfully cancels, what happens next? Will the next run process the cancelled messages again, or are they simply discarded? Does OpenAI support this?

Please let me know if my thoughts stray.

IANTHEREAL avatar Dec 11 '23 07:12 IANTHEREAL

LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake! When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish. But if a run is in-progress, they will have to set the value of cancellation_requested asynchronously? So if we are providing a method to set cancellation_requested e.g., say assistant.request_cancellation() that method ought to be an async function?

Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?

I was thinking that if a function call hangs, this cancellation mechanism actually won't be of much use, and the only thing users can do is kill the corresponding agent, right? And this is not a problem unique to the GPT assistant.

In what scenarios would users want to cancel?

  • Most likely, the run is always in 'in_progress' or 'queued' status.
  • It's less likely that, during the execution of some functions, users decide there's no need to continue this round.

My biggest confusion right now is: assuming the user successfully cancels, what happens next? Will the next run process the cancelled messages again, or are they simply discarded? Does OpenAI support this?

Please let me know if my thoughts stray.

@IANTHEREAL, How about we defer figuring out the cancellation to a different issue? And at least proceed with merging the intended functionality of this PR?

gagb avatar Dec 14 '23 17:12 gagb

What if another process loads the same assistant and requests cancellation?

Yes, this is feasible if the second process can access the thread_id. It's even easier if that process can access the run_id (otherwise we'd have to make assumptions, such as assuming the user is requesting to cancel the last run).

The challenge is ensuring the second process can access the thread_id. I am working on a solution. But I suggest separating the cancellation feature (see comment above) from the intended use case of this PR to unblock.

gagb avatar Dec 14 '23 17:12 gagb
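The cross-process cancellation idea could look something like the sketch below, assuming the second process can obtain the thread_id (and, when available, the run_id) from shared storage. The function and the injected client are hypothetical; the real Assistants API exposes a run-cancel endpoint with this general shape, but a fake client is used here so the example stays self-contained.

```python
def cancel_run_for_thread(client, thread_id, run_id=None):
    """Cancel a specific run, or fall back to the most recent one.

    Falling back to the latest run is exactly the assumption gagb
    mentions having to make when only the thread_id is known.
    """
    if run_id is None:
        runs = client.list_runs(thread_id)  # assumed newest-first ordering
        if not runs:
            return None
        run_id = runs[0]
    return client.cancel_run(thread_id, run_id)


# Minimal fake client standing in for the real API client.
class FakeClient:
    def __init__(self, runs):
        self._runs = runs        # run ids, newest first
        self.cancelled = []      # records cancel calls for the demo

    def list_runs(self, thread_id):
        return self._runs

    def cancel_run(self, thread_id, run_id):
        self.cancelled.append(run_id)
        return run_id


client = FakeClient(["run_2", "run_1"])
result = cancel_run_for_thread(client, "thread_abc")
print(result)  # run_2
```

The hard part, as noted above, is not the cancel call itself but getting the thread_id into the second process in the first place.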

LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake! When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish. But if a run is in-progress, they will have to set the value of cancellation_requested asynchronously? So if we are providing a method to set cancellation_requested e.g., say assistant.request_cancellation() that method ought to be an async function?

Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?

I was thinking that if a function call hangs, this cancellation mechanism actually won't be of much use, and the only thing users can do is kill the corresponding agent, right? And this is not a problem unique to the GPT assistant. In what scenarios would users want to cancel?

  • Most likely, the run is always in 'in_progress' or 'queued' status.
  • It's less likely that, during the execution of some functions, users decide there's no need to continue this round.

My biggest confusion right now is: assuming the user successfully cancels, what happens next? Will the next run process the cancelled messages again, or are they simply discarded? Does OpenAI support this? Please let me know if my thoughts stray.

@IANTHEREAL, How about we defer figuring out the cancellation to a different issue? And at least proceed with merging the intended functionality of this PR?

Agree to merge it ASAP. Now LGTM

IANTHEREAL avatar Dec 15 '23 00:12 IANTHEREAL

LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake! When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish. But if a run is in-progress, they will have to set the value of cancellation_requested asynchronously? So if we are providing a method to set cancellation_requested e.g., say assistant.request_cancellation() that method ought to be an async function?

Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?

I was thinking that if a function call hangs, this cancellation mechanism actually won't be of much use, and the only thing users can do is kill the corresponding agent, right? And this is not a problem unique to the GPT assistant. In what scenarios would users want to cancel?

  • Most likely, the run is always in 'in_progress' or 'queued' status.
  • It's less likely that, during the execution of some functions, users decide there's no need to continue this round.

My biggest confusion right now is: assuming the user successfully cancels, what happens next? Will the next run process the cancelled messages again, or are they simply discarded? Does OpenAI support this? Please let me know if my thoughts stray.

@IANTHEREAL, How about we defer figuring out the cancellation to a different issue? And at least proceed with merging the intended functionality of this PR?

Agree to merge it ASAP. Now LGTM

Just removed the unnecessary cancellation code. Please check and approve it for merging.

gagb avatar Dec 16 '23 20:12 gagb

Rest LGTM

IANTHEREAL avatar Dec 17 '23 07:12 IANTHEREAL

@sidhujag could we get a review?

gagb avatar Dec 17 '23 14:12 gagb

@gagb this PR is approved but I am not sure if you'd like to merge it from the discussion. Could you clarify? Thanks.

sonichi avatar Dec 19 '23 13:12 sonichi

@gagb this PR is approved but I am not sure if you'd like to merge it from the discussion. Could you clarify? Thanks.

I don't understand?

gagb avatar Jan 10 '24 23:01 gagb

@gagb this PR is approved but I am not sure if you'd like to merge it from the discussion. Could you clarify? Thanks.

I don't understand?

There are conversations that are not resolved. And there are conflicts. I'm not sure about the plan for this PR.

sonichi avatar Jan 14 '24 02:01 sonichi

Closing because of inactivity.

gagb avatar Mar 25 '24 23:03 gagb