
Fix ai_name not passed to Agent

Open · tmalahie opened this pull request 1 year ago • 9 comments

Background

The ai_name parameter passed to the Agent constructor is always an empty string.

It leads to confusing logs like this:

Enter 'n' to exit program, or enter feedback for ...

instead of:

Enter 'n' to exit program, or enter feedback for TSCGPT...

Changes

I noticed that ai_name is present in the ai_config returned by the construct_main_ai_config() function in main.py. I just made sure to pass that value on as the ai_name argument.
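To make the shape of the bug concrete, here is a self-contained sketch; the names mirror the ones in main.py, but the class and function bodies are simplified stand-ins rather than the real AutoGPT code:

```python
# Self-contained sketch of the bug and the fix. AIConfig, Agent and
# construct_main_ai_config are simplified stand-ins, not the real AutoGPT code.
from dataclasses import dataclass


@dataclass
class AIConfig:
    ai_name: str = ""


class Agent:
    def __init__(self, ai_name: str, config: AIConfig):
        self.ai_name = ai_name
        self.config = config


def construct_main_ai_config() -> AIConfig:
    # In AutoGPT this is built from the user's answers / ai_settings.yaml.
    return AIConfig(ai_name="TSCGPT")


ai_name = ""
ai_config = construct_main_ai_config()

# Before the fix: the still-empty ai_name variable is what reaches the Agent.
agent = Agent(ai_name=ai_name, config=ai_config)
print(repr(agent.ai_name))  # '' -> "...or enter feedback for ..."

# The fix: read the name back out of the config before constructing the Agent.
ai_name = ai_config.ai_name
agent = Agent(ai_name=ai_name, config=ai_config)
print(repr(agent.ai_name))  # 'TSCGPT' -> "...or enter feedback for TSCGPT..."
```

The actual change is just that one assignment reading the name back out of the config before the Agent is constructed.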

Documentation

It's just a two-line fix, so I don't think it needs to be documented, but if you think otherwise please tell me in a comment on the PR :)

Test Plan

Reproduction steps are quite straightforward: just enter any prompt and wait for the "NEXT ACTION" command. Before the fix you'll see

Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands'n' to exit program, or enter feedback for ...

After the fix you'll see

Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands'n' to exit program, or enter feedback for <Name of the agent>...

PR Quality Checklist

  • [x] My pull request is atomic and focuses on a single change.
  • [x] I have thoroughly tested my changes with multiple different prompts.
  • [x] I have considered potential risks and mitigations for my changes.
  • [x] I have documented my changes clearly and comprehensively.
  • [x] I have not snuck in any "extra" small tweaks or changes.

tmalahie · May 07 '23 11:05

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name  Status    Updated (UTC)
docs  ✅ Ready  May 10, 2023 5:33am

vercel[bot] · May 07 '23 11:05

Note: my fix is probably not the cleanest solution. I noticed that in agent.py we sometimes use ai_name and sometimes config.ai_name. Is there a reason for alternating between the two? If there's no particular reason, it would probably be better to remove the ai_name parameter from the constructor and use config.ai_name everywhere for consistency. I can take care of it if you want :)
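As a rough sketch of what that consistency refactor could look like (stand-in code, not the actual agent.py):

```python
# Stand-in sketch, not the real agent.py: keep only the config and derive
# the name from it, so there is a single source of truth.
class Agent:
    def __init__(self, config):
        self.config = config

    @property
    def ai_name(self) -> str:
        # Call sites keep reading agent.ai_name, but there is no separate
        # constructor argument left to forget.
        return self.config.ai_name
```

Call sites that read agent.ai_name would keep working, and there would no longer be a separate constructor argument to forget.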

tmalahie · May 07 '23 11:05

> Note: my fix is probably not the cleanest solution. I noticed that in agent.py we sometimes use ai_name and sometimes config.ai_name. Is there a reason for alternating between the two? If there's no particular reason, it would probably be better to remove the ai_name parameter from the constructor and use config.ai_name everywhere for consistency. I can take care of it if you want :)

Without having looked at the code in question, it would make sense for the top-level agent routines to refer to the config values from the yaml file to get the name for the top-level (outermost) agent, whereas any dynamic (sub) agents would obviously have a different name, which is why handling it that way would make sense - otherwise all your sub-agents would inherit the same name </pure speculation>

Boostrix · May 07 '23 12:05

> Without having looked at the code in question, it would make sense for the top-level agent routines to refer to the config values from the yaml file to get the name for the top-level (outermost) agent, whereas any dynamic (sub) agents would obviously have a different name, which is why handling it that way would make sense - otherwise all your sub-agents would inherit the same name </pure speculation>

Interesting theory, but I don't see anywhere else in the code where we instantiate Agent (apart from unit tests & integration tests). I don't know how sub-agents are handled, but they don't seem to be subject to the inheritance problem you're talking about.

tmalahie · May 07 '23 12:05

take a look at the command layer, you should find something called start_agent (more or less), that should instantiate a new Agent object with the ID/task etc (again, pure speculation / off the top of my head and without having followed the re-arch massacre)

Boostrix · May 07 '23 12:05

> take a look at the command layer, you should find something called start_agent (more or less), that should instantiate a new Agent object with the ID/task etc (again, pure speculation / off the top of my head and without having followed the re-arch massacre)

It doesn't seem to instantiate an actual agent; it just calls a singleton called agent_manager, which handles its agents as basic tuples (task, messages, model) and not actual Agent instances. When you search for "Agent(" in the whole project you only find it in tests and main.py. (Also, I'm new to the project - what is this re-arch massacre you're referring to? 🤔)
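For anyone not familiar with that part of the code, the pattern I'm describing looks roughly like this - a simplified stand-in, not the actual AgentManager:

```python
# Simplified stand-in for the pattern described above, not the actual
# AgentManager: sub-"agents" are just tuples in a dict, never Agent instances.
class AgentManager:
    def __init__(self):
        self.next_key = 0
        self.agents = {}  # key -> (task, messages, model)

    def create_agent(self, task: str, prompt: str, model: str) -> int:
        messages = [{"role": "user", "content": prompt}]
        key = self.next_key
        self.next_key += 1
        self.agents[key] = (task, messages, model)
        return key
```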

tmalahie · May 07 '23 16:05

they basically stopped processing PRs in order to re-architect the project, out of the realization that the current/previous architecture did not scale (given the sheer number of PRs). According to dev comments, we should expect the re-arch effort to take between 7-14 days - and when the dust settles, there's probably gonna be pure havoc, because we'll see ~400 open PRs that no longer apply cleanly :-)

Boostrix · May 07 '23 17:05

I see. Better leave this PR as is for now then :-)

tmalahie · May 07 '23 17:05

Codecov Report

Patch coverage has no change and project coverage change: -0.04 :warning:

Comparison is base (517c080) 62.30% compared to head (f7507db) 62.26%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3948      +/-   ##
==========================================
- Coverage   62.30%   62.26%   -0.04%     
==========================================
  Files          73       73              
  Lines        3337     3339       +2     
  Branches      481      482       +1     
==========================================
  Hits         2079     2079              
- Misses       1111     1113       +2     
  Partials      147      147              
Impacted Files Coverage Δ
autogpt/main.py 0.00% <0.00%> (ø)

:umbrella: View full report in Codecov by Sentry.

codecov[bot] · May 10 '23 05:05

Deployment failed with the following error:

Resource is limited - try again in 2 hours (more than 100, code: "api-deployments-free-per-day").

vercel[bot] · May 16 '23 03:05

Deployment failed with the following error:

Resource is limited - try again in 55 minutes (more than 100, code: "api-deployments-free-per-day").

vercel[bot] · May 16 '23 04:05