
feat(autogpt_server): Add versioning of agents

Open · Swiftyos opened this pull request 1 year ago • 5 comments

User description

Background

Changes 🏗️

PR Quality Scorecard ✨

  • [x] Have you used the PR description template?   +2 pts
  • [ ] Is your pull request atomic, focusing on a single change?   +5 pts
  • [ ] Have you linked the GitHub issue(s) that this PR addresses?   +5 pts
  • [ ] Have you documented your changes clearly and comprehensively?   +5 pts
  • [ ] Have you changed or added a feature?   -4 pts
    • [ ] Have you added/updated corresponding documentation?   +4 pts
    • [ ] Have you added/updated corresponding integration tests?   +5 pts
  • [ ] Have you changed the behavior of AutoGPT?   -5 pts
    • [ ] Have you also run agbenchmark to verify that these changes do not regress performance?   +10 pts

PR Type

Enhancement, Tests


Description

  • Added versioning support to various data models including ExecutionResult, Graph, and ExecutionSchedule.
  • Implemented functions for handling graph versioning, including creation, updating, and history retrieval.
  • Updated server API to include routes for updating graphs and retrieving graph history.
  • Modified test cases to support and validate versioning features.
  • Updated Prisma schema to include versioning attributes in relevant models.
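
The versioning scheme described above can be sketched as follows. This is a minimal illustration using stdlib dataclasses; the actual models in this PR are pydantic, and any field or helper not named in the diff summary is an assumption:

```python
from dataclasses import dataclass


@dataclass
class Graph:
    # Versioning attributes added by this PR (per the diff summary)
    graph_id: str
    version: int = 1          # monotonically increasing per graph_id
    is_active: bool = False   # at most one active version per graph
    is_template: bool = False


def next_version(existing_versions: list[int]) -> int:
    # New graph versions are allocated as max(existing) + 1;
    # a graph with no prior versions starts at 1.
    return max(existing_versions, default=0) + 1
```

The `next_version` rule matches the `max(g.version for g in existing_versions) + 1` logic visible in the suggestion diffs further down this thread.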

Changes walkthrough 📝

Relevant files
Enhancement: 8 files

execution.py: Add versioning support to execution data models and functions
rnd/autogpt_server/autogpt_server/data/execution.py
  • Added version attribute to ExecutionResult model.
  • Updated from_db method to include version.
  • Modified create_graph_execution function to accept version.
  • Added TODO comment about scoping to graph version.
  +5/-2

graph.py: Implement versioning and history for graph data models
rnd/autogpt_server/autogpt_server/data/graph.py
  • Added version, is_active, and is_template attributes to Graph model.
  • Updated from_db methods to include new attributes.
  • Added functions for graph versioning and history retrieval.
  • Modified create_graph function to handle versioning.
  +102/-18

schedule.py: Add versioning to execution schedule data models
rnd/autogpt_server/autogpt_server/data/schedule.py
  • Added version attribute to ExecutionSchedule model.
  • Updated from_db and add_schedule methods to include version.
  +3/-0

manager.py: Include versioning in execution manager operations
rnd/autogpt_server/autogpt_server/executor/manager.py
  • Updated add_execution method to include version when creating graph execution.
  +3/-2

scheduler.py: Add versioning to execution scheduler operations
rnd/autogpt_server/autogpt_server/executor/scheduler.py
  • Updated add_execution_schedule method to include version.
  +2/-1

model.py: Update server models to support versioning
rnd/autogpt_server/autogpt_server/server/model.py
  • Added version attribute to CreateGraph model.
  • Removed unused methods from Methods enum.
  +6/-13

server.py: Extend server API to support graph versioning and history
rnd/autogpt_server/autogpt_server/server/server.py
  • Added API routes for updating graphs and retrieving graph history.
  • Updated various methods to handle graph versioning.
  +85/-168

schema.prisma: Add versioning to Prisma schema models
rnd/autogpt_server/schema.prisma
  • Added version, is_active, and is_template attributes to AgentGraph model.
  • Updated AgentNode, AgentGraphExecution, and AgentGraphExecutionSchedule models to include version.
  +15/-7

Tests: 3 files

test_manager.py: Update execution manager tests for versioning support
rnd/autogpt_server/test/executor/test_manager.py
  • Updated test cases to use graph_id instead of id.
  • Included version in test graph creation and execution.
  +4/-3

test_scheduler.py: Update execution scheduler tests for versioning support
rnd/autogpt_server/test/executor/test_scheduler.py
  • Updated test cases to include version in execution schedules.
  +24/-21

test_con_manager.py: Update connection manager tests for versioning support
rnd/autogpt_server/test/server/test_con_manager.py
  • Updated test cases to include version in execution results.
  +2/-0

    💡 PR-Agent usage: Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    Swiftyos avatar Jul 12 '24 10:07 Swiftyos

    Deploy Preview for auto-gpt-docs ready!

Latest commit: 3f5340329685f91da807ac51bb69ece7e7be12ce
Latest deploy log: https://app.netlify.com/sites/auto-gpt-docs/deploys/669102bab430be00080d0218
Deploy Preview: https://deploy-preview-7387--auto-gpt-docs.netlify.app

    netlify[bot] avatar Jul 12 '24 10:07 netlify[bot]

    CI Failure Feedback 🧐

    (Checks updated until commit https://github.com/Significant-Gravitas/AutoGPT/commit/3f5340329685f91da807ac51bb69ece7e7be12ce)

    Action: test (3.10, windows)

    Failed stage: Run pytest with coverage [❌]

    Failure summary:

    The action failed due to the following reasons:

  • test_available_blocks in test/block/test_block.py failed because of a ValueError: not enough values
    to unpack (expected 2, got 1).
  • test_agent_execution in test/executor/test_manager.py failed due to a
    pydantic_core._pydantic_core.ValidationError: Input should be a valid string for graph_id.
  • test_agent_schedule in test/executor/test_scheduler.py failed due to a
    pydantic_core._pydantic_core.ValidationError: Input should be a valid string for graph_id.
  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Microsoft Windows Server 2022
    ...
    
    415:  ============================= test session starts =============================
    416:  platform win32 -- Python 3.10.11, pytest-8.2.2, pluggy-1.5.0 -- C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\autogpt-server-Hzddpd5Y-py3.10\Scripts\python.exe
    417:  cachedir: .pytest_cache
    418:  rootdir: D:\a\AutoGPT\AutoGPT\rnd\autogpt_server
    419:  configfile: pyproject.toml
    420:  plugins: anyio-4.4.0, asyncio-0.23.7
    421:  asyncio: mode=auto
    422:  collecting ... collected 17 items
    423:  test/block/test_block.py::test_available_blocks FAILED                   [  5%]
    424:  test/executor/test_manager.py::test_agent_execution FAILED               [ 11%]
    425:  test/executor/test_scheduler.py::test_agent_schedule FAILED              [ 17%]
    ...
    
    432:  test/server/test_ws_api.py::test_websocket_router_subscribe PASSED       [ 58%]
    433:  test/server/test_ws_api.py::test_websocket_router_unsubscribe PASSED     [ 64%]
    434:  test/server/test_ws_api.py::test_websocket_router_invalid_method PASSED  [ 70%]
    435:  test/server/test_ws_api.py::test_handle_subscribe_success PASSED         [ 76%]
    436:  test/server/test_ws_api.py::test_handle_subscribe_missing_data PASSED    [ 82%]
    437:  test/server/test_ws_api.py::test_handle_unsubscribe_success PASSED       [ 88%]
    438:  test/server/test_ws_api.py::test_handle_unsubscribe_missing_data PASSED  [ 94%]
    439:  test/util/test_service.py::test_service_creation PASSED                  [100%]
    440:  ================================== FAILURES ===================================
    ...
    
    459:  prefix = " " * 4 + prefix
    460:  for mock_name, mock_obj in (block.test_mock or {}).items():
    461:  log(f"{prefix} mocking {mock_name}...")
    462:  setattr(block, mock_name, mock_obj)
    463:  for input_data in block.test_input:
    464:  log(f"{prefix} in: {input_data}")
    465:  for output_name, output_data in block.execute(input_data):
    466:  if output_index >= len(block.test_output):
    467:  raise ValueError(f"{prefix} produced output more than expected")
    468:  >               ex_output_name, ex_output_data = block.test_output[output_index]
    469:  E               ValueError: not enough values to unpack (expected 2, got 1)
    470:  test\block\test_block.py:34: ValueError
    ...
    
    487:  test\executor\test_manager.py:38: in create_test_graph
    488:  test_graph = graph.Graph(
    489:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    490:  self = Graph(), graph_id = ''
    491:  data = {'description': 'Test graph', 'graph_id': <built-in function id>, 'links': [Link(source_id='ab12493d-2ddf-45b7-b793-63...4', sink_id='fde4b2b8-61b1-45aa-b4d7-4f090bcfc29d', source_name='output', sink_name='text')], 'name': 'TestGraph', ...}
    492:  def __init__(self, graph_id: str = "", **data: Any):
    493:  data["graph_id"] = id or str(uuid.uuid4())
    494:  >       super().__init__(**data)
    495:  E       pydantic_core._pydantic_core.ValidationError: 1 validation error for Graph
    496:  E       graph_id
    497:  E         Input should be a valid string [type=string_type, input_value=<built-in function id>, input_type=builtin_function_or_method]
    498:  E           For further information visit https://errors.pydantic.dev/2.8/v/string_type
    499:  autogpt_server\data\graph.py:83: ValidationError
    ...
    
    507:  test\executor\test_manager.py:38: in create_test_graph
    508:  test_graph = graph.Graph(
    509:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    510:  self = Graph(), graph_id = ''
    511:  data = {'description': 'Test graph', 'graph_id': <built-in function id>, 'links': [Link(source_id='73e63128-d8e4-4401-9126-d5...d', sink_id='200187cc-a308-428f-a3bf-811cd2f1c736', source_name='output', sink_name='text')], 'name': 'TestGraph', ...}
    512:  def __init__(self, graph_id: str = "", **data: Any):
    513:  data["graph_id"] = id or str(uuid.uuid4())
    514:  >       super().__init__(**data)
    515:  E       pydantic_core._pydantic_core.ValidationError: 1 validation error for Graph
    516:  E       graph_id
    517:  E         Input should be a valid string [type=string_type, input_value=<built-in function id>, input_type=builtin_function_or_method]
    518:  E           For further information visit https://errors.pydantic.dev/2.8/v/string_type
    519:  autogpt_server\data\graph.py:83: ValidationError
    520:  =========================== short test summary info ===========================
    521:  FAILED test/block/test_block.py::test_available_blocks - ValueError: not enough values to unpack (expected 2, got 1)
    522:  FAILED test/executor/test_manager.py::test_agent_execution - pydantic_core._pydantic_core.ValidationError: 1 validation error for Graph
    523:  graph_id
    524:  Input should be a valid string [type=string_type, input_value=<built-in function id>, input_type=builtin_function_or_method]
    525:  For further information visit https://errors.pydantic.dev/2.8/v/string_type
    526:  FAILED test/executor/test_scheduler.py::test_agent_schedule - pydantic_core._pydantic_core.ValidationError: 1 validation error for Graph
    527:  graph_id
    528:  Input should be a valid string [type=string_type, input_value=<built-in function id>, input_type=builtin_function_or_method]
    529:  For further information visit https://errors.pydantic.dev/2.8/v/string_type
    530:  ======================== 3 failed, 14 passed in 8.88s =========================
    531:  ##[error]Process completed with exit code 1.
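
For context, the two ValidationError failures above both trace back to the constructor quoted in the logs: in `data["graph_id"] = id or str(uuid.uuid4())`, the bare name `id` resolves to Python's builtin `id` function (which is always truthy) rather than the `graph_id` parameter, so the function object itself is passed into the pydantic model. A minimal sketch of the intended default logic (standalone helper, name illustrative):

```python
import uuid


def resolve_graph_id(graph_id: str = "") -> str:
    # Bug in the traceback: `id or str(uuid.uuid4())` references the builtin
    # `id` function, which is truthy, so the builtin leaks into the model.
    # The parameter `graph_id` must be referenced instead:
    return graph_id or str(uuid.uuid4())
```

With this fix, an explicit ID is passed through unchanged and an empty one falls back to a freshly generated UUID string, satisfying pydantic's `string_type` check.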
    
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check. The tool analyzes the failed checks and provides several kinds of feedback:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check is failed. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

    See more information about the checks tool in the docs.

    qodo-code-review[bot] avatar Jul 12 '24 10:07 qodo-code-review[bot]

    PR Description updated to latest commit (https://github.com/Significant-Gravitas/AutoGPT/commit/e7a8d6fbb519e0c64de6f91222bcce3f33c311a0)

    qodo-code-review[bot] avatar Jul 12 '24 15:07 qodo-code-review[bot]

    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Key issues to review

    Code Consistency
    The use of # TODO: Should this be scoped to graph version too? in line 214 suggests that there might be an incomplete implementation or a need for further decision-making regarding version scoping. This should be clarified or implemented before merging.

    Error Handling
    The method from_db in the Graph class does not handle the case where the graph input is None. This could lead to NoneType errors if not checked.

    API Design
    The method update_graph in AgentServer class does not allow specifying which version of the graph to update, potentially leading to unintended updates to the latest version only.
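
The None-handling concern could be addressed with a guard along these lines. This is a sketch, not the actual classmethod: `from_db` is shown as a free function over a row object with illustrative field names:

```python
from types import SimpleNamespace
from typing import Any, Optional


def from_db(graph: Optional[Any]) -> Optional[dict]:
    # Return None early instead of raising AttributeError on a missing
    # database row, per the reviewer's note about unguarded access.
    if graph is None:
        return None
    return {"graph_id": graph.graph_id, "version": graph.version}


row = SimpleNamespace(graph_id="g1", version=2)
```

Callers then check for `None` once at the boundary rather than handling `AttributeError` deeper in the stack.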

    qodo-code-review[bot] avatar Jul 12 '24 15:07 qodo-code-review[bot]

    PR Code Suggestions ✨

    Possible bug
    ✅ Add validation for graph existence and version before deactivation
    Suggestion Impact: The commit added a validation step to check that the graph and its version exist before deactivating it, ensuring the graph exists and the version is correct.

    code diff:

    +    async def set_graph_active_version(
    +        cls, graph_id: str, request_body: SetGraphActiveVersion
    +    ):
    +        new_active_version = request_body.active_graph_version
    +        if not await autogpt_server.data.graph.get_graph(graph_id, new_active_version):
    +            raise HTTPException(
    +                404, f"Graph #{graph_id} v{new_active_version} not found"
    +            )
    +        await autogpt_server.data.graph.set_graph_active_version(
    +            graph_id=graph_id, version=request_body.active_graph_version
    +        )
    

    Consider checking if graph_id and version are valid before attempting to deactivate
    a graph. This ensures that the graph exists and the version is correct, preventing
    potential errors or unintended behavior.

    rnd/autogpt_server/autogpt_server/server/server.py [317]

    -await autogpt_server.data.graph.deactivate_graph(
    -    graph_id=graph.graph_id, version=graph.version
    -)
    +if await autogpt_server.data.graph.get_graph(graph.graph_id, graph.version):
    +    await autogpt_server.data.graph.deactivate_graph(
    +        graph_id=graph.graph_id, version=graph.version
    +    )
    +else:
    +    raise ValueError(f"Graph {graph.graph_id} version {graph.version} not found.")
     
    
    • [ ] Apply this suggestion
    Suggestion importance[1-10]: 9

    Why: This suggestion adds a necessary validation step to ensure that the graph and its version exist before attempting to deactivate it, which can prevent potential errors and unintended behavior. This is a crucial improvement for robustness.

    Prevent attribute errors by checking if the graph object is not None before accessing its properties

    Validate the graph object's presence before accessing its version attribute to
    prevent attribute errors if graph is None.

    rnd/autogpt_server/autogpt_server/executor/manager.py [271]

    -version=graph.version,
    +version=graph.version if graph else None,
     
    
    • [ ] Apply this suggestion
    Suggestion importance[1-10]: 8

    Why: This suggestion prevents potential attribute errors by ensuring that the graph object is not None before accessing its version attribute, which is a good practice to avoid runtime exceptions.

    Robustness
    Add exception handling for retrieving the service client

    Consider handling exceptions for the get_service_client method to ensure robustness
    in case the service client cannot be retrieved.

    rnd/autogpt_server/test/executor/test_scheduler.py [17]

    -scheduler = get_service_client(ExecutionScheduler)
    +try:
    +    scheduler = get_service_client(ExecutionScheduler)
    +except Exception as e:
    +    logger.error(f"Failed to get ExecutionScheduler service client: {str(e)}")
    +    return
     
    
    • [ ] Apply this suggestion
    Suggestion importance[1-10]: 9

    Why: Adding exception handling for the get_service_client method is crucial for robustness, especially in scenarios where the service client might fail to be retrieved. This suggestion addresses a potential runtime issue effectively.

    Best practice
    ✅ Use a transaction for creating and deactivating graphs
    Suggestion Impact: The commit implemented the suggestion by wrapping the graph creation and deactivation operations in a transaction block to ensure atomicity and consistency.

    code diff:

    +        cls, graph_id: str, graph: autogpt_server.data.graph.Graph
    +    ) -> autogpt_server.data.graph.Graph:
    +        # Sanity check
    +        if graph.id and graph.id != graph_id:
    +            raise HTTPException(400, detail="Graph ID does not match ID in URI")
    +
    +        # Determine new version
    +        existing_versions = await autogpt_server.data.graph.get_graph_all_versions(
    +            graph_id
    +        )
    +        if not existing_versions:
    +            raise HTTPException(400, detail=f"Unknown graph ID '{graph_id}'")
    +        graph.version = max(g.version for g in existing_versions) + 1
    +
             if not graph.is_template:
                 graph.is_active = True
             else:
                 graph.is_active = False
     
    +        # Assign new UUIDs to all nodes and links
             id_map = {node.id: str(uuid.uuid4()) for node in graph.nodes}
    -
             for node in graph.nodes:
                 node.id = id_map[node.id]
    -
             for link in graph.links:
                 link.source_id = id_map[link.source_id]
                 link.sink_id = id_map[link.sink_id]
     
    -        return await autogpt_server.data.graph.create_graph(graph)
    +        new_graph_version = await autogpt_server.data.graph.create_graph(graph)
    +
    +        if new_graph_version.is_active:
    +            # Ensure new version is the only active version
    +            await autogpt_server.data.graph.set_graph_active_version(
    +                graph_id=graph_id, version=new_graph_version.version
    +            )
    +
    +        return new_graph_version
    

    To avoid potential race conditions or inconsistencies, consider using a transaction
    when creating and deactivating graphs. This ensures that both operations
    complete successfully or not at all.

    rnd/autogpt_server/autogpt_server/server/server.py [315-317]

    -updated_graph = await autogpt_server.data.graph.create_graph(graph)
    -await autogpt_server.data.graph.deactivate_graph(
    -    graph_id=graph.graph_id, version=graph.version
    -)
    +async with autogpt_server.data.db.transaction():
    +    updated_graph = await autogpt_server.data.graph.create_graph(graph)
    +    await autogpt_server.data.graph.deactivate_graph(
    +        graph_id=graph.graph_id, version=graph.version
    +    )
     
    
    • [ ] Apply this suggestion
    Suggestion importance[1-10]: 8

    Why: Using a transaction for creating and deactivating graphs ensures atomicity and consistency, which is a best practice to avoid race conditions and inconsistencies. This is an important improvement for data integrity.

    Possible issue
    Ensure atomic increment of graph version

    When updating the graph version, ensure that the version increment is atomic to
    avoid conflicts in concurrent operations. Consider using a database operation to
    atomically increment the version.

    rnd/autogpt_server/autogpt_server/data/graph.py [300]

    -graph.version += 1
    +graph.version = await autogpt_server.data.graph.increment_version(graph.graph_id)
     
    
    • [ ] Apply this suggestion
    Suggestion importance[1-10]: 8

    Why: Ensuring that the version increment is atomic is crucial for preventing conflicts in concurrent operations. This suggestion addresses a potential issue that could lead to data inconsistencies.

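
    One way to make the allocation atomic, sketched here with stdlib sqlite3 (the table and column names are illustrative, not the actual Prisma schema): read max(version)+1 and insert the new row inside a single transaction, so two concurrent writers cannot both observe the same maximum.

```python
import sqlite3


def allocate_next_version(conn: sqlite3.Connection, graph_id: str) -> int:
    # Read-then-write inside one transaction; `with conn` commits on
    # success and rolls back on error, so allocation is all-or-nothing.
    with conn:
        (new_version,) = conn.execute(
            "SELECT COALESCE(MAX(version), 0) + 1 FROM AgentGraph WHERE id = ?",
            (graph_id,),
        ).fetchone()
        conn.execute(
            "INSERT INTO AgentGraph (id, version) VALUES (?, ?)",
            (graph_id, new_version),
        )
    return new_version
```

    In Prisma the same effect would come from running both statements in one interactive transaction, as the "Use a transaction" suggestion above proposes.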

    qodo-code-review[bot] avatar Jul 12 '24 15:07 qodo-code-review[bot]