
[RFC] A fully decoupled and auto-scaled rollout engine using AWS Bedrock AgentCore Runtime

luyuzhe111 opened this issue 3 months ago · 1 comment

What does this PR do?

This PR implements a fully decoupled and auto-scaled rollout engine using AWS Bedrock AgentCore Runtime, making veRL highly agnostic to the diverse agentic use cases that often require custom scaffolding, multiple tools, and complex environments.

At a high level, we propose a design where developers run their whole agentic application with whatever customization they desire in a separate container managed by AgentCore on the cloud, instead of in the same environment as veRL on the training cluster. The design is illustrated by the following architectural diagram.

[Architecture diagram: AgentCore integration]

The agent application hosted on AgentCore Runtime communicates with veRL in two ways:

  • The agent invokes the proxy address (SGLang Router) in veRL to get responses from the model (served by multiple vLLM/SGLang servers), just as it would invoke the Bedrock/OpenAI/Anthropic APIs.
  • The agent sends the rollout and the reward (computed by a developer-implemented reward function) back to veRL for model updates.
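
The first call path can be sketched as follows. This is a minimal illustration, assuming the router exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the URL, model name, and helper function here are hypothetical, not verl's actual API:

```python
import json


def build_chat_request(router_url: str, model: str, messages: list) -> tuple:
    """Build an OpenAI-compatible chat-completion request aimed at the
    SGLang Router proxy inside veRL. The router address and model name
    are handed to the agent container when it is deployed."""
    url = f"{router_url.rstrip('/')}/v1/chat/completions"
    payload = {"model": model, "messages": messages, "temperature": 1.0}
    return url, json.dumps(payload).encode()


# The agent would POST this body exactly as it would to a hosted
# Bedrock/OpenAI/Anthropic endpoint (e.g. via urllib or the openai client).
url, body = build_chat_request(
    "http://router.internal:30000/",  # hypothetical proxy address
    "qwen2-7b",                       # hypothetical model name
    [{"role": "user", "content": "hello"}],
)
```

Because the endpoint is OpenAI-compatible, an existing agent app can usually be repointed at the trainer by changing only its base URL and model name.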

Essentially, veRL sends a prompt to the rollout engine powered by AgentCore and gets back a rollout with its corresponding reward. The entire rollout process (tool use, environment interaction, etc.) happens in the cloud. This means developers don't have to migrate whatever agent application they've built into veRL to start training, and veRL doesn't have to anticipate every kind of agentic use case in its design.
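
The prompt-in, rollout-plus-reward-out contract can be sketched with a small data shape. The field names below are illustrative assumptions, not verl's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class RolloutResult:
    """Hypothetical payload an agent app returns to veRL for one prompt.

    Everything that produced the trajectory (tools, environments,
    scaffolding) stays inside the cloud container; veRL only consumes
    the trajectory and the scalar reward."""
    prompt: str
    messages: list = field(default_factory=list)  # full multi-turn trajectory, tool calls included
    reward: float = 0.0                           # computed by the developer-defined reward function


def score_rollout(result: RolloutResult) -> float:
    # The trainer treats the reward as an opaque scalar.
    return result.reward


r = RolloutResult(
    prompt="2+2?",
    messages=[{"role": "assistant", "content": "4"}],
    reward=1.0,
)
```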

In addition to simplifying the developer experience and the veRL architecture, AgentCore Runtime is itself well suited to generating rollouts. It will

  • create a separate sandboxed environment for each request, and
  • provide auto scaling, so that one can submit a burst of requests without managing any infrastructure.

AgentCore Runtime was originally designed as a deployment service for agent applications; our design repurposes it to generate rollouts scalably for RL training. We were also glad to learn recently that Cursor's Composer training adopts a similar design, per the Ray Summit talk from @srush, in which they leveraged Cursor's cloud agent to generate rollouts for large-scale RL training.

We think the solution in this PR can benefit both research projects and production scenarios. Under this paradigm, researchers and developers can focus on building their agentic applications with arbitrary frameworks, tools, and environments, whether to establish a baseline or to create a deployable solution. Once they have a working agent and are ready for training, all they need to do on the veRL side is provide a few more configs (container URI, S3 bucket, etc.). They will still need to return the rollout and define the reward in their agent app, but we will soon release a sample repo with various agent examples to demonstrate how straightforward this process is. And when training is done, the agent can be deployed with the exact harness and setup used in the app, so there is no mismatch between the training and inference stages.

Co-authors of this PR: @luyuzhe111, @lyzustc, @hellodanylo.

Test

Unit tests are implemented in tests/experimental/agentcore_loop/test_basic_agentcore_loop.py. E2E training was tested with GRPO, using vLLM as the inference engine.

API and Usage Example

Additional config args to the training script for any agent:

```bash
# Subnets and security groups refer to the training cluster's VPC.
actor_rollout_ref.rollout.agentcore.agent_name=xxx \
actor_rollout_ref.rollout.agentcore.subnets='["subnet-xxx"]' \
actor_rollout_ref.rollout.agentcore.security_groups='["sg-xxx","sg-xxx"]' \
actor_rollout_ref.rollout.agentcore.container_uri=xxx.dkr.ecr.xxx.amazonaws.com/xxx:tag \
actor_rollout_ref.rollout.agentcore.role_arn=xxx \
actor_rollout_ref.rollout.agentcore.s3_bucket=xxx
```

We will release concrete training examples for various agentic use cases soon!

Design & Code Changes

We implement the proposed rollout engine by adding a separate AgentCoreLoopManager in verl/experimental/agent_loop/agentcore_loop.py. Almost all code changes reside in this file.

  • AgentCoreLoopManager initializes the inference servers in the same way as AgentLoopManager and registers them with the SGLang Router.
  • AgentCoreLoopManager passes the SGLang router address and model name to AgentCore Runtime when the container is first deployed, so that the agent knows where to get model response.
  • When a rollout batch arrives, the RequestDispatcher in AgentCoreLoopManager submits all requests to the AgentCore Runtime endpoint asynchronously.
  • Once all requests have been submitted, RolloutBuffer polls SQS for rollout-completion messages and downloads finished rollouts from S3. Saving the rollout to S3 and notifying SQS is handled on the agent-app side within AgentCore. We will soon open-source a wrapper for agent apps to demonstrate that developers won't have to deal with these services at all.
  • When all rollouts have been collected or the time limit is exceeded, AgentCoreLoopManager returns the available rollouts and terminates all sessions. The current design follows the synchronous RL paradigm, but we plan to extend it to async RL in the near future, as AgentCore Runtime is naturally compatible with that setting.
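
The dispatch-then-collect-with-deadline pattern described above can be sketched with plain asyncio. This is a simplified stand-in, not the PR's implementation: `invoke_agent` replaces the real AgentCore invocation plus SQS/S3 completion path, and all names are hypothetical:

```python
import asyncio


async def invoke_agent(prompt: str) -> dict:
    """Stand-in for one AgentCore Runtime invocation. In the real flow,
    completion is signaled via an SQS message pointing at a rollout
    object in S3; here we just return a fake rollout after a delay."""
    await asyncio.sleep(0.01)
    return {"prompt": prompt, "reward": 1.0}


async def dispatch_and_collect(prompts: list, time_limit: float) -> list:
    """Submit all requests concurrently, then return whatever rollouts
    completed before the deadline (synchronous-RL style)."""
    tasks = [asyncio.create_task(invoke_agent(p)) for p in prompts]
    done, pending = await asyncio.wait(tasks, timeout=time_limit)
    for t in pending:
        t.cancel()  # terminate sessions that missed the deadline
    return [t.result() for t in done]


rollouts = asyncio.run(dispatch_and_collect(["a", "b", "c"], time_limit=1.0))
```

Returning only the completed subset at the deadline keeps a few straggler sessions from stalling the whole training step.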


luyuzhe111 · Nov 20 '25 22:11
