feat: add CAMEL abstraction for future support of new API style
CAMEL Abstractions for OpenAI Responses API: Phase 0 & 1
Summary
This RFC proposes a model-agnostic messaging and response abstraction that enables CAMEL to support the OpenAI Responses API while preserving full backward compatibility with the existing Chat Completions plumbing. (issue #3028)
Phase 0 catalogs the current dependency surface. Phase 1 introduces new abstractions and a Chat Completions adapter, delivering a pure refactor with zero functional differences.
Motivation
The codebase directly consumes ChatCompletionMessageParam as request
messages and expects ChatCompletion responses (e.g., in ChatAgent).
The OpenAI Responses API uses segmented inputs and a Response object with
different streaming and parsing semantics. A direct swap would break agents,
memories, token counting, and tool handling.
We therefore introduce CAMEL-native types that can be adapted both to legacy Chat Completions and to Responses, enabling a staged migration.
Goals
- Keep all existing entry points on Chat Completions in Phase 1.
- Provide model-agnostic `CamelMessage` and `CamelModelResponse` types.
- Add an adapter to map Chat Completions <-> CAMEL abstractions.
- Preserve all behaviours and tests (no functional diffs).
- Lay groundwork for Phase 2/3 (backend and agent migration, then Responses backend).
Non-Goals (Phase 1)
- No migration of backends or agents to emit/consume the new types by default.
- No implementation of Responses streaming, structured parsing, or reasoning traces.
Design
New Modules
- `camel/core/messages.py`
  - `CamelContentPart`: minimal content fragment (type: `text` | `image_url`).
  - `CamelMessage`: model-agnostic message with role, content parts, optional name/tool_call_id.
  - Converters:
    - `openai_messages_to_camel(List[OpenAIMessage]) -> List[CamelMessage]`
    - `camel_messages_to_openai(List[CamelMessage]) -> List[OpenAIMessage]`
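To make the proposed surface concrete, here is a minimal sketch of these types and converters; field shapes beyond those named in the RFC (the dict layout, defaults) are illustrative assumptions, not the final implementation:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Literal, Optional

@dataclass
class CamelContentPart:
    # Minimal content fragment; Phase 1 only needs text and image_url.
    type: Literal["text", "image_url"]
    text: Optional[str] = None
    image_url: Optional[str] = None

@dataclass
class CamelMessage:
    # Model-agnostic message: role plus content parts, with the optional
    # OpenAI-specific fields carried through for lossless round-trips.
    role: str
    content: List[CamelContentPart] = field(default_factory=list)
    name: Optional[str] = None
    tool_call_id: Optional[str] = None

def openai_messages_to_camel(messages: List[Dict[str, Any]]) -> List[CamelMessage]:
    """Map Chat Completions message dicts to CamelMessage (text-only sketch)."""
    result = []
    for m in messages:
        content = m.get("content")
        parts = [CamelContentPart(type="text", text=content)] if isinstance(content, str) else []
        result.append(CamelMessage(role=m["role"], content=parts,
                                   name=m.get("name"), tool_call_id=m.get("tool_call_id")))
    return result

def camel_messages_to_openai(messages: List[CamelMessage]) -> List[Dict[str, Any]]:
    """Inverse mapping; omits unset optional fields so round-trips are exact."""
    result: List[Dict[str, Any]] = []
    for m in messages:
        d: Dict[str, Any] = {
            "role": m.role,
            "content": "".join(p.text or "" for p in m.content if p.type == "text"),
        }
        if m.name is not None:
            d["name"] = m.name
        if m.tool_call_id is not None:
            d["tool_call_id"] = m.tool_call_id
        result.append(d)
    return result
```

Keeping the converters lossless in both directions is what lets Phase 1 stay a pure refactor.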
- `camel/responses/model_response.py`
  - `CamelToolCall`: normalized tool call (id, name, args).
  - `CamelUsage`: normalized usage with `raw` attached.
  - `CamelModelResponse`: id, model, created, `output_messages`, `tool_call_requests`, `finish_reasons`, `usage`, and `raw` (provider response).
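A corresponding sketch of the response-side types (the defaults and which fields are optional are assumptions; `output_messages` would hold `CamelMessage` instances in the real code):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class CamelToolCall:
    # Normalized tool call: stable id, tool name, parsed arguments.
    id: str
    name: str
    args: Dict[str, Any]

@dataclass
class CamelUsage:
    # Normalized token usage; `raw` keeps the untouched provider object.
    prompt_tokens: Optional[int] = None
    completion_tokens: Optional[int] = None
    total_tokens: Optional[int] = None
    raw: Any = None

@dataclass
class CamelModelResponse:
    # Provider-agnostic response; `raw` is the escape hatch back to the
    # original ChatCompletion (or, later, Responses) object.
    id: str
    model: str
    created: Optional[int] = None
    output_messages: List[Any] = field(default_factory=list)  # CamelMessage in real code
    tool_call_requests: List[CamelToolCall] = field(default_factory=list)
    finish_reasons: List[str] = field(default_factory=list)
    usage: Optional[CamelUsage] = None
    raw: Any = None
```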
- `camel/responses/adapters/chat_completions.py`
  - `adapt_chat_to_camel_response(ChatCompletion) -> CamelModelResponse`.
  - Future hooks for streaming/structured parsing (not implemented in Phase 1).
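The adapter itself is mechanical field mapping. A standalone sketch, using a plain dict in place of `CamelModelResponse` and a `SimpleNamespace` stub in place of a real `ChatCompletion` (the actual adapter would consume the openai model class):

```python
import json
from types import SimpleNamespace
from typing import Any, Dict

def adapt_chat_to_camel_response(chat: Any) -> Dict[str, Any]:
    """Map a ChatCompletion-shaped object onto CamelModelResponse fields."""
    tool_calls = []
    for choice in chat.choices:
        for tc in getattr(choice.message, "tool_calls", None) or []:
            tool_calls.append({
                "id": tc.id,
                "name": tc.function.name,
                "args": json.loads(tc.function.arguments or "{}"),
            })
    return {
        "id": chat.id,
        "model": chat.model,
        "created": chat.created,
        "output_messages": [
            {"role": c.message.role, "content": c.message.content}
            for c in chat.choices
        ],
        "tool_call_requests": tool_calls,
        "finish_reasons": [c.finish_reason for c in chat.choices],
        "usage": getattr(chat, "usage", None),
        "raw": chat,  # keep the provider object for escape hatches
    }

# Stub standing in for a minimal ChatCompletion (the unit tests described
# below build a real one without validation instead):
chat = SimpleNamespace(
    id="chatcmpl-1", model="gpt-4o-mini", created=0, usage=None,
    choices=[SimpleNamespace(
        finish_reason="stop",
        message=SimpleNamespace(role="assistant", content="hello", tool_calls=None),
    )],
)
camel_resp = adapt_chat_to_camel_response(chat)
```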
Type Relaxation
`camel/agents/_types.py`: `ModelResponse.response` is relaxed to `Any` to decouple
agent plumbing from provider schemas. Existing tests pass `MagicMock` here, and
the change avoids tight coupling when adapters are introduced.
Compatibility
- Phase 1 preserves behaviour: agents still receive `ChatCompletion` from the model backend; the adapter is exercised via unit tests and can be opted into in later phases.
- No changes to `BaseMessage` or memory/token APIs in this phase.
Testing
`test/responses/test_chat_adapter.py` builds a minimal `ChatCompletion` via `construct()` and validates:
- Text content mapping to `CamelModelResponse.output_messages`.
- Tool call mapping to `CamelToolCall`.
- Finish reasons and `raw` attachment.
Alternatives Considered
- Migrating agents directly to Responses in one step ā rejected due to scope and risk; the adapter path enables incremental, testable rollout.
Rollout Plan
- Phase 0 (this RFC): agreement on types, locations, adapter surface.
- Phase 1 (this PR): land abstractions, Chat adapter, unit tests, type relaxation.
- Phase 2: retrofit OpenAI backends and agents to consume/emit CAMEL types, adjust streaming/tool-calls to operate over `CamelModelResponse`, and migrate token counting to work from abstract messages.
- Phase 3: add `OpenAIResponsesModel` implementing `client.responses.{create,parse,stream}` with converters from `CamelMessage` segments and back into `CamelModelResponse`.
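For Phase 3, the forward converter is the novel part: the Responses API takes segmented input items rather than flat message dicts. A hedged sketch of that mapping, assuming the `input_text`/`input_image` part types from the openai SDK (part names should be verified against the installed SDK version):

```python
from typing import Any, Dict, List

def camel_messages_to_responses_input(messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Map CamelMessage-shaped dicts to Responses API input items.

    Assumes the segmented input format where content is a list of typed
    parts such as {"type": "input_text", ...}.
    """
    items = []
    for m in messages:
        parts = []
        for p in m.get("content", []):
            if p.get("type") == "text":
                parts.append({"type": "input_text", "text": p.get("text", "")})
            elif p.get("type") == "image_url":
                parts.append({"type": "input_image", "image_url": p.get("image_url")})
        items.append({"role": m["role"], "content": parts})
    return items
```

The reverse direction (Response output items back into `CamelModelResponse`) would follow the same pattern as the Chat Completions adapter, iterating over typed output items instead of choices.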
Future Work
- Extend `CamelContentPart` to include audio/video and tool fragments.
- Introduce unified streaming interfaces and structured parsing adapters.
- Reasoning trace capture and parallel tool call normalization for Responses.
Hi guys, please share any suggestions or comments on the design of the CAMEL adapter layer for compatibility with the current ChatCompletion style and future extensions.
thanks @MuggleJinx for the RFC,
Currently in CAMEL all our messages are standardized and processed as `ChatCompletion`. Given this, our existing `ChatCompletion` format already seems to serve the function of the `CamelMessage` you're proposing. Is it necessary to introduce this new `CamelMessage` layer?
If there are interface alignment challenges with directly adapting the `Response` object, wouldn't the most straightforward approach be to add a conversion layer within `OpenAIModel`? This layer could simply transform the information from the `Response` interface back into our existing `ChatCompletion` format.
also agree with we can get response and directly transfer to ChatAgentResponse
Hi @Wendong-Fan and @fengju0213. Sorry for the late reply. On adding a conversion layer in `OpenAIModel`: the problem is that the Responses API is much richer than `ChatCompletion`, so information would be lost if we converted the Responses-API style back into `ChatCompletion`. That would make it less worthwhile to implement in the first place, so I think we need to adapt to it globally. That's why I think it's pretty necessary to add a new layer, i.e. `CamelMessage`, to manage the complexity inside the class.
So I will continue this way and try to extend it without breaking backward compatibility.
No worries! Maybe we don't need to convert to ChatCompletion. In `ChatAgent`, it already returns a `ChatAgentResponse`, so maybe we can just extend `ChatAgentResponse` to support the Responses API. Does that make sense?
Hi Tao, no need to touch ChatAgentResponse for now. The agent API stays the same. Just need to update OpenAI model's API, that's it.
I have made some updates; maybe you can review them now if you have time? @Wendong-Fan @fengju0213
sure, will review asap @MuggleJinx
Thanks @hesamsheikh for the review! I have added the missing function and removed the AI-generated comments.
aside from some minor issues i pointed out in the comments, everything else lgtm. btw, have you thought about @fengju0213's suggestion of unifying CamelModelResponse and ModelResponse?
Thanks @hesamsheikh! On unifying CamelModelResponse and ModelResponse: I'd prefer to keep them separate for now. `CamelModelResponse` is the provider-agnostic DTO from adapters, while `ModelResponse` is the agent-facing wrapper with extra session bookkeeping. Merging them would either leak provider fields into agents or force adapters to backfill agent metadata. Once we finish rolling adapters through the stack we can revisit a single type, but in this PR I'd like to keep the boundary clear.