[Proposal] Encourage the use of a single industry-wide standard to describe all tools in 12‑Factor Agents #12FactorTools
[!TIP] tldr²: Mapping the `--help` options of some CLI tool? Mapping some library API so it can be used as a tool? What a waste of everyone's time: tool use is just an annotated function name with an annotated, typed list of params. Once we build a few mappers, tools will be pluggable into all AI frameworks and apps, like USB sticks.
[!TIP] This proposal is part of a series. Check out the rest here.
First off, thanks for the excellent work on the 12-Factor Agents guide! It provides a really valuable set of principles for building robust and maintainable agentic systems.
I wanted to raise a point for discussion regarding the standardization of tools.
TL;DR
Even if the tools are purely internal, wrapping them in some standard (like MCP) offers significant design advantages:
- **Transport‑agnostic integration**: MCP typically uses HTTP/SSE or stdio for separate‑process communication, but we can also support direct in‑process Python function calls, providing zero‑overhead, low‑latency integration without the need for external servers (see the sketch after this list).
- **Hot‑swappable deployments**: Seamlessly switch between in‑process (inner) and remote (outer) tool implementations without modifying prompts or model code.
- **Framework‑agnostic agents**: Easily replace or upgrade your agent core (e.g., switching LLM backends or orchestration frameworks) without rewriting tool interfaces.
- **Cross‑project reuse**: Define tools once within an MCP server and reuse them across multiple applications and teams, avoiding redundant logic.
- **Effortless externalization at scale**: As demand grows, migrate tools from local execution to dedicated services or clusters without altering anything else (prompts, client code, or model configurations).
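To make the in‑process vs. remote point concrete, here is a minimal sketch using the MCP Python SDK's `FastMCP` helper (the tool name `lookup_order` and its body are hypothetical, and the SDK surface may differ between versions): the same decorated function can be called directly in‑process or served to external clients over stdio, without touching its description.

```python
# Minimal sketch, assuming the MCP Python SDK's FastMCP API.
# The same tool definition works in-process (call lookup_order directly)
# or as a separate-process MCP server (mcp.run() over stdio).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")

@mcp.tool()
def lookup_order(order_id: str) -> dict:
    """Fetch an order record by its ID."""
    # Hypothetical internal logic; replace with your real data source.
    return {"order_id": order_id, "status": "shipped"}

if __name__ == "__main__":
    mcp.run()  # defaults to stdio; swap transports without touching the tool
```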
Just adding a cross-link for context: While this issue focuses on the interface/protocol for tools (suggesting MCP standardization), I've also opened Issue #21 which advocates for tools returning structured data rather than pre-formatted strings.
The two ideas are distinct (orthogonal) but highly complementary. Standardized protocols like MCP handle structured data well, and tools that return structured data are more robust and flexible, fitting naturally within such a protocol. Both contribute to overall robust tool design.
I'll review this - I have to think on it a bit, because a "one-size-fits-all" way of describing and loading tools may be contrary to the core ethos of 12-factor agents, which is more around "take the flexibility you need to get maximum performance".
I could see us doing something like "all function names should be `*_intent`", but if `function` or `tool_call_name` serves you better / gets better results, then you should use that instead!
@dexhorthy Thanks for the feedback! Totally agree flexibility for performance is paramount.
Maybe I wasn't clear enough – the proposal isn't about forcing how tools are built or run (like mandating MCP or specific protocols). It's about standardizing how tools describe their interface to the agent.
Think of it like standardizing the shape of the electrical plug (the description format), not the appliance itself (e.g., whether function names get the suffix `_intent`). This way, any appliance can plug into any standardized socket.
The Goal: A single, consistent format ("Standard Y") the agent reads to understand:
- `tool_name` (the identifier)
- `description` (what it does)
- `parameters` (inputs needed: name, type, description)
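For illustration, a hypothetical "Standard Y" description could look like the following (the `functions` nesting matches the customization example further down; all field names are illustrative, not an existing spec):

```python
# Hypothetical "Standard Y" tool description; illustrative only.
standard_description_Y = {
    "functions": {
        "calculate": {
            "name": "calculate",
            "description": "Evaluate a basic arithmetic expression.",
            "parameters": [
                {
                    "name": "expression",
                    "type": "string",
                    "description": "The expression to evaluate, e.g. '2 + 2'",
                },
            ],
        },
    },
}
```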
How This Enables Flexibility & Simplifies Integration:
Imagine you're building MyAgentFramework. If we have a standard description format (Standard Y), integration becomes much simpler:
- **Convert**: Any tool source (annotated Python function, OpenAPI spec, etc.) can be converted into the standard format:

  ```python
  # Conceptual: convert from any source to the standard description
  standard_description_Y = convert_anything_to_standard_Y(your_tool_source)
  ```

- **Register**: Your framework only needs one way to understand tool descriptions:

  ```python
  # The framework only needs to understand the single standard format Y
  agent = MyAgentFramework()
  agent.register_tool(standard_description_Y)
  ```
This creates a single "socket" for tools. Frameworks don't need custom code for every possible tool definition style (OpenAPI, Python docstrings, etc.), dramatically reducing the M*N integration complexity to M+N.
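As a sketch of what the convert step might look like for one source type, here is a hypothetical `convert_anything_to_standard_Y` that introspects an annotated Python function (both the function name and the output shape are this proposal's placeholders, not a real library API):

```python
# Hypothetical converter sketch: build a "Standard Y" description from an
# annotated Python function via introspection. Not an existing API.
import inspect

def convert_anything_to_standard_Y(fn):
    sig = inspect.signature(fn)
    params = [
        {
            "name": name,
            "type": getattr(p.annotation, "__name__", str(p.annotation)),
            "description": "",  # could be filled in from docstring parsing
        }
        for name, p in sig.parameters.items()
    ]
    return {
        "functions": {
            fn.__name__: {
                "name": fn.__name__,
                "description": inspect.getdoc(fn) or "",
                "parameters": params,
            }
        }
    }
```

An OpenAPI adapter would emit the same output shape from a spec file, which is exactly how the M*N integration matrix collapses to M+N: M converters, N frameworks, one format in between.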
Crucially, this doesn't sacrifice performance tuning:
You still control the content within the standard description. If a specific `tool_name` works better with the LLM, you can set it during conversion or afterward:
```python
standard_description_Y = convert_anything_to_standard_Y(your_tool_source)
# Customize content for better LLM performance *within* the standard structure
standard_description_Y["functions"]["calculate"]["name"] = "calculate_intent"  # preferred name
agent.register_tool(standard_description_Y)
```
So, standardizing the description format actually boosts flexibility by making tools and frameworks interchangeable, while still allowing optimization of the descriptive content itself.
Hope this revised explanation, incorporating the practical example, makes the intent clearer! Happy to discuss further.