# Proposal: Enabling MCP Toolbox for Databases as an MCP Client
## Prerequisites

- [x] Search the current open issues

## What are you trying to do that currently feels hard or impossible?

### Overview
Currently, the MCP Toolbox for Databases acts solely as an MCP server, exposing tools to clients (such as IDEs or agent frameworks) via the MCP protocol. To maximize flexibility and composability, we propose extending the Toolbox to also function as an MCP client. This will allow it to dynamically connect to other MCP servers, aggregate their tools, and expose them as a unified toolset to its own clients. This dual role enables federated tool management, cross-organization sharing, and dynamic tool composition.
## Suggested Solution(s)

### Key Features

1. **MCP Client Mode**
   - **Configurable MCP Servers:** Administrators can specify a list of remote MCP servers (with endpoints and credentials) that the Toolbox should connect to as a client.
   - **Dynamic Tool Loading:** The Toolbox will periodically (or on demand) fetch tool definitions from these remote MCP servers and merge them into its own tool registry.
   - **Conflict Resolution:** If tool names or IDs collide, configurable strategies (e.g., namespacing, override, or ignore) can be applied.
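As a rough illustration of the configurable strategies mentioned above, here is a minimal Go sketch of merging a remote server's tool names into a registry. The `Strategy` names and the `map[string]string` registry shape are illustrative assumptions, not taken from the Toolbox codebase:

```go
package main

import "fmt"

// Strategy controls how a remote tool whose name collides with an
// existing registry entry is handled. These names are hypothetical.
type Strategy int

const (
	Namespace Strategy = iota // register the remote tool as "<server>/<tool>"
	Override                  // the remote definition replaces the existing one
	Ignore                    // keep the existing definition, drop the remote one
)

// Merge folds tool names fetched from one remote server into the
// registry, which maps tool name -> owning source.
func Merge(registry map[string]string, server string, tools []string, s Strategy) {
	for _, t := range tools {
		if _, exists := registry[t]; !exists {
			registry[t] = server
			continue
		}
		switch s {
		case Namespace:
			registry[server+"/"+t] = server
		case Override:
			registry[t] = server
		case Ignore:
			// no-op: the existing entry wins
		}
	}
}

func main() {
	registry := map[string]string{"execute_sql": "local"}
	Merge(registry, "mcp-a", []string{"execute_sql", "list_tables"}, Namespace)
	fmt.Println(len(registry)) // local tool kept, colliding remote tool namespaced
}
```
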
### Sequence Diagram

Below is a sequence diagram illustrating the interaction when the Toolbox acts as both MCP client and server.
```mermaid
sequenceDiagram
    participant Admin
    participant Toolbox
    participant MCP_A
    participant MCP_B
    participant Client
    %% Admin configures Toolbox as MCP client
    Admin->>Toolbox: Configure remote MCP servers (MCP_A, MCP_B)
    %% Toolbox fetches tool lists from remote MCP servers
    Toolbox->>MCP_A: Fetch tool list
    Toolbox->>MCP_B: Fetch tool list
    MCP_A-->>Toolbox: Return tool definitions (A1, A2)
    MCP_B-->>Toolbox: Return tool definitions (B1, B2)
    %% Toolbox merges local and remote tools
    Toolbox->>Toolbox: Merge local and remote tools
    %% Client requests available tools from Toolbox
    Client->>Toolbox: List available tools
    Toolbox-->>Client: Return unified tool list (Local, A1, A2, B1, B2)
    %% Client invokes a remote tool via Toolbox
    Client->>Toolbox: Invoke remote tool (A1)
    Toolbox->>MCP_A: Proxy tool invocation (A1)
    MCP_A-->>Toolbox: Return result
    Toolbox-->>Client: Return result
```
### Example Configuration

```yaml
# tools.yaml
...
mcp_clients:
  - name: remote-mcp-a
    endpoint: https://mcp-a.example.com
    api_key: "secret"
    toolset: "default"
  - name: remote-mcp-b
    endpoint: https://mcp-b.example.com
    api_key: "secret"
    toolset: "analytics"
...
```
## Benefits
- Federation: Aggregate tools from multiple MCP servers, enabling cross-team or cross-org collaboration.
- Dynamic Composition: Add or remove tool sources without redeploying clients.
- Centralized Control: Administrators can curate which remote tools are exposed to local clients.
- Scalability: Enables hierarchical or mesh-like MCP topologies for large organizations.
## Implementation Considerations
- Caching: Optionally cache remote tool definitions for performance and resilience.
- Tool Invocation Routing: Clearly distinguish between local and remote tool invocations for logging and error handling.
- Security: Ensure secure communication (TLS, authentication) between Toolbox and remote MCP servers.
- Extensibility: Design the MCP client interface to support future protocols or authentication schemes.
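To make the "Tool Invocation Routing" consideration above concrete, here is a minimal Go sketch that dispatches an invocation to either a local handler or the remote MCP server that owns the tool, logging which path was taken. The routing table, stub functions, and return shapes are hypothetical, not Toolbox internals:

```go
package main

import (
	"fmt"
	"log"
)

// invoke dispatches a tool call either locally or to the remote MCP
// server that owns it. routes maps tool name -> owning source, where
// an empty string marks a local tool.
func invoke(routes map[string]string, tool string) (string, error) {
	source, ok := routes[tool]
	if !ok {
		return "", fmt.Errorf("unknown tool %q", tool)
	}
	if source == "" {
		log.Printf("invoking local tool %q", tool)
		return invokeLocal(tool)
	}
	log.Printf("proxying tool %q to remote MCP server %q", tool, source)
	return invokeRemote(source, tool)
}

// Stubs standing in for the real local dispatch and MCP client call.
func invokeLocal(tool string) (string, error)       { return "local:" + tool, nil }
func invokeRemote(src, tool string) (string, error) { return src + ":" + tool, nil }

func main() {
	routes := map[string]string{"execute_sql": "", "A1": "mcp-a"}
	out, _ := invoke(routes, "A1")
	fmt.Println(out) // mcp-a:A1
}
```

Attributing each log line to the local or remote path keeps error handling unambiguous when a remote server misbehaves.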
I can contribute to this feature; I did similar work in https://github.com/GoogleCloudPlatform/kubectl-ai/pull/291
Hi @tuannvm Thanks for submitting this proposal!! :) Let's try to discuss what this might look like~ Here are some initial thoughts --

- We are thinking of adding the `mcp-server` as a new source instead. For example:

  ```yaml
  sources:
    remote-mcp-a:
      kind: mcp-server
      endpoint: https://mcp-a.example.com
      version: "2024-11-05"
      apiKey: "secret"
  ```

  I see that you proposed adding a new `mcp_clients` section. Is there a specific reason for that design?

- What do you think about supporting all the different types of transport protocol (e.g. stdio, HTTP with SSE, streamable HTTP)? I'm guessing that the stdio protocol might be a little tricky (I haven't looked deeply into the Go SDK yet, but I'm guessing there might be something we can utilize to make things simpler); wondering what your thoughts are on this. If we support multiple transports, we could also have different source `kind`s for each transport protocol (e.g. `mcp-server-stdio` / `mcp-server-httpsse` / `mcp-server-streamablehttp`).

- For conflict resolution, it might make sense to give Toolbox's native tools precedence for the MVP development of this. The only use case I can think of that needs configurable strategies is if users would like toolsets that use different tools (one toolset with Toolbox's native tool and another toolset with the 3p server's tool). WDYT?

- Regarding tools, should users tell us which tools they want? Maybe either by indicating the exact tool name or providing a regex (e.g. a name prefix) for that tool.
Thanks @Yuan325 for the response.

To answer your questions:

> We are thinking of adding the `mcp-server` as a new source instead. For example:

Yes, in that mode, genai-toolbox has to act as an MCP client; hence my proposal. Is my understanding correct?

Correct me if I'm wrong: genai-toolbox already supports MCP server mode. See: https://github.com/googleapis/genai-toolbox/blob/main/internal/server/mcp.go

> What do you think about supporting all the different types of transport protocol?

We should support all available transport protocols to make it as flexible as possible for users. See: https://modelcontextprotocol.io/specification/2025-06-18/basic/transports

> For conflict resolution, it might make sense to give Toolbox's native tools precedence for the MVP development of this. The only use case I can think of that needs configurable strategies is if users would like toolsets that use different tools (one toolset with Toolbox's native tool and another toolset with the 3p server's tool). WDYT?

I agree. A smaller scope would allow us to move faster, make the feature available earlier, and serve as a foundation for subsequent improvements.

> Regarding tools, should users tell us which tools they want? Maybe either by indicating the exact tool name or providing a regex (e.g. a name prefix) for that tool.

I would say the MCP tools should take into account both the tool itself and its original type (Postgres, MySQL, etc.).
> > We are thinking of adding the `mcp-server` as a new source instead. For example:
>
> Yes, in that mode, genai-toolbox has to act as an MCP client; hence my proposal. Is my understanding correct?

I might have misunderstood from your example configuration.

```yaml
mcp_clients:
  - name: remote-mcp-a
    endpoint: https://mcp-a.example.com
    api_key: "secret"
    toolset: "default"
```

Instead of creating a new `mcp_clients` type, we're thinking of including it as a new source with the `mcp-server` kind. We're also open to having multiple kinds, e.g. `mcp-server-stdio` / `mcp-server-http-sse` / `mcp-server-shttp`.

> Correct me if I'm wrong: genai-toolbox already supports MCP server mode. See: https://github.com/googleapis/genai-toolbox/blob/main/internal/server/mcp.go

Yes, genai-toolbox works as an MCP server :) and we support all transport protocols -- stdio / HTTP with SSE / Streamable HTTP (non-SSE).

> We should support all available transport protocols to make it as flexible as possible for users. See: https://modelcontextprotocol.io/specification/2025-06-18/basic/transports

Agree 👍 If you have an idea already, how are you thinking of implementing the stdio transport (Toolbox <-> 3P server)? And what about cases when there is more than one 3P server communicating via stdio?

> I would say the MCP tools should take into account both the tool itself and its original type (Postgres, MySQL, etc.).

My question might have been a little confusing. Let me clarify: if a user connects Toolbox (as a client) to a server that provides a list of tools, there are multiple ways we can add tools from that server into Toolbox: (1) add all tools available in the server into Toolbox by default; (2) set a toolset name when defining the source and add all the tools into that toolset (similar to the original proposal); (3) allow the user to indicate which tools to import (users can provide exact tool names or name prefixes), so we don't import all tools.
Hey @Yuan325,

I'd be happy to help contribute if this is in scope. I helped architect and build out something similar in the LiteLLM proxy. The goal was to have a registry of "mcp servers" like you are suggesting.

You can find the supported transport types and auth types here. Likewise, the server definition looked like this:

```python
class MCPServer(BaseModel):
    server_id: str
    name: str
    url: str
    transport: MCPTransportType
    spec_version: MCPSpecVersionType
    auth_type: Optional[MCPAuthType] = None
    authentication_token: Optional[str] = None
```

Maybe what @tuannvm is trying to say is that there needs to be a client like the one I created here that also supports the various transports from the server definition ☝.
@wagnerjt We welcome open source contributors for this feature once an implementation is finalized. Let's see if we hear back from @tuannvm :)
MCP just released their official go-sdk. We might be able to utilize this instead of implementing our own client.
@Yuan325 @wagnerjt per my understanding of the discussion, the next action items would be:

1. Configurable connection to multiple remote MCP servers
2. Dynamic fetching and merging of remote tool definitions
3. Basic conflict resolution for tool naming collisions
4. Secure communication and authentication mechanisms

Is my understanding correct?
All the tools identified in (1), (2), and (3) need to be discovered and communicated to the user in a harmonized manner.
@wagnerjt if you're interested, please go ahead; if you need to reference a Go implementation, you can check kubectl-ai.
@Yuan325

> MCP just released their official go-sdk. We might be able to utilize this instead of implementing our own client.

I'm all for this, but just FYI, it looks like the team won't have a stable SDK until around August. I've been using the mark3labs SDK. Should we go this route, using only the client, just to kick this off?

**Supported transports**

SSE is going to be deprecated in the future. Would it be worth implementing? Likewise with stdio: it is really tough to support once the executable is in a container.

**Conflict resolution for tool collisions**

This should be a global setting since it could impact any source type.

**Authentication mechanisms / connecting to remote MCP servers**

This is more challenging since each MCP server can define its own auth scheme and can reject the `initialize` call if the correct auth is not in place. In my opinion, we should create the new sources:

- For static keys like `PAT` or `API_KEY`
- OAuth with Dynamic Client Registration
- OAuth with a provided known public `ClientId` and optionally `ClientSecret`

This supports the auth spec.

For implementation, we could break the OAuth flow into multiple issues, since it is more involved, and focus on the first static-keys source.

One other consideration is whether we want to support clients that bring their own OAuth token generated out of band (generating a token externally and sending it through).
@tuannvm

> per my understanding of the discussion, the next action items would be:
>
> 1. Configurable connection to multiple remote MCP servers
> 2. Dynamic fetching and merging of remote tool definitions
> 3. Basic conflict resolution for tool naming collisions
> 4. Secure communication and authentication mechanisms
>
> Is my understanding correct?

Yes on (1), (2), and (3). For (4), the Toolbox server doesn't support MCP OAuth so far; do you think it's worth adding for the MVP of this feature?

@wagnerjt

> I'm all for this, but just FYI, it looks like the team won't have a stable SDK until around August. I've been using the mark3labs SDK. Should we go this route, using only the `client`, just to kick this off?

Unless there are specific features lacking in go-sdk, I think it might be worth implementing with it rather than having to migrate from mcp-go in the future.

> **Supported transports**
>
> SSE is going to be deprecated in the future. Would it be worth implementing? Likewise with stdio: it is really tough to support once the executable is in a container.

Ideally it would be best to support all transport protocols. We'll want to support the SSE transport at a minimum in case the imported server does not support the 2025 versions yet. stdio might be a little more complicated to implement (especially if we are importing more than one server that uses the stdio transport), but I could see it as a plus if we are able to take servers via stdio and "convert" them to remote.

> **Conflict resolution for tool collisions**
>
> This should be a global setting since it could impact any source type.

For the MVP, it might be good to keep this simple: (a) conflict between local Toolbox tools and an imported server's tools: the local tool takes precedence; (b) conflict between two imported servers: the last imported tool wins. We could possibly add more rules around this later, WDYT?
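The two MVP rules above can be sketched in a few lines of Go. The registry shapes (tool name mapped to owning source) are illustrative assumptions, not the Toolbox's actual types:

```go
package main

import "fmt"

// mergeMVP applies the simple rules proposed above: (a) a local Toolbox
// tool always takes precedence over an imported one, and (b) between
// imported servers, the last imported tool wins.
func mergeMVP(local map[string]string, imported []map[string]string) map[string]string {
	merged := make(map[string]string)
	// Rule (b): apply imports in order, so later imports overwrite earlier ones.
	for _, imp := range imported {
		for name, src := range imp {
			merged[name] = src
		}
	}
	// Rule (a): local tools overwrite anything imported.
	for name, src := range local {
		merged[name] = src
	}
	return merged
}

func main() {
	local := map[string]string{"query": "toolbox"}
	imports := []map[string]string{
		{"query": "mcp-a", "a1": "mcp-a"},
		{"a1": "mcp-b", "b1": "mcp-b"},
	}
	fmt.Println(mergeMVP(local, imports)) // query -> toolbox, a1 -> mcp-b, b1 -> mcp-b
}
```
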
> **Authentication mechanisms / connecting to remote MCP servers**
>
> This is more challenging since each MCP server can define its own auth scheme and can reject the `initialize` call if the correct auth is not in place. In my opinion, we should create the new sources:
>
> - For static keys like `PAT` or `API_KEY`
> - OAuth with Dynamic Client Registration
> - OAuth with a provided known public `ClientId` and optionally `ClientSecret`
>
> This supports the auth spec. For implementation, we could break the OAuth flow into multiple issues, since it is more involved, and focus on the first static-keys source.
>
> One other consideration is whether we want to support clients that bring their own OAuth token generated out of band (generating a token externally and sending it through).

Toolbox's current MCP implementation doesn't support OAuth. Do you think it's worth supporting for the MVP? I might be wrong, but I don't see many MCP servers supporting MCP auth yet.
@Yuan325

Using go-sdk sounds good to me, and I can start off with one of the transports that the go-sdk supports and run with that. Your tool collision flow sounds fine to me for now. Maybe in the future we would want to use the tool path `sourceIdentifier/tool_name`. I don't think we need to support OAuth at the moment since you're correct: most authorization servers don't support the OAuth 2.1 spec with Dynamic Client Registration. However, we can scope it down to MCP servers without auth plus the new static source secret type for `PAT` and `API_KEY` like I mentioned above.

If all of this sounds good, I can get started next week. I wanted to preface that I am still early in learning Go, but would still like to tackle this. I completely understand if someone else wants to take it over because of this.
I have been able to get this functioning over SSE with an MCP server without auth. I need to do a bit more cleanup, additional functionality, and more tests before I submit a draft PR. I also ran into a few go-sdk issues along the way.

Supporting issues:
@wagnerjt

> Using go-sdk sounds good to me, and I can start off with one of the transports that the go-sdk supports and run with that. Your tool collision flow sounds fine to me for now. Maybe in the future we would want to use the tool path `sourceIdentifier/tool_name`. I don't think we need to support OAuth at the moment since you're correct: most authorization servers don't support the OAuth 2.1 spec with Dynamic Client Registration. However, we can scope it down to MCP servers without auth plus the new static source secret type for `PAT` and `API_KEY` like I mentioned above. If all of this sounds good, I can get started next week. I wanted to preface that I am still early in learning Go, but would still like to tackle this. I completely understand if someone else wants to take it over because of this.

Sounds good! The tool path option sounds good, but I'm wondering if it might confuse the LLM (if both tools are very similar in functionality, it might not pick the right tool). Or I guess this will be up to the user to decide how they word their descriptions. Regardless, it's something that we definitely want to talk more about. The previous discussion on the ability to configure (ignore/overwrite etc.) might be a great resolution. We might be able to hear more opinions from users once this feature is out and people start using it! :)

We might also want to add a new tool for the `mcp-server` sources since that's how we trigger a tool invocation.

Just to finalize and make sure that we're on the same page, this will be the configuration of the source:

```yaml
sources:
  remote-mcp-a:
    kind: mcp-server
    endpoint: https://mcp-a.example.com
    secretKey: {secret} # is this good for either PAT / API_KEY? Or should we have separate ones for them?
    transport: "httpsse" # this could be one of "httpsse" / "shttp" / "stdio"
```

During initialization of the source, we will run the initialization lifecycle with `remote-mcp-a`. After the initialization lifecycle, for each tool within `remote-mcp-a`, we will create a new tool. Users won't have to define them in the `tools.yaml` configuration file since we will create those for them.
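The initialize-then-list lifecycle described above could be sketched in Go roughly as follows. The `remoteServer` interface, `loadSource` helper, and registry shape are hypothetical stand-ins, not the Toolbox's or the MCP SDK's actual API:

```go
package main

import "fmt"

// remoteServer abstracts the subset of the MCP client lifecycle used
// here: initialize, then tools/list.
type remoteServer interface {
	Initialize() error
	ListTools() ([]string, error)
}

// loadSource runs the lifecycle against one configured source and
// registers every reported tool, so users don't have to declare the
// tools in tools.yaml themselves.
func loadSource(name string, srv remoteServer, registry map[string]string) error {
	if err := srv.Initialize(); err != nil {
		return fmt.Errorf("initialize %s: %w", name, err)
	}
	tools, err := srv.ListTools()
	if err != nil {
		return fmt.Errorf("list tools %s: %w", name, err)
	}
	for _, t := range tools {
		registry[t] = name // record the owning source for invocation routing
	}
	return nil
}

// fakeServer is a stand-in used only for this demonstration.
type fakeServer struct{ tools []string }

func (f fakeServer) Initialize() error            { return nil }
func (f fakeServer) ListTools() ([]string, error) { return f.tools, nil }

func main() {
	registry := map[string]string{}
	_ = loadSource("remote-mcp-a", fakeServer{tools: []string{"A1", "A2"}}, registry)
	fmt.Println(registry["A1"], registry["A2"]) // remote-mcp-a remote-mcp-a
}
```
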
Hiya @Yuan325

> The tool path options flow

I am running with your tool flow suggestion and will keep the tool name as-is. We can extend this with other feedback: (a) conflict between local Toolbox tools and an imported server's tools: the local tool takes precedence; (b) conflict between two imported servers: the last imported tool wins.

There were some learnings from using `/` as the prefix separator: some integrations support it in the tool name, but others do not.

> We might also want to add a new tool for the `mcp-server` sources since that's how we trigger a tool invocation.

I actually started with this and went a different route. I still have the `tool` interface built out, but it is not initialized as normal. I opted for a new interface that identifies a source that can create tools dynamically. This will be a big point of feedback -- exactly where and how this should be used (by Go standards). This allows no tool definitions, to your point: "Users won't have to define [tools] in the tools.yaml configuration file since we will create those for them."

And the source shape looks like:

```yaml
sources:
  my-mcp-server:
    kind: mcp-server
    endpoint: http://127.0.0.1:8080/mcp
    transport: http
    specVersion: 2025-03-26
    authMethod: bearer
    authSecret: ${MCP_SECRET}
```
**Reference**

| field | type | required | description |
|---|---|---|---|
| kind | string | true | Must be `"mcp-server"`. |
| endpoint | string | true | Connect URI, e.g. `http://127.0.0.1/mcp` |
| specVersion | string | false | One of the supported MCP specification versions. |
| transport | string | false | One of the supported MCP transport types. |
| authMethod | string | false | One of the supported auth method types. |
| authSecret | string | false | The secret value used along with the `authMethod`. |
Let me create the draft PR shortly so we can start having a discussion around the specifics in the documentation.
From all of this, I have more ideas now, and would love to have a separate discussion around these topics.
@wagnerjt

> I am running with your tool flow suggestion and will keep the tool name as-is. We can extend this with other feedback: (a) conflict between local Toolbox tools and an imported server's tools: the local tool takes precedence; (b) conflict between two imported servers: the last imported tool wins.
>
> There were some learnings from using `/` as the prefix separator: some integrations support it in the tool name, but others do not.

Good to know. I agree to keep the tool name conflict resolution simple for now and extend it based on feedback.

> I actually started with this and went a different route. I still have the `tool` interface built out, but it is not initialized as normal. I opted for a new interface that identifies a source that can create tools dynamically. This will be a big point of feedback -- exactly where and how this should be used (by Go standards). This allows no tool definitions, to your point: "Users won't have to define [tools] in the tools.yaml configuration file since we will create those for them."

Would love to know more about the new interface if you have more details on this.

> And the source shape looks like:
>
> ```yaml
> sources:
>   my-mcp-server:
>     kind: mcp-server
>     endpoint: http://127.0.0.1:8080/mcp
>     transport: http
>     specVersion: 2025-03-26
>     authMethod: bearer
>     authSecret: ${MCP_SECRET}
> ```
>
> Let me create the draft PR shortly so we can start having a discussion around the specifics in the documentation.

Sounds good, the source config LGTM. :) I'm assuming that for `specVersion`, if it's not noted, we'll use the latest supported version?

> From all of this, I have more ideas now, and would love to have a separate discussion around these topics.

Sg, let's hear it! Do you want to continue those discussions in this thread, or is it a separate feature?
@Yuan325 I created the draft PR for this already. The description has more of the interface definition, and the mcpservers.md file contains the lower-level details! For `specVersion`, I am totally fine defaulting to the latest supported version and letting the client fall back, like the discussion in the go-sdk.

As for other ideas, I can create another issue/feature and have discussions there.

I have been able to:

- Bring in the input schema if defined
- Bring in the output schema if defined
- Validate using the various auth methods over SSE

Items that are pending:

- Streamable HTTP support only functions for MCP servers defined with `"text/event-stream"`, until this issue is resolved
- stdio support
@wagnerjt Sounds good, thank you! Sorry it's taking a while to review your PR. I'll get back to you tomorrow on the draft PR once I've discussed with my team how to implement tools for `mcp-server` within `source`. :)
Hi @wagnerjt ! :)

The source structs defined in the PR look good. However, there are some further design decisions to finalize for tools retrieved from `mcp-server` sources.

The capability to import all tools from an external server and expose them as tools within Toolbox doesn't bring much value to users, since users can just add those servers directly to their clients. If we look at how Toolbox can bring value to users (or how we differ from other servers/MCP implementations), there are two main differentiators: (1) the Toolbox auth flow that allows authenticated parameters, and (2) toolset capabilities (this seems to be one of the main motivations of the original proposal). Feel free to let me know if there are other benefits of importing all tools from an external server by default, or other use cases where this brings value to users.
With that in mind, here are some designs for tools:

- For the MVP, we can just import specific tools that are defined by the user. Within `tools.yaml`, users can define each specific tool that they would like to import (users have to declare every tool they want to use):

```yaml
tools:
  mcp-weather-tool:
    kind: mcp-tool
    source: my-weather-mcp-server
    toolName: weather-tool # this will be the name of the mcp server's tool
```
With this, users can also incorporate the tool into existing toolsets:

```yaml
toolsets:
  my-toolset:
    - flight-tool # local Toolbox tool
    - mcp-weather-tool # imported tool
```
Users can also use Toolbox as a proxy for the imported tool. With this, a user can use one of the authenticated parameters within Toolbox:

```yaml
tools:
  mcp-weather-tool:
    kind: mcp-tool
    source: my-weather-mcp-server
    toolName: weather-tool
    parameters: # other parameters that are not listed will not be sent during invocation
      - name: city
        type: string
        description: name of the city.
      - name: user_name # user can utilize the authenticated parameter capability in Toolbox
        type: string
        description: auto populated from google login
        authServices:
          - name: my-google-auth
            field: sub
```
- After the MVP, we can further discuss how to import a set of tools directly from the `mcp-server` source. Current thoughts -- there are two ways we can do this:

  a. Import via tools:

  ```yaml
  tools:
    my-mcp-toolset: # This will automatically import the list of tools that matches the regex into a new toolset
      kind: mcp-toolset
      source: my-mcp-server
      regexMatch: "my_tools.*"
      reloadTool: 30s # reload tool every 30s
  ```

  b. Import via toolsets:

  ```yaml
  toolsets:
    my-mcp-toolset:
      kind: mcp-regex
      source: my-mcp-server
      regexMatch: "my_tools.*" # can either use regex or prefix to match tool name
      prefix: "some_prefix"
      reloadTool: 30s # reload tool every 30s
  ```
Conflict resolution can also build on top of this (e.g. adding a field `overwrite: true/false` to indicate that the imported tool should overwrite the local tool, defaulting to `false`).
Let me know what are your thoughts on this.
Hey @Yuan325, thanks for the write-up! I like the regex-on-the-toolset idea! I will start working towards the tool definition in your part 1 design, and I can move the dynamic tool interface closer to the toolset for reuse so it is set up for the latter half of part 2!

I understand what you are going for in redefining the parameters in the weather-tool example, but I need clarification regarding "other parameters that are not listed will not be sent during invocation".
Which is preferred:

1. Do we take the `parameters` definition as the source of truth for the tool's input schema?
2. Do we merge what is defined in `parameters` with the schema from the `tools/list` call?
Ex:

```yaml
parameters: # other parameters that are not listed will not be sent during invocation
  - name: city
    type: string
    description: name of the city.
  - name: user_name # user can utilize the authenticated parameter capability in Toolbox
    type: string
    description: auto populated from google login
    authServices:
      - name: my-google-auth
        field: sub
```

Where the real tool definition also includes:

```yaml
- name: time_of_day
  type: int64
  description: unix timestamp
```
Opt 1 leaves the responsibility to the Toolbox owner to maintain each tool arg, and they can lose fidelity of which args are required per the MCP spec. I'd personally go with opt 2, since we will be refreshing against the tool's definitions in the future.

> other benefits of importing all tools by default into Toolbox

I can add one point here: acting as a proxy/registry, more than anything. I was even thinking that any `mcp-server` source would automatically create a default toolset for all the tools under the source name, but the regex is more explicit, and anyone can always use `.*` if they want to opt in to that behavior, which feels cleaner.
@wagnerjt The main motivation for opt 1 is in case the user wants to remove a certain parameter. In that case, it will be either (1) specify no params -- Toolbox will mirror the parameters from the mcp-tool, or (2) specify all params that the user wants to use.

If we go for opt 2, what would the implementation be when a user wants to remove a parameter?

Thanks for the clarification! Option 2 would not have the ability to remove or exclude parameters, and if that is the main goal, then the behavior in opt 1 makes sense! I'll make sure to document this expected behavior accordingly.
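The opt 1 behavior agreed on above ("parameters not listed are not sent during invocation") amounts to a simple allow-list filter before proxying. A minimal Go sketch, assuming an illustrative `filterArgs` helper rather than the Toolbox's actual invocation code:

```go
package main

import "fmt"

// filterArgs treats the parameters declared in tools.yaml as the source
// of truth: any argument not declared there is dropped before the
// invocation is proxied to the remote MCP server.
func filterArgs(declared []string, args map[string]any) map[string]any {
	allowed := make(map[string]bool, len(declared))
	for _, p := range declared {
		allowed[p] = true
	}
	out := make(map[string]any)
	for k, v := range args {
		if allowed[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	declared := []string{"city", "user_name"}
	args := map[string]any{"city": "Berlin", "user_name": "alice", "time_of_day": 1700000000}
	fmt.Println(len(filterArgs(declared, args))) // time_of_day is excluded
}
```
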
Sorry Yuan, I haven't had a moment this week to tackle this. I'll try to spend some time on it this weekend and hope to have another review sometime next week.
@wagnerjt Sounds good! Let me know once it's ready and I'll take a look :)
@Yuan325 and @wagnerjt, thanks for your great effort! I have some time this weekend; please let me know if there's anything I can help with.
Hey @tuannvm, sure thing. If you want to pick up from my branch, go for it, or if you want to take it over entirely, that is okay as well! Unfortunately I haven't had extra time since I first created the draft, and I probably won't be able to pick it back up until the middle of the week. Let me know and I'll give you some notes on the official MCP repo, since it might be worth excluding some functionality from the first version.