
fix(sequential-thinking): Add input sanitization for numeric parameters

Open · ketema opened this pull request 3 months ago · 15 comments

Problem

Some MCP clients may serialize numeric parameters as strings in JSON payloads, causing validation errors with the current strict type checking.

Error Message:

{
  "error": "Invalid thoughtNumber: must be a number",
  "status": "failed"
}

Root Cause: The validateThoughtData() method performs strict type checking (typeof data.thoughtNumber !== 'number'), which fails when clients pass "1" (string) instead of 1 (number).

Solution

Add input sanitization to coerce valid numeric strings to numbers before validation, while maintaining type safety.

Changes

  • Added sanitizeNumericParam() private helper method
  • Sanitizes thoughtNumber, totalThoughts, revisesThought, branchFromThought
  • Coerces valid numeric strings (e.g., "1") to numbers (e.g., 1)
  • Preserves original validation logic after sanitization
  • Returns invalid values as-is for clear error messages

Testing

  • Handles number inputs: 1 → 1 (pass through)
  • Handles string inputs: "1" → 1 (coerce)
  • Handles invalid inputs: "abc" → "abc" (validation fails with a clear error)

Impact

  • ✅ Improves client compatibility
  • ✅ Maintains type safety
  • ✅ No breaking changes
  • ✅ Clear error messages preserved

Example

Before:

// Client sends: { thoughtNumber: "1", ... }
// Result: Error: Invalid thoughtNumber: must be a number

After:

// Client sends: { thoughtNumber: "1", ... }
// Sanitized to: { thoughtNumber: 1, ... }
// Result: Success

Testing Evidence

Tested with Augment.AI MCP client:

  • ✅ Accepts thoughtNumber: 1 (number)
  • ✅ Accepts thoughtNumber: "1" (string, coerced to number)
  • ✅ Rejects thoughtNumber: "abc" (invalid, clear error)
  • ✅ Thought history maintained correctly
  • ✅ No regressions observed

Additional Context

This fix addresses non-deterministic validation errors observed with MCP clients that use LLM-generated tool calls, where numeric parameters may be serialized as strings depending on the LLM's output format.

The sanitization approach:

  1. Checks if value is already a number → pass through
  2. Checks if value is a valid numeric string → parse and return number
  3. Otherwise → return as-is, let validation fail with clear error
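
As a sketch, those three steps might look like this in the server's TypeScript. The helper name comes from the PR description; the exact body here is an assumption, not the actual diff:

```typescript
// Hypothetical sketch of sanitizeNumericParam, following the three steps
// described above; the exact implementation in the PR may differ.
function sanitizeNumericParam(value: unknown): unknown {
  // Step 1: already a number -> pass through.
  if (typeof value === 'number') return value;
  // Step 2: a valid numeric string -> parse and return the number.
  if (typeof value === 'string' && /^\d+$/.test(value)) {
    const parsed = parseInt(value, 10);
    if (parsed > 0) return parsed;
  }
  // Step 3: anything else -> return as-is, so the existing strict
  // validation fails with its original, clear error message.
  return value;
}
```

Because invalid values are returned unchanged, the existing validateThoughtData() check still sees "abc" as a non-number and produces the same error as before.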

This maintains the original validation behavior while improving compatibility with clients that serialize numbers as strings.

ketema · Oct 03 '25 09:10

Thanks for the review, @Copilot!

Addressing your feedback:

✅ Comment 1: Regex inconsistency with zero - FIXED

You're absolutely right about the inconsistency. The regex /^\d+$/ allowed '0' to match, but the parsed > 0 condition rejected it.

Fix applied: Updated regex to /^[1-9]\d*$/ to explicitly exclude zero.

Rationale: Thought numbers are semantic indices representing "which thought in the sequence" (1st, 2nd, 3rd, etc.). Zero doesn't make sense in this context - thoughts are 1-indexed, not 0-indexed.

❌ Comment 2: Negative numbers and decimals - Not applicable

The suggestion to handle negative numbers and decimals doesn't apply here because:

  1. Semantic context: These parameters represent thought indices (thoughtNumber, totalThoughts, etc.), not arbitrary numeric values
  2. Negative numbers: Already prevented by the regex (no minus sign in pattern)
  3. Decimals: Don't make sense for "which thought number" (can't have the 1.5th thought)

The function name sanitizeNumericParam is accurate - it sanitizes numeric parameters that should be positive integers starting from 1.


Updated behavior:

  • ✅ Accepts: '1' → 1, '2' → 2, '999' → 999
  • ❌ Rejects: '0' (fails regex), '-1' (fails regex), '1.5' (fails regex)

The fix maintains type safety while being explicit about the valid range for thought indices.
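
Concretely, the updated helper might read as follows, again a sketch under the same assumptions, with the revised regex from the fix described above:

```typescript
// Hypothetical post-fix sketch: /^[1-9]\d*$/ rejects '0', '-1', and '1.5'
// outright, so no separate `parsed > 0` check is needed after parsing.
function sanitizeNumericParam(value: unknown): unknown {
  if (typeof value === 'number') return value;           // numbers pass through
  if (typeof value === 'string' && /^[1-9]\d*$/.test(value)) {
    return parseInt(value, 10);                          // '1' -> 1, '999' -> 999
  }
  return value; // '0', '-1', '1.5', 'abc' fall through to validation
}
```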

ketema · Oct 03 '25 10:10

I don’t have access to the clients and have noticed this issue in Copilot, Augment, and Gemini. I felt this was a simple fix for any client.

In reply to adam jones's review:

adam jones requested changes on this pull request.

I think we don't plan to accept this change - this seems like a model bug that it is passing through the wrong shaped parameters. If we do want to correct this for poorly performing models, the place to do that would probably be the client anyways where it can be implemented once rather than server side.

ketema · Oct 10 '25 17:10

I don’t have access to the clients and have noticed this issue in Copilot, Augment, and Gemini. I felt this was a simple fix for any client.

@ketema out of curiosity, what models are you using when this happens?

olaservo · Oct 13 '25 01:10

It has mostly happened with Anthropic Claude models and mostly with the Augment.ai agent, but as mentioned I have had it happen with Copilot and Claude, and once with Gemini in an early 1.5 Flash model, although I wasn’t keeping strict count. It just happened enough that I decided to ask why and then make the patch.

In reply to Ola Hungerford's review:

Ola Hungerford commented on this pull request.

Agree with Adam that I'm not sure if forcing the values is the right thing to do here. I did have another question though about what models you're using when this happens.

In src/sequentialthinking/.npmrc:

@@ -0,0 +1,19 @@
+# Local NPM configuration for Sequential Thinking MCP Server

Was this intentionally checked in?

In src/sequentialthinking/SCHEMA_FIX_EXPLANATION.md:

@@ -0,0 +1,221 @@
+# Sequential Thinking MCP Server - Schema Fix Explanation

Was this intentionally checked in?

ketema · Oct 13 '25 08:10

Sorry about the .npmrc, I will remove it.


ketema · Oct 13 '25 08:10

Hi @ketema and @domdomegg I'm seeing users report this issue happening with Claude and popular clients, so I'm thinking we do need to force these inputs to int? I also added tests for this server in a separate PR, so let's update this branch to grab the latest from main if we think we want to address this in the server.

olaservo · Oct 29 '25 04:10

Hi @ketema and @domdomegg I'm seeing users report this issue happening with Claude and popular clients, so I'm thinking we do need to force these inputs to int? I also added tests for this server in a separate PR, so let's update this branch to grab the latest from main if we think we want to address this in the server.

I agree, but maintainers do not. Let me know and I can clean this up and resolve the conflicts that are now present.

ketema · Oct 29 '25 12:10

I'm seeing users report this issue happening with Claude and popular clients

Do you have more details on this? Chat share uuids or transcripts would be especially helpful. Looking at our metrics we don't seem to see this. And if this is a problem we'd really like to fix it in our model tool calling behaviours. (We can also fix it in the server as a workaround... maybe even do some coercion in the MCP SDKs rather than specific servers if this is a widespread problem)

domdomegg · Oct 29 '25 13:10

Hello all. I would like to kindly ask what is the status of this PR? Is there an ETA?

iraklisg · Nov 06 '25 09:11

Any update on this? This MCP server has become completely unusable due to this issue.

arjunswaj · Nov 12 '25 07:11

[Screenshot 2025-11-13 at 4:07:19 PM]

I am also encountering the same issue in my Augment code agent.

mrmarufpro · Nov 13 '25 10:11

The fix is there, but maintainers have said this is a client issue. I just forked and run my own branch.


ketema · Nov 13 '25 15:11

Constantly hitting this too with the latest version of Claude Code. Thanks for the fix @ketema!

bxm156 · Nov 13 '25 22:11

Heya, if you're hitting issues can you share the chat transcripts that led up to this + how frequent this is (e.g. is it 10% of the time, 50% of the time or 90% of the time)? I'm still keen to get to the bottom of this so we can implement a robust fix that works for all parameters across all servers.

domdomegg · Nov 17 '25 12:11

Just saw https://docs.claude.com/en/docs/build-with-claude/structured-outputs. This will probably be the way to resolve this issue formally. My fix works, and I use it, but structured outputs are likely the correct long-term approach.

ketema · Nov 18 '25 00:11

Closing because maintainers have stated they will not merge.

ketema · Dec 30 '25 13:12