
Bedrock: openai.gpt-oss-120b-1:0 fails with reasoningContent.reasoningText.signature error

Open fcatuhe opened this issue 4 days ago • 0 comments

Bug Description

When using openai.gpt-oss-120b-1:0 via Amazon Bedrock, requests fail with:

Error: This model doesn't support the reasoningContent.reasoningText.signature field. Remove reasoningContent.reasoningText.signature and try again.

This occurs even on a fresh session with no prior conversation history.

Note: We only tested with openai.gpt-oss-120b-1:0. Other non-Claude Bedrock models may have the same issue.

Context

The Bedrock test suite shows this model passes the basic test:

✓ should make a simple request with openai.gpt-oss-120b-1:0

However, when used through pi (the coding agent), the error occurs. This suggests the issue is triggered by request content or options that pi sends but the basic test does not.

Analysis

The error indicates that a reasoningContent.reasoningText.signature field is being sent in the request, which this model doesn't support. Potential sources:

  1. Message conversion (convertMessages in amazon-bedrock.ts lines 353-360): Includes reasoningContent with a signature when processing thinking blocks from assistant messages (see the sketch after this list)

  2. Missing transformMessages call: Unlike all other providers, the Bedrock provider doesn't call transformMessages(), which handles cross-provider message normalization (converting thinking blocks to plain text when switching models)

  3. Request configuration: Something in the request-building path that includes reasoning-related fields
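
For reference, here is a hypothetical sketch of the conversion path described in item 1. The type shapes and function name are assumptions for illustration only; the actual code is convertMessages in amazon-bedrock.ts.

```typescript
// Hypothetical shapes for illustration; the real types come from pi's message
// format and the Bedrock Converse API and may differ.
interface ThinkingBlock {
  type: "thinking";
  thinking: string;
  signature?: string;
}

interface BedrockReasoningBlock {
  reasoningContent: {
    reasoningText: { text: string; signature?: string };
  };
}

// Sketch of the suspected behavior: the signature from an assistant thinking
// block is forwarded unconditionally, even to models that reject
// reasoningContent.reasoningText.signature.
function convertThinkingBlock(block: ThinkingBlock): BedrockReasoningBlock {
  return {
    reasoningContent: {
      reasoningText: {
        text: block.thinking,
        signature: block.signature, // rejected by openai.gpt-oss-120b-1:0
      },
    },
  };
}
```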

Comparison with Other Providers

All other providers use transformMessages() for message normalization:

  • anthropic.ts
  • openai-completions.ts
  • openai-responses.ts
  • openai-codex-responses.ts
  • google-shared.ts
  • amazon-bedrock.ts (missing)

Additionally, even if Bedrock used transformMessages, the current logic only checks provider and api, not the specific model. Switching between models within Bedrock (same provider, same API) wouldn't trigger transformation.
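
A minimal sketch of that gap, assuming a simplified needsTransformation helper (the real transformMessages logic is more involved):

```typescript
// Simplified illustration of a provider/api-only check.
interface MessageMeta {
  provider: string; // provider identifier, e.g. "amazon-bedrock"
  api: string;      // API identifier compared alongside provider
  model: string;    // e.g. an anthropic.claude model vs openai.gpt-oss-120b-1:0
}

function needsTransformation(previous: MessageMeta, current: MessageMeta): boolean {
  // Only provider and api are compared. Switching from a Claude model to
  // openai.gpt-oss-120b-1:0 within Bedrock keeps both values identical, so
  // this returns false and thinking blocks pass through untransformed.
  return previous.provider !== current.provider || previous.api !== current.api;
}
```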

Proposed Solutions

Option 1: Add transformMessages to Bedrock + enhance it

  • Add transformMessages call to Bedrock provider for consistency
  • Enhance transformMessages to also consider model capabilities, not just provider/api (see the sketch after this list)
  • Pro: Centralizes cross-model compatibility logic
  • Con: Requires changes to shared transformation logic
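
A rough sketch of that enhancement, assuming a hypothetical supportsReasoningSignature capability lookup:

```typescript
// Illustration only; the real check would live inside transformMessages and
// use whatever per-model metadata pi already tracks.
type Meta = { provider: string; api: string; model: string };

function needsTransformationWithModel(
  previous: Meta,
  current: Meta,
  supportsReasoningSignature: (model: string) => boolean,
): boolean {
  return (
    previous.provider !== current.provider ||
    previous.api !== current.api ||
    // New: also normalize when the target model cannot accept reasoning
    // signatures carried over from the previous model's thinking blocks.
    !supportsReasoningSignature(current.model)
  );
}
```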

Option 2: Bedrock-specific fix

  • In convertMessages, check whether the target model supports reasoningContent (e.g., model ID contains "anthropic.claude") before including thinking blocks (see the sketch after this list)
  • Pro: Localized change
  • Con: Duplicates logic
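
A sketch of what that guard could look like, using the model-ID heuristic mentioned above (the helper names are hypothetical):

```typescript
// Heuristic: treat only Claude models on Bedrock as accepting reasoningContent
// with a signature. This mirrors the suggestion above, not a confirmed API.
function modelSupportsReasoningContent(modelId: string): boolean {
  return modelId.includes("anthropic.claude");
}

function convertThinkingForModel(modelId: string, thinking: string, signature?: string) {
  if (!modelSupportsReasoningContent(modelId)) {
    // Downgrade the thinking block to plain text for models such as
    // openai.gpt-oss-120b-1:0 that reject reasoningContent.reasoningText.signature.
    return [{ text: thinking }];
  }
  return [{ reasoningContent: { reasoningText: { text: thinking, signature } } }];
}
```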

Option 3: Hybrid approach

  • Add transformMessages for consistency
  • Add Bedrock-specific check for intra-provider model differences

Request for Guidance

Which fix approach would you prefer?


Investigation performed with AI assistance (Claude Opus 4.5)

fcatuhe · Jan 13 '26 10:01