
BedrockProxyChatModel returns null response for OpenAI GPT OSS models on Bedrock Converse API

Open ybezsonov opened this issue 1 week ago • 1 comment

Bug description

BedrockProxyChatModel returns a null response when using OpenAI GPT OSS models via the AWS Bedrock Converse API, while Claude models work correctly. The issue occurs because GPT models return multiple ContentBlocks (ReasoningContent followed by Text), and the code calls text() on every block without a null check, so the Generation built from the ReasoningContent block ends up with null content and is the one surfaced as the response.

Environment

  • Spring AI version: 1.0.3
  • Java version: 21
  • AWS Bedrock Converse API
  • Model: openai.gpt-oss-120b-1:0 (gpt-oss-120b via Bedrock)
  • Working model: anthropic.claude-sonnet-4-20250514-v1:0 (for comparison)

Steps to reproduce

  1. Configure Spring AI with AWS Bedrock Converse API
  2. Set model to any OpenAI GPT model (e.g., openai.gpt-oss-120b-1:0)
  3. Send a simple chat request like "Who are you?"
  4. Observe that the response content is null

Expected behavior

The chat response should contain the text from the GPT model, similar to how Claude models return their responses.

Root Cause

In BedrockProxyChatModel.java (lines 670-677), the toChatResponse method processes the ContentBlocks as follows:

List<Generation> generations = message.content()
    .stream()
    .filter(content -> content.type() != ContentBlock.Type.TOOL_USE)
    .map(content -> new Generation(
            AssistantMessage.builder().content(content.text()).properties(Map.of()).build(),
            ChatGenerationMetadata.builder().finishReason(response.stopReasonAsString()).build()))
    .toList();

Debug logs show the difference:

GPT model response:

Content=[ContentBlock(ReasoningContent=*** Sensitive Data Redacted ***), ContentBlock(Text=I'm ChatGPT...)]

Claude model response:

Content=[ContentBlock(Text=I'm Claude...)]

The GPT model returns two ContentBlocks:

  1. A ReasoningContent block (which has text() returning null)
  2. A Text block with the actual response

The code calls .text() on every ContentBlock without checking for null, so the ReasoningContent block produces a Generation whose content is null. Since the chat response exposes the first Generation as its result, ChatClient returns null even though the second Generation carries the actual text.
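
For reference, here is a small diagnostic sketch of my own (not Spring AI code) that dumps the raw Converse output and makes the block shape visible; it assumes direct access to the AWS SDK ConverseResponse:

import software.amazon.awssdk.services.bedrockruntime.model.ContentBlock;
import software.amazon.awssdk.services.bedrockruntime.model.ConverseResponse;
import software.amazon.awssdk.services.bedrockruntime.model.Message;

static void dumpContentBlocks(ConverseResponse response) {
    Message message = response.output().message();
    for (ContentBlock block : message.content()) {
        // For gpt-oss this prints the reasoning block first (text() == null),
        // then the TEXT block with the actual answer.
        System.out.printf("type=%s text=%s%n", block.type(), block.text());
    }
}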

Proposed Solution

Add a filter to skip ContentBlocks that don't have text content:

List<Generation> generations = message.content()
    .stream()
    .filter(content -> content.type() != ContentBlock.Type.TOOL_USE)
    .filter(content -> content.text() != null)  // Add this line
    .map(content -> new Generation(
            AssistantMessage.builder().content(content.text()).properties(Map.of()).build(),
            ChatGenerationMetadata.builder().finishReason(response.stopReasonAsString()).build()))
    .toList();

This ensures only ContentBlocks with actual text are processed, fixing the null response for GPT models while maintaining compatibility with Claude and other models.
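
As a rough sanity check (my own sketch, not part of the Spring AI test suite; the ReasoningContentBlock and ReasoningTextBlock builder names are taken from the AWS SDK model classes and may differ by SDK version), the filter can be exercised against a hand-built block list:

import java.util.List;
import software.amazon.awssdk.services.bedrockruntime.model.ContentBlock;
import software.amazon.awssdk.services.bedrockruntime.model.ReasoningContentBlock;
import software.amazon.awssdk.services.bedrockruntime.model.ReasoningTextBlock;

// Mirrors the gpt-oss response shape: a reasoning block first, then the answer text.
List<ContentBlock> blocks = List.of(
        ContentBlock.builder()
                .reasoningContent(ReasoningContentBlock.builder()
                        .reasoningText(ReasoningTextBlock.builder().text("...").build())
                        .build())
                .build(),
        ContentBlock.fromText("I'm ChatGPT..."));

List<String> texts = blocks.stream()
        .filter(block -> block.type() != ContentBlock.Type.TOOL_USE)
        .filter(block -> block.text() != null)   // the proposed fix
        .map(ContentBlock::text)
        .toList();

System.out.println(texts); // [I'm ChatGPT...] -> only the TEXT block survives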

Minimal Complete Reproducible example

@SpringBootApplication
public class AiAgentApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(AiAgentApplication.class, args);

        ChatClient.Builder chatClientBuilder = context.getBean(ChatClient.Builder.class);
        ChatClient chatClient = chatClientBuilder.build();

        String question = "Who are you?";
        System.out.println("USER: " + question);

        String response = chatClient.prompt(question)
            .call()
            .content();

        System.out.println("ASSISTANT: " + response);  // Prints "null" with GPT models

        context.close();
    }
}

application.properties:

spring.ai.bedrock.aws.region=us-east-1
spring.ai.bedrock.converse.chat.options.max-tokens=10000
spring.ai.bedrock.converse.chat.options.model=openai.gpt-oss-120b-1:0
logging.level.org.springframework.ai=DEBUG

Actual output with GPT model:

ASSISTANT: null

Expected output (and what Claude returns):

ASSISTANT: I'm ChatGPT, a conversational AI created by OpenAI...
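
Note that the text is not actually lost; it ends up in the second Generation. A quick way to see that, using the public chatResponse()/getResults() API as of Spring AI 1.0.x, is:

ChatResponse chatResponse = chatClient.prompt(question)
        .call()
        .chatResponse();

// With the unpatched code and a gpt-oss model this should print two lines:
// "null" for the Generation built from the ReasoningContent block,
// then the actual answer from the TEXT block.
chatResponse.getResults()
        .forEach(generation -> System.out.println(generation.getOutput().getText()));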

ybezsonov · Nov 12 '25 19:11