
LiteLLM cannot handle image responses from TogetherAI's text-to-image model Stable Diffusion due to an additional 'id' field in TogetherAI's response

Open · JosefAschauer opened this issue 10 months ago · 0 comments

TogetherAI sends an additional 'id' field in its response, which crashes LiteLLM's response parsing:

```
litellm_1 | )
litellm_1 | ^
litellm_1 | File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/llm_response_utils/convert_dict_to_response.py", line 278, in convert_to_image_response
litellm_1 |     model_response_object = ImageResponse(**model_response_dict)
litellm_1 | TypeError: ImageResponse.__init__() got an unexpected keyword argument 'id'
litellm_1 |
litellm_1 | received_args={'response_object': {'created': None, 'data': [{'b64_json': '/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAQABAAD......
```
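The failure mode can be reproduced outside LiteLLM: unpacking a dict with an unexpected key into a constructor with a fixed signature raises exactly this `TypeError`. The class below is an illustrative stand-in, not LiteLLM's actual `ImageResponse` implementation, and the `id` value is hypothetical:

```python
# Minimal illustration of the failure: a constructor with a fixed
# signature rejects unexpected keys when a dict is unpacked into it.
# (Illustrative stand-in, not LiteLLM's actual ImageResponse class.)

class StrictImageResponse:
    def __init__(self, created=None, data=None):
        self.created = created
        self.data = data

response_dict = {
    "created": None,
    "data": [{"b64_json": "/9j/4AAQ..."}],
    "id": "o-abc123",  # extra field sent by TogetherAI (value hypothetical)
}

try:
    StrictImageResponse(**response_dict)
except TypeError as exc:
    print(exc)  # ... got an unexpected keyword argument 'id'
```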

Model config:

```yaml
- model_name: together_ai/stable-diffusion-xl
  litellm_params:
    model: "together_ai/stabilityai/stable-diffusion-xl-base-1.0"
    api_base: "https://api.together.xyz/v1"
    api_key: tgp_v1_-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    response_format: "base64"  # Ensure base64 output
  model_info:
    mode: image_generation
```

Why This Happens

- **TogetherAI response format:** TogetherAI's `/v1/images/generations` endpoint returns a JSON object like `{"created": null, "data": [{"b64_json": "..."}], "id": "..."}`. The `id` field is part of their response but not part of the standard OpenAI image generation response spec that LiteLLM expects (`{"created": ..., "data": [{"b64_json": "..."}]}`).
- **LiteLLM bug:** The `ImageResponse` class in LiteLLM (constructed at `convert_dict_to_response.py:278`) is not designed to handle extra fields like `id`, causing a `TypeError`. This is a compatibility issue between TogetherAI's response and LiteLLM's parsing logic.
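Until this is fixed upstream, one way to avoid the crash is to drop keys the constructor does not accept before unpacking the dict. A minimal sketch of that filtering approach, again using an illustrative stand-in class rather than LiteLLM's real `ImageResponse`:

```python
import inspect

# Illustrative stand-in for a strict response class (not LiteLLM's code).
class StrictImageResponse:
    def __init__(self, created=None, data=None):
        self.created = created
        self.data = data

def build_response(cls, response_dict):
    """Construct cls, keeping only keys that match its __init__ parameters."""
    allowed = set(inspect.signature(cls.__init__).parameters) - {"self"}
    filtered = {k: v for k, v in response_dict.items() if k in allowed}
    return cls(**filtered)

# The extra 'id' key (value hypothetical) is silently discarded.
resp = build_response(
    StrictImageResponse,
    {"created": None, "data": [{"b64_json": "/9j/4AAQ..."}], "id": "o-abc123"},
)
print(resp.data[0]["b64_json"][:4])  # → /9j/
```

Discarding unknown keys keeps the parser forward-compatible with providers that add fields beyond the OpenAI spec, at the cost of silently losing those fields.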

JosefAschauer · Mar 14 '25 10:03