
Add support for together.ai image generation

Open tombii opened this issue 9 months ago • 4 comments

Title

Together.ai adds `id`, `model`, and `object` to the image generation response.

Type

🐛 Bug Fix

Changes

Added the optional parameters that together.ai sends with its response:

```python
{
    'response_object': {
        'created': None,
        'data': [{
            'b64_json': '/9j/.../9k=',
            'revised_prompt': None,
            'url': None,
            'timings': {'inference': 0.9873751439154148},
            'index': 0,
        }],
        'id': '913e3620e9xxxxxx-LOC',
        'model': 'black-forest-labs/FLUX.1-schnell-Free',
        'object': 'list',
        'hidden_params': {'additional_headers': {}},
    },
    'model_response_object': ImageResponse(
        created=1739883745,
        data=[],
        usage=Usage(completion_tokens=0, prompt_tokens=0, total_tokens=0,
                    completion_tokens_details=None, prompt_tokens_details=None),
    ),
    'response_type': 'image_generation',
    'stream': False,
    'start_time': None,
    'end_time': None,
    'hidden_params': None,
    '_response_headers': None,
    'convert_tool_call_to_json_mode': None,
}
```
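The patch makes these extra fields optional constructor parameters. A minimal standalone sketch of the idea (a simplified stand-in class, not litellm's actual implementation):

```python
from typing import Any, List, Optional


class ImageResponse:
    """Simplified stand-in for litellm's ImageResponse (hypothetical sketch)."""

    def __init__(
        self,
        created: Optional[int] = None,
        data: Optional[List[Any]] = None,
        # Optional fields that together.ai includes in its response:
        id: Optional[str] = None,
        model: Optional[str] = None,
        object: Optional[str] = None,
    ) -> None:
        self.created = created
        self.data = data or []
        self.id = id
        self.model = model
        self.object = object


# A together.ai-style response dict now unpacks without a TypeError:
resp = ImageResponse(
    created=None,
    data=[{"b64_json": "/9j/...", "index": 0}],
    id="913e3620e9xxxxxx-LOC",
    model="black-forest-labs/FLUX.1-schnell-Free",
    object="list",
)
```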

tombii avatar Feb 18 '25 14:02 tombii

Vercel deployment preview for litellm: ✅ Ready (updated Apr 14, 2025, 8:58am UTC).

vercel[bot] avatar Feb 18 '25 14:02 vercel[bot]

Hi @tombii, is there a reason the existing flow does not work?

krrishdholakia avatar Feb 21 '25 06:02 krrishdholakia

@krrishdholakia As you can see below, the response_object contains `id`, `model`, and `object`, which are not accepted by `ImageResponse.__init__()` and therefore raise an error. Maybe there are better ways to do this, but the provided patch fixes the issue for me.

Here's the error message:

```
received_args={'response_object': {'created': None, 'data': [{'b64_json': '/9j/.../2Q==', 'revised_prompt': None, 'url': None, 'timings': {'inference': 0.9612685777246952}, 'index': 0}], 'id': '91559891cxxx-PDX', 'model': 'black-forest-labs/FLUX.1-schnell-Free', 'object': 'list', 'hidden_params': {'additional_headers': {}}}, 'model_response_object': ImageResponse(created=1740128949, data=[], usage=Usage(completion_tokens=0, prompt_tokens=0, total_tokens=0, completion_tokens_details=None, prompt_tokens_details=None)), 'response_type': 'image_generation', 'stream': False, 'start_time': None, 'end_time': None, 'hidden_params': None, '_response_headers': None, 'convert_tool_call_to_json_mode': None}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/litellm/router.py", line 3155, in async_function_with_retries
    response = await self.make_call(original_function, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/router.py", line 3271, in make_call
    response = await response
               ^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1763, in _aimage_generation
    raise e
  File "/usr/lib/python3.13/site-packages/litellm/router.py", line 1750, in _aimage_generation
    response = await response
               ^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1394, in wrapper_async
    raise e
  File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1253, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/main.py", line 4443, in aimage_generation
    raise exception_type(
          ~~~~~~~~~~~~~~^
        model=model,
        ^^^^^^^^^^^^
    ...<3 lines>...
        extra_kwargs=kwargs,
        ^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2207, in exception_type
    raise e  # it's already mapped
    ^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 463, in exception_type
    raise APIConnectionError(
    ...<7 lines>...
    )
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: APIConnectionError: OpenAIException - Invalid response object Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/llm_response_utils/convert_dict_to_response.py", line 586, in convert_to_model_response_object
    return LiteLLMResponseObjectHandler.convert_to_image_response(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        response_object=response_object,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        model_response_object=model_response_object,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        hidden_params=hidden_params,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/llm_response_utils/convert_dict_to_response.py", line 264, in convert_to_image_response
    model_response_object = ImageResponse(**model_response_dict)
TypeError: ImageResponse.__init__() got an unexpected keyword argument 'id'
```
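The failure mode in the last frame is plain keyword unpacking into an `__init__` that doesn't declare the key. A minimal standalone reproduction (simplified class, not litellm's actual code):

```python
class StrictResponse:
    # __init__ only declares 'created' and 'data'; any other key is rejected.
    def __init__(self, created=None, data=None):
        self.created = created
        self.data = data or []


# A provider response carrying an extra top-level key, as together.ai does:
payload = {"created": None, "data": [], "id": "abc123"}

err = None
try:
    StrictResponse(**payload)
except TypeError as e:
    err = e  # "...__init__() got an unexpected keyword argument 'id'"

print(err)
```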

tombii avatar Feb 21 '25 09:02 tombii

CLA assistant check
All committers have signed the CLA.

CLAassistant avatar Apr 22 '25 22:04 CLAassistant

I am encountering the same issue with the model black-forest-labs/FLUX.1-dev (`TypeError: ImageResponse.__init__() got an unexpected keyword argument 'id'`). Could you please advise when these changes might be available in production?

Setom29 avatar Jul 15 '25 14:07 Setom29

This is already resolved with the new ImageResponse object - https://github.com/BerriAI/litellm/blob/197e7efa8f097bb935cf86dc4100422487a40955/litellm/types/utils.py#L1606

Tested the input you shared, and I can see it working.

[Screenshot, 2025-07-15: successful image generation with the shared input]

krrishdholakia avatar Jul 16 '25 03:07 krrishdholakia

@Setom29 What version are you on? Please try v1.74.3 and let me know if that solves it.

krrishdholakia avatar Jul 16 '25 03:07 krrishdholakia

@krrishdholakia Thank you for the suggestion. I was using version 1.69.3, which I had installed fairly recently and assumed was up to date. After updating to v1.74.3, everything works as expected.

Setom29 avatar Jul 16 '25 10:07 Setom29