
(Feature Request) Add OpenRouter integration

Open WardenPro opened this issue 3 months ago • 7 comments

Please add OpenRouter integration to allow using multiple AI models through a single API. This will make it easier to switch models and manage costs within Bytebot.

WardenPro avatar Sep 06 '25 10:09 WardenPro

Have you tried using the LiteLLM proxy version? I have tried adding an OpenAI custom endpoint alternative, but no luck. Some guides in their docs would be very helpful.

gabriellemon avatar Sep 07 '25 16:09 gabriellemon

No one else uses this? Why only 7 rockets...

ThyannSeng avatar Sep 08 '25 09:09 ThyannSeng

I have it working with LiteLLM's built-in proxy. You'll have to compile the list of vision models from OpenRouter and add them to the LiteLLM config.

bytebot_lite_llm_open_router_option_a_proxy_compose_runbook.md
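Compiling that vision-model list could be scripted against OpenRouter's public model catalog. The sketch below filters catalog entries by input modality; the `architecture.input_modalities` field shape is an assumption based on OpenRouter's `/api/v1/models` response, so verify it against their API docs before relying on it:

```python
def vision_model_ids(models):
    """Return ids of catalog entries that accept image input.

    Assumes each entry follows the shape of OpenRouter's
    GET /api/v1/models response, where `architecture.input_modalities`
    lists accepted input types (an assumption; check the API docs).
    """
    return [
        m["id"]
        for m in models
        if "image" in m.get("architecture", {}).get("input_modalities", [])
    ]

# Sample data in the assumed response shape; to build the list live,
# fetch https://openrouter.ai/api/v1/models and pass the "data" array.
catalog = [
    {"id": "x-ai/grok-2-vision-1212",
     "architecture": {"input_modalities": ["text", "image"]}},
    {"id": "mistralai/mistral-7b-instruct",
     "architecture": {"input_modalities": ["text"]}},
]
print(vision_model_ids(catalog))  # -> ['x-ai/grok-2-vision-1212']
```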

Your litellm config should look something like this:

model_list:
  - model_name: grok-2-vision-1212
    litellm_params:
      model: openrouter/x-ai/grok-2-vision-1212
      api_key: os.environ/OPENROUTER_API_KEY
  - model_name: llama-3.2-90b-vision-instruct
    litellm_params:
      model: openrouter/meta-llama/llama-3.2-90b-vision-instruct
      api_key: os.environ/OPENROUTER_API_KEY
  - model_name: llama-vision-11b-free
    litellm_params:
      model: openrouter/meta-llama/llama-3.2-11b-vision-instruct:free
      api_key: os.environ/OPENROUTER_API_KEY
  - model_name: qwen-72b-vision-free
    litellm_params:
      model: openrouter/qwen/qwen2.5-vl-72b-instruct:free
      api_key: os.environ/OPENROUTER_API_KEY
  - model_name: gemini-2.5-flash-image-preview
    litellm_params:
      model: openrouter/google/gemini-2.5-flash-image-preview
      api_key: os.environ/OPENROUTER_API_KEY

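Once the proxy is up, each `model_name` acts as an alias that clients address through LiteLLM's OpenAI-compatible chat-completions endpoint, and LiteLLM routes it to the mapped OpenRouter model. A minimal sketch of the request body a vision call would use (the alias and image URL here are illustrative; the proxy's listen address depends on your deployment):

```python
def build_vision_request(model_alias, prompt, image_url):
    """Build an OpenAI-style chat-completions body for a vision model.

    `model_alias` must match a `model_name` from the LiteLLM config;
    the proxy routes it to the mapped openrouter/... model. POST the
    body to the proxy's /v1/chat/completions endpoint.
    """
    return {
        "model": model_alias,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }

body = build_vision_request(
    "grok-2-vision-1212",
    "Describe this screenshot.",
    "https://example.com/screen.png",  # illustrative URL
)
print(body["model"])  # -> grok-2-vision-1212
```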
zhound420 avatar Sep 08 '25 17:09 zhound420

Hey @zhound420, do you know how to add a custom OpenAI-compatible endpoint such as venice.ai or w/e?

gabriellemon avatar Sep 08 '25 18:09 gabriellemon

how to add a custom openai compatible endpoint such as venice.ai or w/e

You'll need to read up on LiteLLM:

# packages/bytebot-llm-proxy/litellm-config.yaml
model_list:
  # --- Venice.ai example (OpenAI-compatible) ---
  - model_name: venice-coder-32b          # <alias you’ll see in Bytebot>
    litellm_params:
      model: openai/qwen2.5-coder-32b     # <remote vendor model id, prefixed with openai/>
      api_base: https://api.venice.ai/api/v1
      api_key: os.environ/VENICE_API_KEY  # pull from env
      # optional tunables:
      # request_timeout: 600
      # supports_vision: true
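Since both configs pull keys with `os.environ/...`, the container running the proxy has to receive those variables. A sketch of a compose override that passes them through; the service name `bytebot-llm-proxy` is an assumption based on the `packages/bytebot-llm-proxy` path above, so match it to your actual docker-compose file:

```yaml
# docker-compose override (sketch; service name is an assumption)
services:
  bytebot-llm-proxy:
    environment:
      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
      - VENICE_API_KEY=${VENICE_API_KEY}
```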

zhound420 avatar Sep 08 '25 21:09 zhound420

You'd think this would already be included with most projects

HolmesDomain avatar Sep 11 '25 05:09 HolmesDomain

I shared my configuration for OpenRouter here:

https://github.com/bytebot-ai/bytebot/issues/144#issuecomment-3350817420

Dylan-86 avatar Sep 30 '25 09:09 Dylan-86