
[Feature]: How do I configure the OpenRouter API?

Open GiantQuan opened this issue 11 months ago • 3 comments

Class | 类型

None

Feature Request | 功能请求

How do I configure the OpenRouter API?

GiantQuan avatar Jan 28 '25 02:01 GiantQuan

Same question here. This is how I configured it.

I made the following changes in config_private.py:

API_KEY = "sk-or-v1-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"
CUSTOM_API_KEY_PATTERN = "sk-or-v1-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"
API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://openrouter.ai/api/v1/chat/completions"}
AVAIL_LLM_MODELS = ["openrouter-google/gemini-2.0-flash-thinking-exp:free", "openrouter-deepseek/deepseek-r1:free"]
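For reference, a minimal sketch of how such a redirect table can be applied (an illustration only; it assumes gpt_academic looks up the default OpenAI endpoint in API_URL_REDIRECT before posting, and the function name here is hypothetical):

```python
# Sketch: resolve the actual endpoint via the redirect table.
# API_URL_REDIRECT maps the default OpenAI URL to the OpenRouter one.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://openrouter.ai/api/v1/chat/completions",
}

def resolve_endpoint(default_url: str) -> str:
    # Fall back to the default URL when no redirect is configured.
    return API_URL_REDIRECT.get(default_url, default_url)

print(resolve_endpoint("https://api.openai.com/v1/chat/completions"))
# -> https://openrouter.ai/api/v1/chat/completions
```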

But I cannot start a conversation:

[screenshot of the failed conversation omitted]

I used the model names exactly as given on the OpenRouter official site:

  const completion = await openai.chat.completions.create({
    model: "google/gemini-2.0-flash-thinking-exp:free",
    messages: [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What's in this image?"
          }
        ]
      }
    ]
  });
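Note that the AVAIL_LLM_MODELS entries above carry an `openrouter-` prefix that gpt_academic strips before sending the request, so everything after the prefix must match the name on the OpenRouter site exactly. A quick sanity check (illustrative):

```python
# Sanity check: the part after the "openrouter-" prefix must equal
# the model name shown on the OpenRouter site.
PREFIX = "openrouter-"
entry = "openrouter-google/gemini-2.0-flash-thinking-exp:free"
assert entry.startswith(PREFIX)
upstream_name = entry[len(PREFIX):]
print(upstream_name)  # google/gemini-2.0-flash-thinking-exp:free
```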

doctorgreen4721 avatar Jan 29 '25 11:01 doctorgreen4721

@doctorgreen4721 https://github.com/binary-husky/gpt_academic/issues/2120#issuecomment-2621428909

The OpenRouter API docs describe error 400 as: Bad Request (invalid or missing params, CORS).

So the request being sent is most likely malformed.

Reading the source, the POST parameters in bridge_openrouter.py are indeed wrong: the model field being posted is 'model': ('anthropic/claude-3.5-sonnet', 4096), when it should be 'model': 'anthropic/claude-3.5-sonnet'. The 4096 traces back to the method read_one_api_model_name: it is the max_token value, and max_token should be a separate parameter.

Locate generate_payload in bridge_openrouter.py:

    if llm_kwargs['llm_model'].startswith('api2d-'):
        model = llm_kwargs['llm_model'][len('api2d-'):]
    if llm_kwargs['llm_model'].startswith('one-api-'):
        model = llm_kwargs['llm_model'][len('one-api-'):]
        model, _ = read_one_api_model_name(model)
    if llm_kwargs['llm_model'].startswith('vllm-'):
        model = llm_kwargs['llm_model'][len('vllm-'):]
        model, _ = read_one_api_model_name(model)
    if llm_kwargs['llm_model'].startswith('openrouter-'):
        model = llm_kwargs['llm_model'][len('openrouter-'):]
        model= read_one_api_model_name(model)

So the openrouter branch's model = read_one_api_model_name(model) should be changed, like the other branches, to model, _ = read_one_api_model_name(model). If you also want to cap max_token, use model, max_token = read_one_api_model_name(model) and pass max_token in the payload below.
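The bug and the fix can be demonstrated in isolation. The sketch below is not gpt_academic's actual read_one_api_model_name; it is a stand-in that, per the thread, returns a (model, max_token) tuple, with "(max_token)" suffix parsing and the 4096 default assumed for illustration:

```python
import re

def read_one_api_model_name(model: str):
    """Sketch: split an optional "(max_token)" suffix off the model name.
    Returns (model_name, max_token); 4096 is an assumed default."""
    m = re.match(r"^(.*?)\((\d+)\)$", model)
    if m:
        return m.group(1), int(m.group(2))
    return model, 4096

# Buggy: without unpacking, the whole tuple becomes the payload's
# "model" field, which OpenRouter rejects with error 400.
model = read_one_api_model_name("anthropic/claude-3.5-sonnet")
print(model)  # ('anthropic/claude-3.5-sonnet', 4096)

# Fixed: unpack the tuple so only the name string is sent.
model, max_token = read_one_api_model_name("anthropic/claude-3.5-sonnet")
print(model)  # anthropic/claude-3.5-sonnet
```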

sinsctrlme avatar Feb 05 '25 17:02 sinsctrlme

@doctorgreen4721 Hi, with a similar configuration under Docker I get an "api_key not found" prompt and always have to enter the key manually.

environment:
      API_KEY: 'sk-or-v1-XXX'
      CUSTOM_API_KEY_PATTERN: 'sk-or-v1-XXXX'
      LLM_MODEL: 'qwen-turbo'
      AVAIL_LLM_MODELS: '["dashscope-qwen3-235b-a22b", "qwen-turbo", "gemini-2.0-flash", "qwen-max-latest", "openrouter-google/gemini-2.0-flash-exp:free", "openrouter-deepseek/deepseek-r1-0528:free"]'
      API_URL_REDIRECT: '{"https://api.openai.com/v1/chat/completions":"https://openrouter.ai/api/v1/chat/completions"}'
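One thing worth checking: docker-compose passes every value as a plain string, so list- and dict-valued settings must parse cleanly inside the container, and a stray character (like a trailing backtick) would break them. A hedged sketch of such a check (parsing via ast.literal_eval is an assumption about how gpt_academic reads these values, not its actual code):

```python
import ast
import os

# Simulate what docker-compose would export (values arrive as strings).
os.environ["API_KEY"] = "sk-or-v1-XXX"
os.environ["AVAIL_LLM_MODELS"] = (
    '["qwen-turbo", "openrouter-deepseek/deepseek-r1-0528:free"]'
)

# API_KEY must be visible and non-empty inside the container.
assert os.environ.get("API_KEY"), "API_KEY not visible in the container"

# List-valued settings must parse; a malformed string would raise here.
models = ast.literal_eval(os.environ["AVAIL_LLM_MODELS"])
print(models)
```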

What else do I need to adjust?

ZHDTZ avatar Jun 07 '25 02:06 ZHDTZ