Please add support for the gemini-2.5-pro-exp-03-25 model.
Adding it the same way as the existing Gemini models returns: 404: models/gemini-2.5-pro-exp-03-25 is not found for API version v1, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods. (request id: 2025032709262059954186105299807)
Use v1beta.
https://github.com/songquanpeng/one-api/issues/1977#issuecomment-2537773773
Sorry, I can't find where to set v1beta in one-api.
Actually you don't need to change anything: fill in the API key and the correct model name and it will work. A failed channel test doesn't matter and doesn't affect actual usage.
Use Gemini's OpenAI-compatible mode.
That's because the default API version is v1, while the OpenAI-compatible mode uses v1beta.
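To make the v1-vs-v1beta distinction concrete, here is a minimal sketch of how the native generateContent REST route is built for each API version; `generate_content_url` is a hypothetical helper for illustration, not part of one-api.

```python
def generate_content_url(model: str, api_version: str = "v1beta") -> str:
    """Build the Gemini generateContent REST URL for a given API version.

    Experimental models such as gemini-2.5-pro-exp-03-25 are typically only
    listed under v1beta, which is why requests against v1 can 404.
    """
    base = "https://generativelanguage.googleapis.com"
    return f"{base}/{api_version}/models/{model}:generateContent"

# The route that returned 404 in the report above used api_version="v1":
v1_url = generate_content_url("gemini-2.5-pro-exp-03-25", "v1")
# The working route goes through v1beta:
v1beta_url = generate_content_url("gemini-2.5-pro-exp-03-25")
```

If the upstream client only lets you set a full base URL, pointing it at the v1beta prefix achieves the same effect.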
Having a similar issue, but it looks like it just needs a working model.
In utils/call_llm.py I updated
#model = os.getenv("GEMINI_MODEL", "gemini-2.5-pro-exp-03-25")
model = os.getenv("GEMINI_MODEL", "gemini-2.0-flash")
from gemini-2.5-pro-exp-03-25 to gemini-2.0-flash
then run
python utils/call_llm.py
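The same switch can also be made without editing the file, since the snippet above reads the model name from the GEMINI_MODEL environment variable. A small sketch of that fallback pattern (`pick_model` is a hypothetical wrapper for illustration):

```python
import os

def pick_model(default: str = "gemini-2.0-flash") -> str:
    # Environment variable wins; otherwise fall back to the default,
    # mirroring os.getenv("GEMINI_MODEL", ...) in utils/call_llm.py.
    return os.getenv("GEMINI_MODEL", default)
```

So instead of changing the source, you could run, for example:
GEMINI_MODEL=gemini-2.0-flash python utils/call_llm.py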