
[feature] Support the free model glm-4-flash

Open • w1475963 opened this issue 10 months ago • 5 comments

w1475963 · Mar 14 '25 20:03

Welcome to GitAuto! 🎉 Click the checkbox below to generate a PR!

  • [x] Generate PR

@w1475963, you have 3 requests left in this cycle, which refreshes on 2025-04-06 01:47:00+00:00. If you have any questions or concerns, please contact us at [email protected].

gitauto-ai[bot] · Mar 14 '25 20:03

Thank you for the suggestion! I've recorded the feature request for supporting the free model glm-4-flash and will pass it along to the team for consideration. Please stay tuned for future updates.

🪧 Tip: For further assistance, please describe your question in the comments and mention @petercat-assistant to start a conversation with me.

petercat-assistant[bot] · Mar 14 '25 20:03

I've received your email~ Thanks for your hard work~

tsejx · Mar 14 '25 20:03

▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░ 65%

Got your request. Extracted metadata. Checked request limit: 3 requests left. Found 19 files across 3 directory levels. Found 1 configuration file:

  • package.json

Read 1 configuration file. Found 2 issue comments. Created a remote branch... Calling search_remote_file_contents() with {'query': 'model'}... Read lib/config.mjs... Read . Searched Google for glm-4-flash model API integration and went through the results. Modified index.mjs and committed the changes. Calling commit_changes_to_remote_branch() with {'file_path': 'index.mjs', 'commit_message': 'feat: add support for glm-4-flash model'} and the following diff:

```diff
@@ -23,7 +23,7 @@ const gradients = [
 
 export default async (word, options) => {
   console.log('');
-  const { iciba, deepseek, LLM_API_KEY } = options;
+  const { iciba, deepseek, glm, LLM_API_KEY } = options;
   const endcodedWord = encodeURIComponent(word);
 
   // iciba
@@ -119,6 +119,42 @@ export default async (word, options) => {
     }
   }
 
+  // glm-4-flash
+  if (isTrueOrUndefined(glm)) {
+    const openai = new OpenAI({
+      baseURL: 'https://api.zhipuai.cn/v1',
+      apiKey: LLM_API_KEY || '',
+    });
+
+    const model = 'glm-4-flash';
+
+    const spinner = ora(`正在请教 ${model}...`).start();
+    try {
+      const chatCompletion = await openai.chat.completions.create({
+        messages: [
+          {
+            role: 'system',
+            content: `你是一本专业的中英文双语词典。请按照以下要求提供翻译和解释:
+
+1. 格式要求:
+   [原词] [音标] ~ [翻译] [拼音] ~ [EMOJI]`,
+          },
+          {
+            role: 'user',
+            content: `请翻译:${word}`,
+          },
+        ],
+        model,
+        temperature: 1.3,
+      });
+      spinner.stop();
+      const randomGradient = gradients[Math.floor(Math.random() * gradients.length)];
+      console.log(gradient[randomGradient](chatCompletion.choices[0].message.content));
+    } catch (error) {
+      spinner.fail(`访问 ${model} 失败,请检查网络或 API 密钥`);
+    }
+  }
+
   function isTrueOrUndefined(val) {
     return val === true || val === undefined;
   }
```

gitauto-ai[bot] · Mar 17 '25 03:03
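The approach in the diff is simply to point the official OpenAI SDK at an OpenAI-compatible endpoint, which is why the glm path mirrors the existing deepseek path with only the `baseURL` and model name swapped. Below is a minimal standalone sketch of that pattern for anyone who wants to try it outside fanyi; the `ZHIPU_API_KEY` variable name is a placeholder, and the `baseURL` is copied from the bot's diff and should be verified against Zhipu's current documentation before use.

```js
// Minimal standalone sketch of the diff's pattern: chat with glm-4-flash
// through the OpenAI SDK pointed at an OpenAI-compatible endpoint.
// Save as try-glm.mjs and run with `node try-glm.mjs` (Node 18+, `npm i openai`).
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.zhipuai.cn/v1', // taken from the diff; verify against Zhipu's docs
  apiKey: process.env.ZHIPU_API_KEY ?? '', // placeholder env var, not a fanyi convention
});

const completion = await client.chat.completions.create({
  model: 'glm-4-flash',
  temperature: 1.3, // same sampling temperature the diff uses
  messages: [
    { role: 'system', content: 'You are a professional Chinese-English bilingual dictionary.' },
    { role: 'user', content: 'Translate: serendipity' },
  ],
});

console.log(completion.choices[0].message.content);
```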

I am implementing this feature in #175

GowayLee · May 13 '25 05:05

Hi, @w1475963. I'm Dosu, and I'm helping the fanyi team manage their backlog. I'm marking this issue as stale.

Issue Summary:

  • You requested adding support for the free model glm-4-flash.
  • The bot petercat-assistant acknowledged the request for consideration.
  • An automated code update by gitauto-ai integrated glm-4-flash support in index.mjs.
  • User GowayLee is implementing this feature in PR #175.
  • The issue is effectively resolved with the ongoing PR.

Next Steps:

  • Please confirm if this issue is still relevant with the latest version of fanyi.
  • If it is, feel free to keep the discussion open by commenting; otherwise, I will close this issue automatically in 7 days.

Thank you for your understanding and contribution!

dosubot[bot] · Aug 12 '25 16:08