
Feature request: ability to control `LLM` model & model parameters dynamically w/ variables

Open elliottower opened this issue 10 months ago • 3 comments

Self Checks

  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • [x] Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

Use case: an external API calls Dify and may want any of a large number of response formats, which Dify can't all know ahead of time. I would also like to dynamically change things like temperature, model version, etc.

Basically, this would allow programmatic configuration of the LLM nodes while still gaining the benefits of streaming, error handling, all the different model providers, etc.

The workaround now is to write manual Python or JavaScript code to make these LLM calls and take in those variables, but that's error prone and not as observable (e.g., for logging to Langfuse or other loggers).
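
For illustration, a minimal sketch of that code-node workaround, assuming Dify's `main(...) -> dict` code-node convention and an OpenAI-compatible chat completions endpoint; the endpoint URL, API key handling, and variable names are placeholders, not Dify specifics:

```python
import requests

def main(model: str, temperature: float, user_input: str) -> dict:
    # Model name and parameters arrive as workflow variables instead of
    # being fixed in an LLM node's configuration.
    resp = requests.post(
        "https://api.example.com/v1/chat/completions",     # placeholder endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
        json={
            "model": model,
            "temperature": temperature,
            "messages": [{"role": "user", "content": user_input}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return {"result": resp.json()["choices"][0]["message"]["content"]}
```

As noted above, this loses the LLM node's built-in streaming, provider abstraction, and trace logging, which is exactly why native support would help.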

2. Additional context or comments

Currently, prompt templates allow dynamic substitution using variables, but the model name and model arguments do not (and structured output / function calling is one of those model arguments).

Image

What would be ideal is if the settings shown below could be moved into the same section as above, where you can do variable substitution.

Image
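
To make the request concrete, here is a hypothetical sketch of an LLM node configuration in which the model settings accept the same {{variable}} substitution the prompt already supports; the field names are illustrative, not Dify's actual node schema:

```python
# Hypothetical LLM node configuration -- not Dify's real schema.
# Today only the prompt accepts {{variable}} substitution; the request
# is to extend it to the model name and model parameters as well.
llm_node_config = {
    "prompt": "Summarize this: {{input_text}}",  # supported today
    "model": "{{model_name}}",                   # requested: dynamic model
    "temperature": "{{temperature}}",            # requested: dynamic parameters
    "response_format": "{{response_format}}",    # requested: dynamic structured output
}
```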

3. Can you help us with this feature?

  • [ ] I am interested in contributing to this feature.

elliottower avatar Feb 25 '25 21:02 elliottower

The reason why we let you define an LLM node in the workflow is that we want to offer an orchestrated application rather than let you customize everything. This provides a stable way to produce output; for example, if you use a model that doesn't exist on our platform, you will get an error instead. We would also need to move validation logic to our service API. I think we might need some special empty LLM node that lets you pass in the parameters. We will see what we can do after the new version.

crazywoola avatar Feb 26 '25 01:02 crazywoola

Appreciate the response, and that makes sense; doing it with a code node is a decent workaround for now. Your idea of a future fully customizable empty LLM node would be amazing though, as real-world use cases often require specific customizations and dynamic behavior.

elliottower avatar Mar 11 '25 23:03 elliottower

Hi, @elliottower. I'm Dosu, and I'm helping the Dify team manage their backlog. I'm marking this issue as stale.

Issue Summary:

  • You requested a feature for dynamic control of LLM model parameters using variables.
  • Currently, this feature is limited to prompt templates, requiring manual coding for model names and arguments.
  • Crazywoola explained the focus on stability and suggested a future customizable empty LLM node.
  • You acknowledged the workaround and expressed enthusiasm for the proposed solution.

Next Steps:

  • Please let me know if this issue is still relevant to the latest version of the Dify repository by commenting here.
  • If there are no further updates, this issue will be automatically closed in 15 days.

Thank you for your understanding and contribution!

dosubot[bot] avatar Apr 11 '25 16:04 dosubot[bot]

@crazywoola is this feature planned?

ashishs avatar Aug 23 '25 17:08 ashishs