[Feature]: Support python expressions for model settings in `prompt` step
🔖 Feature description
Currently, the following task fails:

```yaml
main:
  - prompt:
      - role: user
        content: Hello
    settings:
      model: $ "gpt-4o-mini"
    unwrap: true
```
Error transition logs:

```
type: error
output: BadRequestError: litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': {'error': "/chat/completions: Invalid model name passed in model=$ 'gpt-4o-mini'. Call `/v1/models` to view available models for your key."}, 'type': 'None', 'param': 'None', 'code': '400'}}
```
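As the log shows, the `$ "gpt-4o-mini"` expression in `settings.model` is forwarded to litellm verbatim instead of being evaluated first. Presumably the fix is to run `settings` values through the same expression evaluation that other `$`-prefixed fields already get before the chat-completion request is built. Below is a minimal sketch of that idea; the helper names and the restricted `eval` are illustrative assumptions, not Julep's actual implementation (which sandboxes expression evaluation):

```python
from typing import Any

# Marker used for Python expressions in task definitions.
EXPR_PREFIX = "$ "


def evaluate_setting(value: Any, context: dict[str, Any]) -> Any:
    """Evaluate a single settings value if it is a $-prefixed expression.

    Hypothetical helper: a bare restricted eval() is used here only to
    keep the sketch self-contained; a real implementation would use a
    sandboxed evaluator.
    """
    if isinstance(value, str) and value.startswith(EXPR_PREFIX):
        expression = value[len(EXPR_PREFIX):]
        # Evaluate against the workflow context with builtins stripped.
        return eval(expression, {"__builtins__": {}}, dict(context))
    return value


def evaluate_settings(
    settings: dict[str, Any], context: dict[str, Any]
) -> dict[str, Any]:
    """Evaluate every value in a prompt step's settings dict."""
    return {key: evaluate_setting(val, context) for key, val in settings.items()}


if __name__ == "__main__":
    settings = {"model": '$ "gpt-4o-mini"', "temperature": 0.7}
    print(evaluate_settings(settings, context={}))
    # -> {'model': 'gpt-4o-mini', 'temperature': 0.7}
```

With something like this in place, `model: $ "gpt-4o-mini"` would evaluate to `gpt-4o-mini` before the litellm call, and the task above would succeed.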
🎤 Why is this feature needed ?
No response
✌️ How do you aim to achieve this?
No response
👀 Have you searched issues and PRs to see if this feature request has been raised before?
- [x] I checked and didn't find a similar issue