Miguel Neves
### Feature Request Tests are running in a batch but they are not being evaluated against the GT answer. The evaluation could be done using similarity metrics, an LLM, or even...
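One way the batch evaluation against the GT answer could work is a plain similarity metric. A minimal sketch, assuming a simple character-level ratio via Python's stdlib `difflib` (the function names and threshold here are illustrative, not the project's API):

```python
from difflib import SequenceMatcher

def similarity(output: str, ground_truth: str) -> float:
    """Character-level similarity ratio between a model output and the GT answer."""
    return SequenceMatcher(None, output.lower(), ground_truth.lower()).ratio()

def evaluate_batch(outputs, ground_truths, threshold=0.8):
    """Score every (output, GT) pair and flag which ones pass the threshold."""
    scores = [similarity(o, g) for o, g in zip(outputs, ground_truths)]
    return [(score, score >= threshold) for score in scores]
```

An LLM-as-judge or embedding-based metric could slot into the same `evaluate_batch` shape later.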
### Feature Since we have the backend to compare the outputs of several prompts for the same model, the same should be supported in the UI ### Motivation Make prompt...
### Feature Request Providers like OpenAI have some rate limits (things like a limit in the requests per minute). This feature would allow llm studio to wait it out (or...
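Waiting out a rate limit is usually done with exponential backoff. A minimal sketch, assuming a generic `RateLimitError` stand-in for the provider's error (e.g. HTTP 429); the helper name and retry policy are illustrative:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the provider's rate-limit error (e.g. HTTP 429)."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn, sleeping exponentially longer after each rate-limit hit."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter: base, 2*base, 4*base, ... plus noise.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

The jitter keeps many concurrent requests from retrying in lockstep.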
### Feature Support templates like this where the variables can be defined elsewhere, so as not to make the text too messy. """ I have to understand what is being...
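Keeping variables out of the prompt body could look like Python's stdlib `string.Template` substitution. A minimal sketch; the variable names and prompt text are illustrative, not from the issue:

```python
from string import Template

# Variables defined elsewhere, keeping the prompt text itself uncluttered.
variables = {"topic": "rate limits", "audience": "developers"}

prompt = Template(
    """I have to understand what is being asked about $topic
and answer in a way that $audience can follow."""
)

rendered = prompt.safe_substitute(variables)
```

`safe_substitute` leaves unknown `$placeholders` untouched instead of raising, which is forgiving for user-authored templates.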
Allow for defining the functions for Assistants in the UI. - [ ] Clean up with Lint - [ ] Generate the JSON OpenAI wants - [ ] Save these...
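The "generate the JSON OpenAI wants" step could be sketched as below: OpenAI's function definitions take a name, a description, and a JSON-Schema `parameters` object. The helper name and the `get_weather` example are hypothetical, not the project's code:

```python
import json

def function_to_json(name, description, parameters):
    """Build a function definition in the JSON-Schema shape OpenAI's API expects."""
    return json.dumps({
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": parameters,
            # Assumption for this sketch: treat every declared parameter as required.
            "required": list(parameters),
        },
    }, indent=2)

spec = function_to_json(
    "get_weather",
    "Look up the current weather for a city.",
    {"city": {"type": "string", "description": "City name"}},
)
```

A UI form could collect the name, description, and per-parameter types, then emit this JSON when saving.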
### Motivation Ease of use, so that people do not need to go to the OpenAI playground. ### Your contribution Discussion
### Motivation Allows for easy use of Assistants in our backend and, in the future, the UI. ### Your contribution Discussion
### Feature Request Logs of chained prompts should be grouped so they stay organized. ### Motivation When chaining prompts, the logs get very confusing as...