feat: function calling
Pull Request Checklist
- [x] Target branch: Pull requests should target the `dev` branch.
- [x] Description: Briefly describe the changes in this pull request.
- [x] Changelog: Ensure a changelog entry following the format of Keep a Changelog is added at the bottom of the PR description.
- [ ] Documentation: Have you updated relevant documentation (Open WebUI Docs or other documentation sources)?
- [ ] Dependencies: Are there any new dependencies? Have you updated the dependency versions in the documentation?
- [ ] Testing: Have you written and run sufficient tests for the changes?
- [x] Code Review: Have you self-reviewed your code and addressed any coding standard issues?
Description
I've implemented function calling, since this is one of the last "big" LLM-related features that Open WebUI has yet to implement.
Changelog Entry
Added
- Function calling (incl. settings menu)
- In-browser Monaco editor for TypeScript function editing
Fixed
(n/a)
Changed
(n/a)
Removed
(n/a)
Security
(n/a)
Breaking Changes
(n/a)
Additional Information
This PR also adds `monaco` as a dependency, since functions are written in the browser with first-class TypeScript support. I looked into writing my own editor; that's not an option (for me, at least).
Functions are written to `localStorage`; this is because I don't know Python, and I'd rather not write poor code for the backend.
There's still some work to be done (such as adding validation for parameter names, which can only contain characters valid in a JavaScript identifier; see the sketch below), but it's functional.
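For illustration, this is roughly the kind of check I mean; the helper name is hypothetical and not actually part of this PR:

```ts
// Hypothetical helper, not actually in this PR: reject parameter names that
// aren't valid JavaScript identifiers, since they end up as variables in the
// generated function. (Simplified to ASCII; real ECMAScript identifiers allow
// a wider Unicode range.)
const IDENTIFIER_RE = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

function isValidParameterName(name: string): boolean {
  return IDENTIFIER_RE.test(name);
}

isValidParameterName("city");     // true
isValidParameterName("2fast");    // false: cannot start with a digit
isValidParameterName("my param"); // false: spaces are not allowed
```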
demo: https://github.com/open-webui/open-webui/assets/62841684/70bcdd8c-d887-43f7-8f7b-54b5ac1dab31
Hi there, thanks for the nice work. I can indeed make a function, and using phi3 it was rather easy to make it call it, but it seems the code-editing feature doesn't save my changes. I was able, through the development console, to retrieve, modify, and set back my modified code.
I was then able to get my function called as you can see here.
My function should have returned a dummy value to the LLM like:
But it seems that it is not sent back to the LLM, and it doesn't take the response into account; I'm probably doing something wrong on that part.
But again, great work; the settings page etc. are cool. The storage should be server-side, but hey, that's the next step 👍 Good work. I can easily see how you could integrate this with other tools/APIs. Maybe a sort of template could be proposed, like an 'http' request that sends the request as-is somewhere else, à la webhooks or whatever other API the user has set up with Streamlit, FastAPI... whatnot.
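Something like this, maybe (purely a sketch of the idea, nothing that exists in the PR):

```ts
// Purely a sketch of the suggested template, not code from the PR:
// POST the model's arguments as-is to a user-configured endpoint, webhook-style.
async function httpForward(url: string, payload: Record<string, unknown>): Promise<string> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`request failed with status ${res.status}`);
  return res.text(); // raw response, left for the model (or the UI) to interpret
}
```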
@sebdg thanks for the response! You need to press "save and go back" or Ctrl+S in order for your code to actually be saved. This is a remnant from my old project, and I'm sure I could make it autosave on key press :)
Also, it seems your code has an error: at runtime it would not give the LLM a response, because you have not defined `param1` and `param2`. (You need to add them to the function's parameters; this is just TypeScript with all the bells and whistles.) If you add those, it'll work, as in the sketch below (but you may get weird results logged, since "param1" and "param2" are nondescript names).
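For example, something along these lines; the function name, body, and return convention here are placeholders rather than the exact shape the editor expects:

```ts
// Hypothetical example: declaring param1 and param2 as parameters puts them in
// scope at runtime instead of leaving them undefined. The body and return
// convention are placeholders; the editor's exact shape may differ.
function getWeather(param1: string, param2: string): string {
  return `dummy value computed from ${param1} and ${param2}`;
}
```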
Amazing stuff, I'll review in a bit 🙌
Not sure how ready this thing is, but I super appreciate it. If you reckon it's ready enough, then merging is up to you :)
There might be room for fixes based on this build log of mine: PRBuildLog.txt
I'm unsure what linting (I assume?) you're using, but it seems to be detecting problems in the `monaco` module, which it shouldn't; this isn't an issue with my code.
@not-nullptr this is excellent! My feedback after using this is that I wish there was a way to render the response from the API in open-webui directly, without passing it through the model first. For example, I have a function that fetches some JSON and formats a response as markdown.
Thanks! Though at that point, wouldn't you just want to use a separate program or something? It seems a bit unnecessary to run it through Open WebUI.
Having the model is nice for figuring out which endpoint to call and mapping the params, but I found that passing the response back through the model resulted in weird behavior (like Phi3 telling me that "a chance of Llamas" is not a real type of weather). Maybe this can be fixed with prompt engineering, though, so it's not really in scope?
I feel that's more of a prompt-engineering issue, like you said. Wrangling smaller models to output what you want can be tricky, though; I get what you mean...
Building your PR took 30-35 seconds longer than building the `dev` branch itself with the latest commits, and doing so also throws a few warnings that the use of `eval` in specific file paths is strongly discouraged, as it poses security risks and may cause issues with minification. Neither of these is an issue outside of this PR, even if the latter are simply linter warnings. I mean no harsh feelings; this is just my two cents. My intention is not to discourage, but rather to provide constructive feedback. I am personally hoping for function calling to make its way into Open WebUI myself. I simply hope that any issues with the PR can be addressed before it is reviewed and potentially merged.
`eval` is used in order to actually execute the function being run. All code executed is written by the user, so it's not unsafe. The higher build times are probably due to Monaco being added as a dependency, as it isn't very light. This current method is the only way to get TypeScript IntelliSense in the browser, AFAIK.
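Roughly speaking, the flow looks like this; the names below are hypothetical, and it assumes the stored source has already been compiled from TypeScript to plain JavaScript:

```ts
// Hypothetical sketch, not the PR's actual code: load user-written source from
// localStorage and execute it with eval. Assumes the TypeScript has already
// been compiled to plain JavaScript before being stored.
function runUserFunction(storageKey: string, args: unknown[]): unknown {
  const source = localStorage.getItem(storageKey);
  if (source === null) {
    throw new Error(`no function stored under "${storageKey}"`);
  }
  // Wrapping in parentheses turns a function declaration into a function
  // expression, so eval returns the function itself.
  const fn = eval(`(${source})`) as (...fnArgs: unknown[]) => unknown;
  return fn(...args);
}
```

Since the stored source is written by the same user whose browser executes it, this lines up with the "not unsafe" point above; the linter warnings are about `eval` in general rather than this particular use.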
Does it not work with the OpenAI API, or am I doing something wrong?
Nope, custom method; apologies (though this is a pretty good idea).
As much as I want to see function calling added quickly, since everyone else uses the openAI API standard it makes sense that it should be implemented here. If it's added later it'll break a bunch of things and the project might be forced to support a deprecated API for a while to come. This way you also gain compatibility with all the programs already out there as well.
All function calling in this PR occurs on the client.
I agree with @not-nullptr, I'll be taking a look in a bit to try to have this merged for our next release.
We can always add OpenAI API compatible function calling feature later, and have the best of the both worlds.
mega appreciate it
All function calling in this PR occurs on the client.
Apologies, I didn't notice. In that case I'd love to see this added!
I'll be closing this in favour of Pipelines Plugin function calling support, but I might cherry-pick some changes here to support browser-side JS function calling later down the line. Thank you for all your hard work, @not-nullptr! I've added you as a co-author for v0.2.0 in recognition of your inspiring contributions :)
https://github.com/open-webui/open-webui/pull/798#issuecomment-2143593842
https://github.com/open-webui/pipelines/blob/main/examples/function_calling/function_calling_filter_pipeline.py
Pipelines is a more robust system, but I love having a web-based editor. In the future, could we perhaps get a frontend for pipelines (with an editor), either as part of pipeline deployment or as part of the admin settings in the WebUI? It'd streamline testing and deployment!
I am more than happy to implement everything else if someone can get a code editor with completion working in the browser for Python.
Coming soon! https://github.com/open-webui/open-webui/issues/2825
I'll let you know where we could use some help once I set up the scaffolding! (Perhaps the reintroduction of JS function calling!)