
feat: function calling

not-nullptr opened this issue 9 months ago • 19 comments

Pull Request Checklist

  • [x] Target branch: Pull requests should target the dev branch.
  • [x] Description: Briefly describe the changes in this pull request.
  • [x] Changelog: Ensure a changelog entry following the format of Keep a Changelog is added at the bottom of the PR description.
  • [ ] Documentation: Have you updated relevant documentation Open WebUI Docs, or other documentation sources?
  • [ ] Dependencies: Are there any new dependencies? Have you updated the dependency versions in the documentation?
  • [ ] Testing: Have you written and run sufficient tests for the changes?
  • [x] Code Review: Have you self-reviewed your code and addressed any coding standard issues?

Description

I've implemented function calling, since this is one of the last "big" LLM-related features that Open WebUI has yet to implement.


Changelog Entry

Added

  • Function calling (incl. settings menu)
  • In-browser Monaco editor for TypeScript function editing

Fixed

(n/a)

Changed

(n/a)

Removed

(n/a)

Security

(n/a)

Breaking Changes

(n/a)


Additional Information

This PR also adds Monaco as a dependency, since functions are written in the browser with first-class TypeScript support. I looked into writing my own editor, but that is not an option (for me, at least).

Functions are written to localStorage; this is because I don't know Python and I'd rather not write poor code for the backend.
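As a rough sketch of what that client-side persistence could look like (the key name and record shape here are hypothetical, not necessarily this PR's actual schema), with the storage backend abstracted so the same logic works against `localStorage` or any key-value store:

```typescript
// Hypothetical sketch of persisting user-defined functions client-side.
// The key name and record shape are illustrative, not this PR's schema.
interface StoredFunction {
  name: string;
  code: string; // TypeScript source authored in the Monaco editor
}

// Minimal subset of the Storage interface, so the logic can be driven by
// window.localStorage in the browser or by a stub in tests.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "functions"; // assumed key name

function saveFunctions(store: KVStore, fns: StoredFunction[]): void {
  store.setItem(STORAGE_KEY, JSON.stringify(fns));
}

function loadFunctions(store: KVStore): StoredFunction[] {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as StoredFunction[]) : [];
}
```

In the browser, `window.localStorage` already satisfies this interface, so a call site would look like `saveFunctions(localStorage, fns)`.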

There's still some work to be done (such as adding validation for parameter names; they may only contain characters that are valid in a JavaScript identifier), but it's functional.
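That validation could be as small as a regex check. This is a simplified, ASCII-only sketch; the full JavaScript identifier grammar also permits certain Unicode characters, so it's an approximation rather than a spec-complete check:

```typescript
// Simplified check that a parameter name is a valid JavaScript identifier.
// ASCII-only approximation; the real grammar also allows Unicode ID characters.
const IDENTIFIER_RE = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

function isValidParamName(name: string): boolean {
  return IDENTIFIER_RE.test(name);
}
```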

demo: https://github.com/open-webui/open-webui/assets/62841684/70bcdd8c-d887-43f7-8f7b-54b5ac1dab31

not-nullptr avatar May 11 '24 01:05 not-nullptr

Hi there, thanks for the nice work. I can indeed create a function, and using phi3 it was rather easy to make it call it, but it seems the code-editing feature doesn't save my changes. Through the development console I was able to retrieve, modify, and set back my modified code, and my function was then called, as you can see here: (screenshot)

My function should have returned a dummy value to the LLM like this: (screenshot) But it seems that value is not sent back to the LLM, and it doesn't take the response into account. I'm probably doing it wrong on that part.

But again, great work; the settings page etc. are cool. The storage should be server-side, but hey, that's the next step 👍 Good work. I can easily see how you could integrate this with other tools/APIs. Maybe a sort of template could be offered, like an 'http' template that forwards the request as-is somewhere else, à la webhooks, or to whatever other API the user has set up with Streamlit, FastAPI... whatnot.

sebdg avatar May 12 '24 05:05 sebdg

@sebdg thanks for the response! You need to press "save and go back" or Ctrl+S for your code to actually be saved. This is a remnant from my old project, and I'm sure I could make it autosave on key press :)

Also, it seems your code has an error: at runtime it would not give the LLM a response, because you have not defined params1 and params2 (you need to add them to the function's parameters; this is just TypeScript with all the bells and whistles).

If you add those, it'll work (but you may get weird results logged, since "param1" and "param2" are nondescript names).
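To illustrate the point about declaring parameters (a hypothetical example, not sebdg's actual code): every value the model is supposed to supply must appear in the function's parameter list, where TypeScript can also type-check it:

```typescript
// Hypothetical user-written function: param1 and param2 only receive values
// at call time because they are declared in the parameter list.
function demoFunction(param1: string, param2: number): string {
  // Dummy response handed back to the LLM.
  return `received ${param1} and ${param2}`;
}
```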

not-nullptr avatar May 12 '24 10:05 not-nullptr

Amazing stuff, I'll review in a bit 🙌

tjbck avatar May 13 '24 21:05 tjbck

Amazing stuff, I'll review in a bit 🙌

Not sure how ready this thing is, but I super appreciate it. If you reckon it's ready enough, then merging is up to you :)

not-nullptr avatar May 13 '24 21:05 not-nullptr

There might be room for fixes based on this build log of mine: PRBuildLog.txt

silentoplayz avatar May 16 '24 14:05 silentoplayz

There might be room for fixes based on this build log of mine: PRBuildLog.txt

I'm unsure what linting (I assume?) you're using, but it seems to be detecting problems in the monaco module, which it shouldn't do. This isn't an issue with my code.

not-nullptr avatar May 16 '24 14:05 not-nullptr

@not-nullptr this is excellent! My feedback after using this is that I wish there was a way to render the response from the API in open-webui directly, without passing it through the model first. For example, I have a function that fetches some JSON and formats a response as markdown.

audy avatar May 16 '24 15:05 audy

Thanks! Though at that point, wouldn't you just want to use a separate program or something? It seems a bit unnecessary to run it through Open WebUI.

not-nullptr avatar May 16 '24 15:05 not-nullptr

Thanks! Though at that point, wouldn't you just want to use a separate program or something? It seems a bit unnecessary to run it through Open WebUI.

Having the model is nice for figuring out which endpoint to call and mapping the params, but I found that passing the response back through the model resulted in weird behavior (like Phi3 telling me that "a chance of Llamas" is not a real type of weather). Maybe this can be fixed with prompt engineering, though, so not really in scope?

audy avatar May 16 '24 15:05 audy

I feel that's more of a prompt-engineering issue, like you said. Wrangling smaller models into outputting what you want can be tricky, though; I get what you mean...

not-nullptr avatar May 16 '24 15:05 not-nullptr

There might be room for fixes based on this build log of mine: PRBuildLog.txt

I'm unsure what linting (I assume?) you're using, but it seems to be detecting problems in the monaco module, which it shouldn't do. This isn't an issue with my code.

Building your PR took 30-35 seconds longer than building the dev branch itself with the latest commits, and the build also throws a few warnings that the use of eval in specific file paths is strongly discouraged, as it poses security risks and may cause issues with minification. Neither of these occurs outside of this PR, even if the latter are simply linter warnings. I mean no harsh feelings; this is just my two cents. My intention is not to discourage but to provide constructive feedback: I'm personally hoping for function calling to make its way into Open WebUI myself. I simply hope these issues can be addressed before the PR is reviewed for a possible merge.

silentoplayz avatar May 17 '24 10:05 silentoplayz

eval is used in order to actually execute the user's function. All executed code is written by the user themselves, so it's not unsafe. The higher build times are probably due to Monaco being added as a dependency, as it isn't very light. This is the only way to get TypeScript IntelliSense in the browser, AFAIK.
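As a rough sketch of how eval-based execution of user code can work (names are illustrative, not this PR's actual runtime): the user's source is wrapped in parentheses so it parses as a single expression, evaluated, and then called with the model-supplied arguments:

```typescript
// Illustrative eval-based dispatch for user-authored functions.
// The user wrote the source themselves, so evaluating it only runs
// code the user already controls.
function runUserFunction(source: string, args: unknown[]): unknown {
  // Parentheses force the source to parse as a single expression.
  const fn = eval(`(${source})`) as (...a: unknown[]) => unknown;
  return fn(...args);
}
```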

not-nullptr avatar May 17 '24 10:05 not-nullptr

Does it not work with the OpenAI API, or am I doing something wrong?

Stargate256 avatar May 18 '24 11:05 Stargate256

Does it not work with the OpenAI API, or am I doing something wrong?

Nope, it's a custom method; apologies (though that's a pretty good idea).

not-nullptr avatar May 20 '24 14:05 not-nullptr

As much as I want to see function calling added quickly, since everyone else uses the OpenAI API standard, it makes sense that it should be implemented here too. If it's added later, it'll break a bunch of things, and the project might be forced to support a deprecated API for a while to come. This way you also gain compatibility with all the programs already out there.
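For context, the OpenAI-style interface referred to here describes each callable tool as a JSON Schema object attached to the request; roughly this shape (the weather function is a made-up example for illustration):

```typescript
// Sketch of an OpenAI-style tool definition: a function described by name,
// description, and a JSON Schema for its parameters. The example function
// and fields are made up for illustration.
const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["city"],
    },
  },
};
```

A compatible server would advertise tools in this shape and return tool calls in the assistant message; this PR instead implements its own client-side mechanism.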

hchasens avatar May 20 '24 18:05 hchasens

As much as I want to see function calling added quickly, since everyone else uses the OpenAI API standard, it makes sense that it should be implemented here too. If it's added later, it'll break a bunch of things, and the project might be forced to support a deprecated API for a while to come. This way you also gain compatibility with all the programs already out there.

All function calling in this PR occurs on the client.

not-nullptr avatar May 20 '24 19:05 not-nullptr

I agree with @not-nullptr, I'll be taking a look in a bit to try to have this merged for our next release.

We can always add an OpenAI-API-compatible function calling feature later, and have the best of both worlds.

tjbck avatar May 20 '24 19:05 tjbck

mega appreciate it

not-nullptr avatar May 20 '24 19:05 not-nullptr

All function calling in this PR occurs on the client.

Apologies, I didn't notice. In that case I'd love to see this added!

hchasens avatar May 23 '24 18:05 hchasens

I'll be closing this in favour of Pipelines plugin function calling support, but I might cherry-pick some changes here to support browser-side JS function calling later down the line. Thank you for all your hard work, @not-nullptr! I've added you as a co-author for v0.2.0 in recognition of your inspiring contributions :)

https://github.com/open-webui/open-webui/pull/798#issuecomment-2143593842

https://github.com/open-webui/pipelines/blob/main/examples/function_calling/function_calling_filter_pipeline.py

tjbck avatar Jun 02 '24 01:06 tjbck

Pipelines is a more robust system, but I love having a web-based editor. In the future, could we perhaps get a frontend for pipelines (with an editor), either as part of pipeline deployment or as part of the admin settings in the WebUI? It'd streamline testing and deployment!

hchasens avatar Jun 06 '24 22:06 hchasens

Pipelines is a more robust system, but I love having a web-based editor. In the future, could we perhaps get a frontend for pipelines (with an editor), either as part of pipeline deployment or as part of the admin settings in the WebUI? It'd streamline testing and deployment!

I am more than happy to implement everything else if someone can get a code editor with completion working in the browser for Python.

not-nullptr avatar Jun 06 '24 22:06 not-nullptr

Coming soon! https://github.com/open-webui/open-webui/issues/2825

I'll let you know where we could use some help once I set up the scaffolding! (Perhaps the reintroduction of JS function calling!)

tjbck avatar Jun 06 '24 22:06 tjbck