
ChatGPT Plugin Functionality

Open Icemaster-Eric opened this issue 1 year ago • 6 comments

Describe your changes

Added ChatGPT-style plugin functionality to the Python bindings for GPT4All. The existing codebase has not been modified much: the only change to gpt4all.py is the addition of a plugins parameter in the GPT4All class, which takes an iterable of strings, registers each plugin URL, and generates the final plugin instructions. I've added the plugin functions in a new plugins.py file, along with some tests in the tests folder.
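
In rough outline, the new surface looks something like this (a minimal sketch with illustrative stand-in helper names, not the actual code; the real logic is in plugins.py and the gpt4all.py diff):

from typing import Iterable

def register_plugin(url: str) -> dict:
    # Stub: the real helper also fetches and parses the plugin's manifest.
    return {"url": url}

def build_plugin_instructions(plugins: list) -> str:
    # Stub: renders the registered plugins into a prompt preamble.
    return "\n".join(f"You can call the plugin at {p['url']}." for p in plugins)

class GPT4AllSketch:
    def __init__(self, model_name: str, plugins: Iterable[str] = ()):
        self.model_name = model_name
        self.plugins = [register_plugin(u) for u in plugins]
        # Mirrors the model.plugin_instructions attribute this PR adds.
        self.plugin_instructions = build_plugin_instructions(self.plugins)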

Issue ticket number and link

#1391

Checklist before requesting a review

  • [x] I have performed a self-review of my code.
  • [x] If it is a core feature, I have added thorough tests.
  • [x] I have added thorough documentation for my code.
  • [x] I have tagged PR with relevant project labels. I acknowledge that a PR without labels may be dismissed.
  • [ ] If this PR addresses a bug, I have provided both a screenshot/video of the original bug and the working solution.

Demo

plugin-demo

Installation / Usage

Build this repo from source locally using the build instructions.

Usage is simple: add the model.plugin_instructions string to the prompt, then pass the model's output to the model.get_plugin_response function, which returns a string containing the plugin response details. Finally, give the plugin response back to the model in a second prompt to get the final output.

Plugin URLs should usually end with /.well-known/ai-plugin.json, as that is the path that serves the plugin's manifest.
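
For reference, fetching such a manifest can be sketched as follows (illustrative only; the bindings' actual fetch logic lives in plugins.py):

import json
from urllib.request import urlopen

def fetch_manifest(plugin_url: str) -> dict:
    # Download and parse the plugin's ai-plugin.json manifest.
    if not plugin_url.endswith("/.well-known/ai-plugin.json"):
        plugin_url = plugin_url.rstrip("/") + "/.well-known/ai-plugin.json"
    with urlopen(plugin_url) as resp:  # network call: plugins require internet access
        return json.loads(resp.read())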

Example Code

from gpt4all import GPT4All

# The plugins parameter registers each plugin URL and builds the
# instructions exposed as model.plugin_instructions.
model = GPT4All(
    "GPT4All-13B-snoozy.ggmlv3.q4_1.bin",
    plugins=("https://chatgpt-plugins.replit.app/openapi/weather-plugin",)
)

while True:
    prompt = input("\nPrompt: ")

    output = ""

    # First pass: include the plugin instructions so the model can emit a plugin call.
    for token in model.generate(f"""### System:
{model.plugin_instructions}

### Human:
{prompt}

### Assistant:
""", temp=0, max_tokens=256, streaming=True):
        print(token, end="", flush=True)
        output += token

    print()

    # Execute the plugin call the model produced; returns the response details as a string.
    plugin_response = model.get_plugin_response(output)

    # Second pass: feed the plugin response back to the model to compose the final answer.
    for token in model.generate(f"""### System:
{plugin_response}

### Human:
{prompt}

### Assistant:
""", temp=0, max_tokens=128, streaming=True):
        print(token, end="", flush=True)

Notes

This does not work with ChatGPT's restricted plugins. Currently, the 13B GPT4All-snoozy model can generate plugin calls correctly, and most similarly sized models should be able to do the same. However, the complexity of the returned data has a big impact on the final response, so it is recommended to trim the information returned by plugins to the minimum required where possible.
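
For example, a trimming helper could look something like this (purely illustrative; which keys to keep depends on the plugin):

import json

def trim_plugin_response(raw: str, keep: set) -> str:
    # Keep only whitelisted top-level keys to save context in the second prompt.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # not JSON; pass through unchanged
    if isinstance(data, dict):
        data = {k: v for k, v in data.items() if k in keep}
    return json.dumps(data)

# e.g. trim_plugin_response(plugin_response, {"temperature", "conditions"})
# (hypothetical key names for a weather plugin)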

Icemaster-Eric avatar Sep 14 '23 03:09 Icemaster-Eric

What's it going to take to get this reviewed? We need it...

posix4e avatar Sep 16 '23 08:09 posix4e

Sorry, but I don't think it's me who should be reviewing this.

cosmic-snow avatar Sep 17 '23 00:09 cosmic-snow

Taking a look, @Icemaster-Eric. I'm really excited about supporting plugin calling against OpenAI's plugin spec, but I need to think about some edge cases:

  1. Prompt templates differ by model (you can access the right one in the attributes of the GPT4All object), but this code currently uses a hardcoded System/Human/Assistant template; see the sketch after this list.
  2. Plugins inherently require internet access. It should be the case that no network calls are made when plugins are not being used.
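
For point 1, a sketch of what using the model's own template might look like (the config["promptTemplate"] attribute and %1 placeholder are assumptions based on the bindings' models.json, not code from this PR):

def render_prompt(model, user_prompt: str) -> str:
    # Fill the model-specific template instead of hardcoding System/Human/Assistant.
    template = model.config.get("promptTemplate", "%1")
    return template.replace("%1", user_prompt)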

I'll be giving this a deeper review this week and floating it around in the Discord for discussion.

AndriyMulyar avatar Sep 17 '23 23:09 AndriyMulyar

Thanks for the suggestions!

  1. Will update code accordingly, thanks.
  2. If no plugins are provided in the plugins parameter, it won't make any network calls. However, letting the model decide whether it needs to use a plugin currently relies on having the plugin spec in the prompt. I believe this shouldn't be an issue, but adding a feature that turns the plugin functionality on or off could be useful, so as not to waste context; a rough sketch follows.
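
Something like this hypothetical switch is what I mean (illustrative only, not code from this PR):

class PluginRegistry:
    def __init__(self, urls=(), enabled=True):
        self.urls = list(urls)
        # Active only when URLs were supplied and the switch is on.
        self.enabled = enabled and bool(self.urls)

    def instructions(self) -> str:
        if not self.enabled:
            return ""  # nothing fetched, no context spent on plugin specs
        return "\n".join(f"Plugin available: {u}" for u in self.urls)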

Icemaster-Eric avatar Sep 18 '23 00:09 Icemaster-Eric

Any updates here? Lack of browsing and plugin support is the main thing keeping me on ChatGPT for now. Adding this here would be huge.

christianjuth avatar Jan 21 '24 21:01 christianjuth

It seems like this PR will never be finished. I prefer the approach in https://github.com/ggerganov/llama.cpp/pull/5695, which relies on a model-specific template to include function calls in the middle of a conversation rather than as the model's sole response.

cebtenzzre avatar May 05 '24 03:05 cebtenzzre