
feat: Add support for Ollama

Open · estib-vega opened this issue 1 year ago · 5 comments

Added some rudimentary support for configuring GitButler to use Ollama as an AI generation endpoint.

Also:

  • Add $lib to the allow-listed custom top-level directories
  • Typo fix: assisstant -> assistant
  • Create an errors utility module
  • Expand the AI client interface: optional specialized branch name generator method

estib-vega · May 09 '24 19:05

Hi there 👋

First PR. I saw in Discord that you might not want to bake Ollama into the app yet, as it's a bit limited and slow, but I wanted a nice first task to dive into the codebase, and I had been thinking about this for a while.

It's a bit rudimentary and might be missing some things, but I'm happy to work on any feedback you may have or just have it sit here for other people to experiment with/expand upon.

estib-vega · May 09 '24 19:05

Added a little fix to the way the optional dedicated branchName method is handled.

estib-vega · May 09 '24 19:05

I tried codegemma:instruct with small commits and it works relatively well. This of course needs to be tested more thoroughly and will not be deterministic at all, but I think it's a nice enough experimental feature to have.

estib-vega · May 09 '24 19:05

This is how the simple settings would look. One thing to consider would be adding more optional parameters to set there, e.g. temperature.

[Screenshot: settings]

And this is the commit generation unedited (and the branch name generation in the background). I'm running it on my M2 MacBook Air (16 GB), and with a smaller commit it takes between 5 and 7 seconds to generate good enough commit messages.

[Screenshot: commit-generation]
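For context, the request against the local Ollama endpoint is roughly along these lines; an optional temperature would just slot into the options object. This is only a sketch with illustrative values (model, default temperature, helper name), not the exact code in this PR:

// Sketch of a non-streaming chat request against a local Ollama server.
interface OllamaChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

async function ollamaChat(messages: OllamaChatMessage[], temperature = 0.7): Promise<string> {
  const response = await fetch('http://127.0.0.1:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'codegemma:instruct',
      messages,
      stream: false,
      options: { temperature }
    })
  });
  const body = await response.json();
  // With stream: false, the reply comes back under message.content.
  return body.message.content;
}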

estib-vega · May 09 '24 20:05

tst

RameshkumarBaskaran · May 10 '24 11:05

This is pretty awesome. I've been wanting to implement this for a while, since I've been playing with ollama too. I'll try it out.

schacon · May 11 '24 07:05

Hey @estib-vega, this is pretty cool, and overall a great big thanks for the contribution.

I feel like there are a few areas of improvement:

AIClient and AIService concerns

You've introduced some branch- and commit-specific code into the AIClient. We've tried to keep the AIClients as quite thin, general-purpose abstractions over the underlying providers. Ideally, evaluate should be able to be given any prompt you could ever ask an LLM, and it should handle it for you.

Currently the AIService is responsible for orchestrating all the clients and providing the appropriate prompts for different topics. This is in part to support our future goal of allowing the end-user to enter their own prompts, and to achieve that we're going to need all our prompt construction to be centralised in one place.
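To make the split concrete, this is roughly the shape we're after (a sketch with illustrative method names, not the actual code in the repo): the client only knows how to evaluate a prompt, and the service owns all prompt construction:

// Sketch: a client is a thin adapter that turns any prompt into a completion.
interface AIClient {
  evaluate(prompt: string): Promise<string>;
}

// Sketch: the service builds the prompts for each topic and delegates to whichever client is configured.
class AIService {
  constructor(private client: AIClient) {}

  async summarizeCommit(diff: string): Promise<string> {
    // Prompt templates live here, centralised, so user-supplied prompts can slot in later.
    const prompt = `Write a concise commit message for the following diff:\n\n${diff}`;
    return await this.client.evaluate(prompt);
  }

  async summarizeBranch(diff: string): Promise<string> {
    const prompt = `Suggest a short, kebab-case branch name for the following diff:\n\n${diff}`;
    return await this.client.evaluate(prompt);
  }
}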

Inclusion of pnpm-lock and .eslintrc changes

I totally get that when setting up a new project there are sometimes settings you need to tweak, but it would be great if you could ignore those changes locally, or move them into their own vbranch so you don't commit them.

If it's helpful, my VS Code workspace settings are as follows:

{
  "eslint.validate": ["svelte"],
  "eslint.workingDirectories": ["app"],
  "editor.formatOnSave": true,
  "prettier.configPath": "app/.prettierrc",
  "prettier.useEditorConfig": false,
  "svelte.enable-ts-plugin": true,
  "svelte.plugin.svelte.defaultScriptLanguage": "ts",
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "typescript.preferences.importModuleSpecifier": "non-relative",
  "javascript.preferences.importModuleSpecifier": "non-relative",
  "[rust]": {
    "editor.defaultFormatter": "rust-lang.rust-analyzer"
  },
  "[typescript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[json]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  }
}

Code style

There were a few minor details I noticed that aren't encoded in the eslint/tslint rules:

  • We prefer == over ===
  • Always prefer async/await over .then()
  • Prefer non-relative imports; $lib/ai/ollamaClient over ./ollamaClient

I also noticed a few functions which were direct aliases for throw new Error() and JSON.stringify. There is occasionally a time and a place for such things, but I'm not sure they are particularly helpful in terms of code clarity in this case.
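To illustrate what I mean, a hypothetical example (not your actual code):

// Wrappers like these don't add much over calling the built-ins directly:
function raise(message: string): never {
  throw new Error(message);
}

function toJson(value: unknown): string {
  return JSON.stringify(value);
}

// At the call site the inlined form reads just as clearly:
//   raise('unexpected response')   vs   throw new Error('unexpected response')
//   toJson(payload)                vs   JSON.stringify(payload)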


I've gone and made a few changes in a separate branch if you want to take a look: https://github.com/gitbutlerapp/gitbutler/tree/ollama-support-suggestions

You'll notice I've changed over to using only our regular commit and branch template. When using llama3:8b at least, it seems to generate responses in line with what we get from OpenAI and Anthropic.

Caleb-T-Owens · May 11 '24 13:05

Hey @Caleb-T-Owens, thank you for taking the time to give me detailed feedback. It's really helpful!

Here are just some thoughts I have:

Inclusion of pnpm-lock and .eslintrc changes

pnpm-lock: You're 100% right 😅. It felt bad after I committed it haha. I was just confused about it not being "gitignored" and didn't know if I was supposed to commit it or not.

Thanks for the vscode config file! That helped the linter to calm down haha.

AIClient and AIService concerns

Ideally, evaluate should be able to be given any prompt you could ever ask an LLM and it should handle it for you.

I see. I agree that (especially for the commit message generation) what I implemented is a bit of an over-engineered approach, but I'd still argue it's something that is very valuable (maybe necessary) in certain cases, when using smaller models.

Disclaimer: This next part is based only on empirical data. If you still feel strongly against my approach, I don't mind taking it out 😊

I've noticed that it's a bit harder to get the local, smaller models to follow a given response pattern without some extra guard-rails, as in the case of the branch name generation.

I based my approach on this video from Matt Williams (Ollama), in which he demonstrates how to increase the consistency of the outputs in the desired format by meta-prompting a bit.

This is how the name generation looks when trying to generate the branch name with no meta-prompting (using your branch with llama3:8b). Notice the backticks.

[Screenshot: raw]

And this is with the layer of meta-prompting on top (same model).

[Screenshot: meta-prompting]
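Roughly, the guard-rails just mean seeding the conversation with a worked example before the real request, something along these lines (a sketch with made-up example content, not the exact code in this PR):

interface PromptMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Sketch: prepend an example exchange so the smaller model has a concrete
// pattern to imitate, then append the real request unchanged at the end.
function withBranchNameGuardRails(diff: string): PromptMessage[] {
  const instruction =
    'Suggest a branch name for this diff. Reply with only the name, ' +
    'lower-case and dash-separated, with no quotes or backticks.';
  return [
    { role: 'user', content: `${instruction}\n\n<example diff>` },
    { role: 'assistant', content: 'add-ollama-settings-panel' },
    { role: 'user', content: `${instruction}\n\n${diff}` }
  ];
}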

This is in part to help our future goal of being able to allow the end-user to enter their own prompts, and to achieve that we're going to need our prompt construction to all be centralised in one place.

I don't think one cancels out the other, necessarily. In my opinion, there is nothing stopping the user from constructing their own prompts. The prompts themselves are still being passed down to the models unchanged, just with some extra meta-sprinkles on top. Plus, the user would probably need to be able to specify different prompts for different actions anyway.

In any case, as I said before, if you still feel strongly against it, I get that 👌.

Code style

We prefer == over ===

😮. Interesting, why is that?

Always prefer async/await over .then()

Fair enough 👍

Prefer non-relative imports; $lib/ai/ollamaClient over ./ollamaClient

Fair enough 👍

I also noticed a few functions which were direct aliases for throw new Error() and JSON.stringify. There is occasionally a time and a place for such things, but I'm not sure they are particularly helpful in terms of code clarity in this case.

Fair enough 👍

I'll make some more changes on top to address your feedback, and please let me know what you think about my comments 😊

estib-vega · May 14 '24 09:05

I've noticed that it's a bit harder to get the local, smaller models to follow a given response pattern without some extra guard-rails, as in the case of the branch name generation.

I see, very interesting! Those backticks are certainly not ideal.

I had a go asking it to write some poetry, which it did pretty well with the extra Assistant and User messages in the context. But I was still feeling a little uneasy about having a second branchName method, so I had a go at passing a list of PromptMessages to the evaluate method and letting each model specify what the defaults are instead:

https://github.com/gitbutlerapp/gitbutler/compare/master...ollama-support-suggestions (I also added a commit to fix the shuffle function)
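Roughly the idea (a sketch with illustrative names; the real code is in the branch linked above): each client carries its own default message templates, and evaluate just takes whatever message list it is handed:

interface PromptMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface AIClient {
  // Each client can specify its own default example messages...
  defaultBranchTemplate: PromptMessage[];
  defaultCommitTemplate: PromptMessage[];
  // ...and evaluate stays general purpose, taking whatever message list it is given.
  evaluate(prompt: PromptMessage[]): Promise<string>;
}

// The service then picks the client's defaults and appends the actual request:
async function generateBranchName(client: AIClient, diff: string): Promise<string> {
  const messages: PromptMessage[] = [
    ...client.defaultBranchTemplate,
    { role: 'user', content: diff }
  ];
  return await client.evaluate(messages);
}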

What do you think?

Caleb-T-Owens · May 15 '24 07:05

I had a go asking it to write some poetry

Haha cool! :)

I was still feeling a little uneasy about having a second branchName method

I like that approach 👍. I think it's a good middle ground: it doesn't change the signature much but still keeps the guardrails in place. Also, it's cleaner the way you moved it around, I feel.

I'm good with this if you are :)

estib-vega · May 15 '24 15:05

I'm happy with the code.

@PavelLaptev, I just wanted to check you were happy with the UI. The "Coming soon" has been taken out and replaced with the Ollama option.

[Screenshot 2024-05-15 at 19:56:27: AI provider settings showing the Ollama option]

Caleb-T-Owens · May 15 '24 18:05

@Caleb-T-Owens looks good! I like the emoji 🦙 Thank you @estib-vega! 🙂

PavelLaptev · May 16 '24 00:05