Reactive-Resume
[Feature] Use Ollama
Is there an existing issue for this feature?
- [X] Yes, I have searched the existing issues and it doesn't exist.
Feature Description
It would be awesome to be able to use Ollama instead of OpenAI to integrate AI into Reactive Resume.
I'm not sure how much work this would entail, but I wanted to get the ball rolling.
I've been exploring Ollama a lot more since you brought it to my attention, and it's quite amazing what it can do. However, running a self-hosted instance of Ollama on a server can get pretty expensive. I've been using it more and more locally on my Mac, though, so thank you for the suggestion! :)
Awesome! Yeah, ideally it would be on a user who self-hosts to run it and provide an `OLLAMA_HOST` environment variable, but it would be pretty sweet if there were support for it some day 😁
Man, it would be night and day for me if this feature gets released. I have my own Ollama server, and having the freedom to choose the model opens up so many possibilities for the tool. I hope you keep testing it out, and maybe we'll see it in an upcoming release.
Ollama provides experimental compatibility with parts of the OpenAI API to help connect existing applications to Ollama.
If the user could set the OpenAI URL (just as they can already set the API key under the OpenAI Integration section), it would make connecting to Ollama possible. Another good reason for letting the user (optionally) set a custom OpenAI API URL is to bypass network firewalls that block OpenAI by routing through a proxy URL instead.
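The custom-base-URL idea can be sketched roughly like this: build an OpenAI-style chat request against whatever base URL the user supplies, so the same code path hits either `api.openai.com` or Ollama's experimental OpenAI-compatible endpoint (`http://localhost:11434/v1` by default). The helper name and shapes below are illustrative, not Reactive Resume's actual code:

```typescript
// Minimal sketch (hypothetical helper, not the app's real API):
// the request shape is the OpenAI chat-completions format, which
// Ollama's /v1 endpoint also accepts experimentally.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string,
): ChatRequest {
  // Strip a trailing slash so both "…/v1" and "…/v1/" work.
  const url = `${baseUrl.replace(/\/$/, "")}/chat/completions`;
  return {
    url,
    headers: {
      "Content-Type": "application/json",
      // Ollama ignores the key, but the OpenAI format requires the header.
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Pointing at a local Ollama instance instead of OpenAI:
const req = buildChatRequest(
  "http://localhost:11434/v1",
  "ollama",
  "llama3",
  "Improve my resume summary",
);
console.log(req.url); // http://localhost:11434/v1/chat/completions
```

The request would then be sent with `fetch(req.url, { method: "POST", headers: req.headers, body: req.body })`; only the base URL differs between providers.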
@rahidehzani great post! I'm playing around with a local branch trying this out and having good success so far. Hoping to find some time soon to post a proper PR.
Quick little PoC I whipped together in an hour. In the settings it should say "Base URL", but I didn't add translations yet. Ideally it would let you select the model to use as well, but that would require a little more work first. In the meantime, like I said, I'm hoping to put together a quick PR to add this basic functionality for using your own self-hosted LLM.
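A settings shape along these lines would cover both the PoC's Base URL field and the model selection mentioned above: leave the fields empty and you get stock OpenAI behavior, fill them in and you target your own instance. The field names and defaults here are assumptions for illustration, not what the PoC actually ships:

```typescript
// Hypothetical settings sketch: optional overrides with OpenAI defaults.
interface AiSettings {
  apiKey: string;
  baseUrl?: string; // e.g. "http://localhost:11434/v1" for Ollama
  model?: string;   // e.g. "llama3"
}

function resolveSettings(s: AiSettings) {
  return {
    apiKey: s.apiKey,
    // Assumed defaults; the real app's values may differ.
    baseUrl: s.baseUrl ?? "https://api.openai.com/v1",
    model: s.model ?? "gpt-3.5-turbo",
  };
}
```

With this shape, adding model selection later is just wiring a dropdown to `model`; nothing else in the request path has to change.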
Screencast from 2024-07-13 09:02:54 AM.webm
CC: @AmruthPillai
Has this feature been added?
I would also greatly appreciate this feature. I don't use OpenAI/ChatGPT due to the privacy and ethical concerns, and I'm already running my own Ollama instance. This would be a major improvement.
Wow, this is long overdue. It took me a while today, but I was able to throw together a PR that finally addresses this. Please pull my branch and QA it, especially those of you with OpenAI keys, since I wasn't able to test against OpenAI myself (I don't have a key). Feel free to leave comments and I'll try to get to them as soon as possible.
PR #2073
CC @AmruthPillai
Thanks @AmruthPillai 😊 quick turnaround, thanks for being on the ball 🤟