Ollama AI
Description
This extension lets your project easily send requests to an Ollama AI server and receive its responses.
How to host your own server
- Go to https://ollama.com/download
- Choose your platform
- Install it following the instructions for your operating system
- Open your command prompt and type in the following:
`ollama pull llama3`
(Note: Although `llama3` is the newest model and the one I recommend, you can choose any other model offered on the Ollama website; just replace `llama3` with any other model name listed there: https://ollama.com/library)
- After the installation is done, you can start the server and use the model with `ollama run llama3` (Note: Again, you can use the name of any model you have installed instead of `llama3`)
If you are stuck, NetworkChuck has a really good video explaining everything you need to know about hosting your own server; I followed it to get mine working: https://youtu.be/Wjrdr0NU4Sk?t=182
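Once the server is running (it listens on port 11434 by default), you can quickly confirm it is reachable before wiring it into your project. Here is a minimal sketch in TypeScript, assuming a local server on the default port; it calls `/api/tags`, the endpoint of Ollama's REST API that lists the locally installed models:

```ts
// Query a local Ollama server for its installed models.
// Assumes the server runs on the default host/port (http://localhost:11434).
const OLLAMA_URL = "http://localhost:11434";

async function listInstalledModels(): Promise<void> {
  const response = await fetch(`${OLLAMA_URL}/api/tags`);
  if (!response.ok) {
    throw new Error(`Ollama server responded with ${response.status}`);
  }
  const data = await response.json();
  // Each entry carries the model name, e.g. "llama3:latest".
  for (const model of data.models) {
    console.log(model.name);
  }
}

listInstalledModels().catch((err) => console.error("Server not reachable:", err));
```

If this prints the model you pulled, the server is ready to receive requests from the extension.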
How to customize models
You can read the official documentation for this on the official Ollama GitHub repo: https://github.com/ollama/ollama?tab=readme-ov-file#customize-a-prompt
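In short, customization happens through a `Modelfile`. Here is a small sketch based on the example in the Ollama README; the model name `mario` and the system prompt are just illustrations:

```
# Modelfile: derive a customized model from llama3
FROM llama3

# Set the temperature (higher is more creative, lower is more coherent)
PARAMETER temperature 1

# Set a custom system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Create and run the customized model with `ollama create mario -f ./Modelfile` followed by `ollama run mario`; the resulting name (`mario` here) is what you would then pass as the Model parameter in the extension's action.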
How to use the extension
Create a simple action to send the following data to an Ollama AI server:
- URL (The server's URL with port)
- Model (The model you want to use to generate the response)
- Prompt (The prompt you send to the server to reply to)
Here is an example action, which is also used in the example project:
> [!CAUTION]
> The speed of the response generation depends on the hardware the server is hosted on.
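For reference, the action's three parameters map onto Ollama's documented `/api/generate` REST endpoint. The following TypeScript sketch shows the equivalent raw request (not the extension's actual internals; the URL, model, and prompt values are placeholders):

```ts
// Send a prompt to an Ollama server and print the generated answer.
// Mirrors the three action parameters: URL, Model, Prompt.
async function askOllama(url: string, model: string, prompt: string): Promise<string> {
  const response = await fetch(`${url}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false makes Ollama return one JSON object instead of a stream.
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!response.ok) {
    throw new Error(`Ollama server responded with ${response.status}`);
  }
  const data = await response.json();
  return data.response; // the generated text
}

askOllama("http://localhost:11434", "llama3", "Why is the sky blue?")
  .then((answer) => console.log(answer))
  .catch((err) => console.error(err));
```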
Checklist
- [X] I've followed all of the best practices.
- [X] I confirm that this extension can be integrated into this GitHub repository, distributed and MIT licensed.
- [X] I am aware that the extension may be updated by anyone, and that my explicit consent is not needed for that.
What tier of review do you aim for your extension?
Reviewed