Frank/ollama module
What does this PR do?
Added a new module project that uses the Ollama Docker image to provide local LLM capabilities. It currently supports CPU workloads only, but GPU (CUDA) processing isn't precluded: it can be enabled by setting the appropriate environment variables, provided the prerequisites are in place.
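For illustration, here is a minimal usage sketch. `OllamaBuilder` and the `Testcontainers.Ollama` namespace are assumed names that follow the conventions of the existing modules, so the actual API introduced here may differ:

```csharp
// Minimal usage sketch (assumed names; see note above).
using System;
using Testcontainers.Ollama;

var ollama = new OllamaBuilder().Build();
await ollama.StartAsync();

// Ollama listens on port 11434 inside the container; resolve the endpoint
// mapped on the host to talk to its HTTP API.
var endpoint = new UriBuilder("http", ollama.Hostname, ollama.GetMappedPublicPort(11434)).Uri;
Console.WriteLine($"Ollama API available at {endpoint}");
```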
Why is it important?
This module can provide a lot of value to developers who incorporate Ollama models into their projects, since it lets them test against a local LLM.
Related issues
- Closes #1097
How to test this PR
Run the tests added in this PR.
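For orientation, here is a sketch of the shape such a test might take, assuming an xUnit test and the assumed `OllamaBuilder`/`OllamaContainer` names from above (the actual test in this PR may differ). It relies on Ollama's root endpoint answering with HTTP 200 once the server is up:

```csharp
// Hypothetical test sketch (assumed names): starts the container and checks
// that the Ollama HTTP API responds.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Testcontainers.Ollama;
using Xunit;

public sealed class OllamaContainerTest : IAsyncLifetime
{
    private readonly OllamaContainer _ollama = new OllamaBuilder().Build();

    public Task InitializeAsync() => _ollama.StartAsync();

    public Task DisposeAsync() => _ollama.DisposeAsync().AsTask();

    [Fact]
    public async Task ContainerRespondsToApiRequests()
    {
        using var httpClient = new HttpClient();
        var endpoint = new UriBuilder("http", _ollama.Hostname, _ollama.GetMappedPublicPort(11434)).Uri;

        // The root endpoint returns "Ollama is running" when the server is healthy.
        using var response = await httpClient.GetAsync(endpoint);

        Assert.True(response.IsSuccessStatusCode);
    }
}
```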
Follow-ups
- Adding documentation that guides users through enabling GPU support might be a future improvement, but there is no need for it right now. A possible direction is sketched below.
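One way enabling GPUs could look, assuming the generic `WithCreateParameterModifier` hook that Testcontainers for .NET exposes on its builders is available here (the Ollama-specific names remain assumptions, and the host needs the NVIDIA Container Toolkit):

```csharp
// Sketch only: requests all NVIDIA GPUs for the container, the equivalent of
// `docker run --gpus all`. Assumed names as above; requires the NVIDIA
// Container Toolkit on the host.
using System.Collections.Generic;
using Docker.DotNet.Models;
using Testcontainers.Ollama;

var gpuOllama = new OllamaBuilder()
    .WithCreateParameterModifier(parameters =>
    {
        parameters.HostConfig ??= new HostConfig();
        parameters.HostConfig.DeviceRequests = new List<DeviceRequest>
        {
            new DeviceRequest
            {
                Driver = "nvidia",
                Count = -1, // -1 means all available GPUs
                Capabilities = new List<IList<string>> { new List<string> { "gpu" } },
            },
        };
    })
    .Build();

await gpuOllama.StartAsync();
```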
Deploy Preview for testcontainers-dotnet ready!
| Name | Link |
|---|---|
| Latest commit | 6a0f09628bb55d67525189bb257a6688078497eb |
| Latest deploy log | https://app.netlify.com/sites/testcontainers-dotnet/deploys/65d9b5a671986e0008a6dd03 |
| Deploy Preview | https://deploy-preview-1099--testcontainers-dotnet.netlify.app |
@HofmeisterAn I'm updating the branch with my current state since the tests were green locally, but I think it needs another pass. No rush; I'm busy with my day job, so there's no need for you to hurry either 😄