Ollama Cloud is a highly scalable, cloud-native stack for Ollama. Help us shine by turning a ★ into a ⭐!
This is not a code repository; please read the Components section below.
Development usually kicks off on your local machine, comfy and controlled.
But moving to production? That's a huge leap: suddenly you're dealing with delays, inconsistencies, and dependencies.
Ollama Cloud steps in with some solid perks:
Consistency: If it works locally, it'll work at scale with no change to your code.
Simplicity: Deploys at scale with no DevOps or SRE skills required.
A Cloud Platform: Empowers developers with self-service capabilities.
What you see locally is what you get in production. Basically, "Local Is Production". Maybe we should call it LIP 🫦, lol.
Note the use of host calls instead of REST APIs, which significantly speeds up communication, improves reliability, and preserves privacy.
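To make the difference concrete, here's a minimal sketch in Go. The REST path targets Ollama's actual local API endpoint; the host-call path is a hypothetical stub, since the real host functions are exposed by the plugin at runtime:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"net/http"
)

// Over REST: every request pays for JSON serialization, an HTTP round
// trip, and a network hop the payload must cross.
func generateOverREST(prompt string) error {
	body := bytes.NewBufferString(fmt.Sprintf(
		`{"model": "llama3", "prompt": %q}`, prompt)) // any locally pulled model
	resp, err := http.Post(
		"http://localhost:11434/api/generate", "application/json", body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

// Over a host call: the runtime hands the prompt directly to the
// co-located model. No sockets, no serialization, and the data never
// leaves the node. This stub is hypothetical; the real function is
// provided by the plugin's host environment.
func generateViaHostCall(prompt string) (string, error) {
	return "", errors.New("stub: supplied by the host runtime")
}

func main() {
	if err := generateOverREST("hello"); err != nil {
		fmt.Println("REST call failed:", err)
	}
	if _, err := generateViaHostCall("hello"); err != nil {
		fmt.Println("host call:", err)
	}
}
```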
Now you can focus on building amazing AI applications without worrying about the usual hurdles of taking them to production, like DevOps work or service dependencies. Relax: production is a mere git push away!
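Once your app lives in a repository wired to the platform, shipping a change really is just the usual Git flow (the remote and branch names below are placeholders for whatever your project uses):

```sh
git add .
git commit -m "ship new feature"
git push origin main
```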
Components
Ollama Cloud is based on the following components:
tau: The implementation of taubyte, a solution for building autonomous cloud computing platforms.
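The plugin's own repository has the authoritative build instructions; as a rough sketch, assuming it's a standard Go module whose output is an executable named plugin (the name the attach step below expects), the build boils down to:

```sh
# From the plugin's source directory (assumes a standard Go module):
go build -o plugin .
```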
That's it! You should now have a plugin executable.
Start a Local Cloud
With the plugin built, it's time to start a local cloud. Run:
dream new multiverse
Wait until it indicates that the universe is ready.
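If the dream command isn't available on your machine, it's distributed through npm in the standard Taubyte setup (an assumption worth verifying against the Taubyte docs):

```sh
npm i -g @taubyte/dream
```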
Attach Plugin
The Cloud you've just started lacks LLM capabilities. To add them, load the plugin by running:
dream inject attach-plugin -p /path/to/plugin
If you haven't changed directories, it should be:
dream inject attach-plugin -p $(pwd)/plugin
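Here, -p points at the plugin binary; $(pwd) just expands its location into an absolute path, so adjust it if you built the plugin somewhere else.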
Now, you're all set to kick ass!
What's next
Create your first application
Acknowledgement
This project stands on the shoulders of the projects mentioned in the introduction, as well as ollama itself. Please show your support by leaving a ⭐ if you find them useful.