Heat
An LLM-agnostic desktop and mobile client.
More people need to experience open source LLMs. Heat is an open source, native iOS and macOS client for interacting with the most popular LLM services. A sister project, Swift GenKit, attempts to abstract away the differences across services, including OpenAI, Mistral, Perplexity, Anthropic, and all the models available through Ollama, which you can run locally.
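To give a sense of what that abstraction looks like, here's a hypothetical sketch of a service-agnostic chat interface. This is illustrative only; the protocol and type names are made up and are not GenKit's actual API:

```swift
// Hypothetical service-agnostic interface (illustrative, not GenKit's real API).
struct Message {
    enum Role { case system, user, assistant }
    let role: Role
    let content: String
}

protocol ChatService {
    // Each provider (OpenAI, Mistral, Anthropic, Ollama, ...) supplies its own
    // conforming type, so the app can switch services without changing call sites.
    func complete(model: String, messages: [Message]) async throws -> Message
}
```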
TestFlight
https://testflight.apple.com/join/AX9JftGk
Basic Instructions
- Build and run
- Navigate to Preferences > Services and provide an access token for the services you want to use
- Choose the model you want to use and make sure the service is selected on the main Preferences screen
Ollama Instructions
To run the iOS app on your device, you'll need to find the local IP address of the machine running the Ollama server. It's usually something like 10.0.0.XXX. Under Preferences > Services > Ollama you can set that address as the host, as long as your device stays on the same local network. You could conceivably run the server somewhere else and access it over any network, but I haven't tested that. Sometimes Ollama's default port (11434) doesn't work and you'll need to change it to something like 8080 and run the server manually:

OLLAMA_HOST=0.0.0.0:8080 ollama serve
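To check that the server is reachable from code, here's a minimal Swift sketch that calls Ollama's /api/generate endpoint directly; the IP address, port, and model name are placeholders for your own setup:

```swift
import Foundation

// Response shape for Ollama's non-streaming /api/generate endpoint.
struct OllamaResponse: Decodable {
    let response: String
}

func generate(_ prompt: String) async throws -> String {
    // Placeholder host/port: use the local IP and port your server listens on.
    let url = URL(string: "http://10.0.0.100:8080/api/generate")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "model": "llama3",  // any model you've pulled with `ollama pull`
        "prompt": prompt,
        "stream": false     // single JSON response instead of a stream
    ] as [String: Any])
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(OllamaResponse.self, from: data).response
}
```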
Future
Originally the plan for this project was to run models on-device (hence the name Heat, because your device would heat up!), but that proved difficult. As on-device inference becomes more feasible, I'll revisit it.