# Swift Semantic Search 🍏
This Swift demo app shows you how to build real-time native AI-powered apps for Apple devices using Unum's Swift libraries. Under the hood, it uses UForm to understand and "embed" multimodal data, like images, multilingual texts, and 🔜 videos. Once the embeddings are computed, it uses USearch to provide real-time search over the semantic space. That same engine also enables geo-spatial search over the coordinates of the images, and has been shown to easily scale to 100M+ entries on an iPhone 🍏
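The whole pipeline boils down to a few lines of Swift. The sketch below follows the UForm and USearch Swift package READMEs; the model name, connectivity, and quantization settings are illustrative assumptions, not the app's exact configuration.

```swift
import UForm
import USearch

// A minimal sketch, assuming the UForm & USearch Swift package APIs;
// the model name and index settings here are illustrative defaults.
let textEncoder = try await TextEncoder(
    modelName: "unum-cloud/uform3-image-text-english-small"
)
let query: [Float32] = try textEncoder.encode("a dog playing on the beach").asFloats()

// A cosine-metric index matching the embedding dimensionality.
let index = USearchIndex.make(
    metric: .cos,
    dimensions: UInt32(query.count),
    connectivity: 16,
    quantization: .F16
)
index.reserve(1024)
index.add(key: 42, vector: query) // in the app, keys map to image names

// Fetch the ten nearest neighbors and their cosine distances.
let (keys, distances) = index.search(vector: query, count: 10)
```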
The demo app supports both text-to-image and image-to-image search, and uses [vmanot/Media](https://github.com/vmanot/Media) to fetch the camera feed, embedding and searching frames on the fly.
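The per-frame flow reduces to a single function. This is a hypothetical helper, not code from the app: it assumes the capture layer has already converted the frame to a `CGImage`, and that the index was populated with embeddings from the same model.

```swift
import CoreGraphics
import UForm
import USearch

// Hypothetical helper, not the app's actual code: embed one camera frame
// (already converted to a CGImage by the capture layer) and query the index.
func nearestImages(
    to frame: CGImage,
    imageEncoder: ImageEncoder,
    index: USearchIndex
) throws -> [UInt64] {
    // Embed the frame with the same model used for the precomputed index.
    let vector: [Float32] = try imageEncoder.encode(frame).asFloats()
    // Keys of the ten closest images; the app maps them back to file names.
    let (keys, _) = index.search(vector: vector, count: 10)
    return keys
}
```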
To test the demo:
```sh
# Clone the repo
git clone https://github.com/ashvardanian/SwiftSemanticSearch.git

# Change directory & decompress `dataset.zip`, which contains:
# - `images.names.txt` with newline-separated image names
# - `images.uform3-image-text-english-small.fbin` - precomputed embeddings
# - `images.uform3-image-text-english-small.usearch` - precomputed index
# - `images` - directory with images
cd SwiftSemanticSearch
unzip dataset.zip
```
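Once unpacked, the precomputed artifacts can be loaded directly, so nothing needs to be re-embedded at startup. A sketch, assuming 256-dimensional embeddings for the small UForm model and the `view(path:)` memory-mapping API of the USearch Swift package:

```swift
import Foundation
import USearch

// A sketch: read the newline-separated image names, then memory-map the
// prebuilt index. The 256-dimension figure is an assumption for the small model.
let names: [String] = try String(
    contentsOfFile: "images.names.txt",
    encoding: .utf8
).split(separator: "\n").map(String.init)

let index = USearchIndex.make(
    metric: .cos,
    dimensions: 256,
    connectivity: 16,
    quantization: .F16
)
// Memory-map the prebuilt index instead of rebuilding it from the `.fbin` file.
index.view(path: "images.uform3-image-text-english-small.usearch")
```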
After that, fire up the Xcode project and run the app on your fruity device!
Links:

- [UForm](https://github.com/unum-cloud/uform) - multimodal embedding models
- [USearch](https://github.com/unum-cloud/usearch) - vector & geo-spatial search engine
- [vmanot/Media](https://github.com/vmanot/Media) - camera feed capture