instill-core
🔮 Instill Core is a full-stack AI infrastructure tool for data, model and pipeline orchestration, designed to streamline every aspect of building versatile AI-first applications
Doc | Website | Community | Blog
Visual Data Preparation (VDP) 
Visual Data Preparation (VDP) is an open-source visual data ETL tool that streamlines the end-to-end visual data processing pipeline (a conceptual sketch follows the list):
- Extract unstructured visual data from pre-built data sources such as cloud/on-prem storage or IoT devices
- Transform it into analysable structured data with Vision AI models
- Load the transformed data into warehouses, applications, or other destinations
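The ETL flow above can be pictured as three composable stages. The sketch below is purely conceptual Python, not the VDP API; the function names and the hard-coded model output are hypothetical placeholders for a source connector, a deployed Vision AI model, and a destination connector.

```python
# Conceptual sketch only -- NOT the VDP API. All names are hypothetical placeholders.
from typing import Any


def extract(source_path: str) -> bytes:
    """Pull raw visual data (e.g. an image) from a source such as cloud/on-prem storage."""
    with open(source_path, "rb") as f:  # stand-in for a real data connector
        return f.read()


def transform(image_bytes: bytes) -> dict[str, Any]:
    """Turn unstructured pixels into structured data with a Vision AI model."""
    # A real pipeline would send the bytes to a deployed model; the result is hard-coded here.
    return {"objects": [{"category": "dog", "score": 0.97}]}


def load(record: dict[str, Any]) -> None:
    """Write the structured result to a warehouse, application or other destination."""
    print("loaded:", record)


if __name__ == "__main__":
    load(transform(extract("example.jpg")))
```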
Highlights
- 🚀 The fastest way to build end-to-end visual data pipelines - building a pipeline is like assembling LEGO blocks
- ⚡️ High-performing backends implemented in Go, with Triton Inference Server unleashing the full power of NVIDIA GPU architectures (e.g., concurrency, scheduler, batcher) and supporting TensorRT, PyTorch, TensorFlow, ONNX, Python and more
- 🖱️ One-click import & deployment of ML/DL models from GitHub, Hugging Face or cloud storage, managed by version control tools like DVC or ArtiVC
- 📦 Standardised structured output formats for CV tasks to streamline integration with data warehouses (see the sketch after this list)
- 🔌 Pre-built ETL data connectors for extensive data access, integrated with Airbyte
- 🪢 Build pipelines for diverse scenarios - SYNC mode for real-time inference and ASYNC mode for on-demand workloads
- 🧁 Scalable API-first microservice design for a great developer experience - seamless integration into the modern data stack at any scale
- 🤠 Built for every Vision AI and Data practitioner - the no-/low-code interface helps you take off your AI Researcher/AI Engineer/Data Engineer/Data Scientist hat, put on the all-rounder hat and deliver more with VDP
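As an illustration of the standardised structured output mentioned above, an object-detection result could look like the dictionary below. The field names here are an assumption for illustration only and may differ from the exact schema defined in the VDP documentation.

```python
# Illustrative only: field names are assumed, not the authoritative VDP schema.
detection_output = {
    "detection": {
        "objects": [
            {
                "category": "dog",
                "score": 0.98,
                "bounding_box": {"top": 102, "left": 325, "width": 532, "height": 451},
            },
        ]
    }
}
```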
Online demos
An online demo VDP instance has been provisioned, in which you can directly play around with the basic features via its Console at https://demo.instill.tech and via the API (e.g., https://demo.instill.tech/v1alpha/pipelines).
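A minimal sketch of calling that demo API endpoint is shown below; it assumes the endpoint is publicly readable and returns JSON, and uses the third-party requests library.

```python
# Minimal sketch: list pipelines on the public demo instance.
# Assumes the endpoint above is publicly readable and returns JSON.
import requests

resp = requests.get("https://demo.instill.tech/v1alpha/pipelines", timeout=10)
resp.raise_for_status()
print(resp.json())
```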
A number of applications you can quickly build with VDP are showcased below:
Want to showcase your ML/DL models? We offer fully-managed VDP on Instill Cloud. Please fill in the sign-up form and we will reach out to you.
Prerequisites
- macOS or Linux - VDP works on macOS or Linux, but does not support Windows yet.
- Docker and Docker Compose - VDP uses Docker Compose (compose file version: 3.9) to run all services locally. Please install Docker and Docker Compose before using VDP.
Quick start
Execute the following commands to start pre-built images with all the dependencies:
$ git clone https://github.com/instill-ai/vdp.git && cd vdp
# Launch all services
$ make all
🚀 That's it! Once all the services are up with health status, the UI is ready to go at http://localhost:3000!
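If you prefer to wait for the Console programmatically instead of refreshing the browser, the small readiness poll below works; it simply probes the Console URL until it responds and is not an official VDP health check.

```python
# Poll the Console URL until it responds; this is not an official VDP health check.
import time

import requests

for _ in range(60):
    try:
        if requests.get("http://localhost:3000", timeout=2).ok:
            print("Console is up at http://localhost:3000")
            break
    except requests.ConnectionError:
        pass
    time.sleep(5)
else:
    print("Console did not come up within ~5 minutes")
```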
Jump right in
Note
The images of model-backend (~2GB) and Triton Inference Server (~11GB) can take a while to pull, but this is a one-time effort at first setup.
Shut down VDP
To shut down all running services:
$ make down
Guidance philosophy
VDP is built with an open heart, and we expect VDP to be exposed to more MLOps integrations. It is implemented with microservice and API-first design principles. Instead of building all components from scratch, we've decided to adopt sophisticated open-source tools:
- Triton Inference Server for high-performance model serving
- Temporal for a reliable, durable and scalable workflow engine
- Airbyte for abundant destination connectors

We hope VDP can also enrich the open-source community by bringing more practical use cases to unstructured visual data processing.
Documentation
📔 Documentation
Check out the documentation & tutorials to learn VDP!
📘 API Reference
The gRPC protocols in protobufs provide the single source of truth for the VDP APIs. The canonical protobuf documentation can be found in our Buf Schema Registry (BSR).
For the OpenAPI documentation, access http://localhost:3001 after make all, or simply run make doc.
Model Hub
We curate a list of ready-to-use models for VDP. These models are from different sources and have been tested by our team. Want to contribute a new model? Please create an issue, we are happy to test and add it to the list 👐.
Model | Task | Sources | Framework | CPU | GPU | Notes |
---|---|---|---|---|---|---|
MobileNet v2 | Image classification | GitHub, GitHub-DVC | ONNX | ✅ | ✅ | |
YOLOv4 | Object detection | GitHub-DVC | ONNX | ✅ | ✅ | |
YOLOv7 | Object detection | GitHub-DVC | ONNX | ✅ | ✅ | |
Detectron2 Keypoint R-CNN R50-FPN | Keypoint detection | GitHub-DVC | PyTorch | ✅ | ✅ | |
PSNet + EasyOCR | OCR | GitHub-DVC | ONNX | ✅ | ✅ | |
Note: The GitHub-DVC source in the table means importing a model into VDP from a GitHub repository that uses DVC to manage large files.
Community support
For general help using VDP, you can use one of these channels:
- GitHub - bug reports, feature requests, project discussions and contributions
- Discord - live discussion with the community and our team
- Newsletter & Twitter - get the latest updates
If you are interested in a hosted VDP service, we've started signing up users to our private alpha. Get early access and we'll contact you when we're ready.
Contributing
We love contributions to VDP in any form:
- Please refer to the guideline for local development.
- Please open a topic in the repository Discussions for any feature requests.
- Please open issues for bug reports in the repository:
  - vdp for general issues;
  - pipeline-backend, connector-backend, model-backend, console, etc., for specific issues.
- Please refer to the VDP project board to track progress.
Note: Code in the main branch tracks under-development progress towards the next release and may not work as expected. If you are looking for a stable alpha version, please use the latest release.
License
See the LICENSE file for licensing information.