CVU: Computer Vision Utils 
Computer Vision deployment tools for dummies and experts.
Whether you are developing an optimized computer vision pipeline or just looking to add some quick computer vision to your project, CVU can help! Designed for both experts and novices, CVU aims to make CV pipelines easier to build and consistent across platforms, devices, and models.
```shell
pip install cvu-python
```
Index 📋
- Getting Started
- Why CVU?
- Object Detection (YOLOv5)
- Devices (CPU, GPU, TPU)
- Benchmark Tool (YOLOv5)
- Benchmark Results (YOLOv5)
- Precision Accuracy (YOLOv5)
- Examples
- References
CVU Says Hi!
How many installation steps and lines of code do you need to run object detection on a video with a TensorRT backend? How complicated is it to test that pipeline in Colab?
With CVU, you just need the following! No extra installation steps are needed to run on Colab: just pip install our tool, and you're all set to go!
```python
from vidsz.opencv import Reader, Writer
from cvu.detector import Detector

# set up video reader and writer; you can also use plain OpenCV
reader = Reader("example.mp4")
writer = Writer(reader, name="output.mp4")

# create detector with TensorRT backend (fp16 precision by default)
detector = Detector(classes="coco", backend="tensorrt")

# process frames
for frame in reader:
    # make predictions
    preds = detector(frame)

    # draw them on the frame
    preds.draw(frame)

    # write the frame to the output
    writer.write(frame)

writer.release()
reader.release()
```
Want to use fewer lines of code? How about this:
```python
from cvu.detector import Detector
from vidsz.opencv import Reader, Writer

detector = Detector(classes="coco", backend="tensorrt")

with Reader("example.mp4") as reader:
    with Writer(reader, name="output.mp4") as writer:
        writer.write_all(map(lambda frame: detector(frame).draw(frame), reader))
```
Want to switch to a non-CUDA device? Just set device="cpu", and set the backend to "onnx", "tflite", "torch", or "tensorflow".

```python
detector = Detector(classes="coco", backend="onnx", device="cpu")
```
Want to use a TPU? Just set device="tpu" and choose a supported backend (only "tensorflow" is supported as of the latest release).

```python
detector = Detector(classes="coco", backend="tensorflow", device="tpu")
```
You can change devices, platforms and backends as much as you want, without having to change your pipeline.
Devices
Support Info
The following is the latest support matrix:
| Device | TensorFlow | Torch | TFLite | ONNX | TensorRT |
|---|---|---|---|---|---|
| GPU | ✅ | ✅ | ❌ | ✅ | ✅ |
| CPU | ✅ | ✅ | ✅ | ✅ | ❌ |
| TPU | ✅ | ❌ | ❌ | ❌ | ❌ |
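The matrix above can be encoded as a simple lookup table, for example to validate a device/backend pair before constructing a Detector. Note that this helper is purely illustrative and is not part of the CVU API:

```python
# Illustrative helper (NOT part of CVU): encodes the
# device/backend support matrix from the table above.
SUPPORT_MATRIX = {
    "gpu": {"tensorflow", "torch", "onnx", "tensorrt"},
    "cpu": {"tensorflow", "torch", "tflite", "onnx"},
    "tpu": {"tensorflow"},
}


def is_supported(device: str, backend: str) -> bool:
    """Return True if the given backend runs on the given device."""
    return backend.lower() in SUPPORT_MATRIX.get(device.lower(), set())


print(is_supported("gpu", "tensorrt"))  # True
print(is_supported("cpu", "tensorrt"))  # False
```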
Recommended Backends (in order)
Based on FPS performance and various benchmarks:
- GPU: TensorRT > Torch > ONNX > TensorFlow
- CPU: ONNX > TFLite > TensorFlow > Torch
- TPU: TensorFlow
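These rankings can be turned into a small picker that returns the highest-ranked backend for a device, optionally restricted to the backends you actually have installed. Again, this is a hypothetical convenience helper, not part of CVU:

```python
# Hypothetical helper (NOT part of CVU): picks a backend for a
# device following the recommended order listed above.
RECOMMENDED = {
    "gpu": ["tensorrt", "torch", "onnx", "tensorflow"],
    "cpu": ["onnx", "tflite", "tensorflow", "torch"],
    "tpu": ["tensorflow"],
}


def best_backend(device: str, available=None) -> str:
    """Return the highest-ranked backend for `device`.

    If `available` is given, only backends in that collection
    are considered.
    """
    ranking = RECOMMENDED[device.lower()]
    if available is None:
        return ranking[0]
    for backend in ranking:
        if backend in available:
            return backend
    raise ValueError(f"no recommended backend available for {device}")


print(best_backend("gpu"))                          # tensorrt
print(best_backend("gpu", {"onnx", "tensorflow"}))  # onnx
```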