triton-inference-server topic

Repositories matching the triton-inference-server topic:

BiSeNet

1.4k Stars · 303 Forks

My implementation of BiSeNet, now including BiSeNetV2.

yolov4-triton-tensorrt

276 Stars · 62 Forks

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
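
As a rough sketch of how a client might query such a deployment, assuming the official tritonclient Python package, a model served under the name "yolov4", and hypothetical tensor names "input" and "detections" (the repository's actual names and shapes may differ):

```python
# Minimal Triton HTTP client sketch; model/tensor names are assumptions.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy batch matching an assumed 1x3x608x608 FP32 YOLOv4 input.
image = np.zeros((1, 3, 608, 608), dtype=np.float32)
infer_input = httpclient.InferInput("input", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

# Request inference and pull the (hypothetical) detections tensor back.
result = client.infer(
    model_name="yolov4",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("detections")],
)
print(result.as_numpy("detections").shape)
```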

fastDeploy

93 Stars · 18 Forks

Deploy DL/ML inference pipelines with minimal extra code.

stable-diffusion-tritonserver

115 Stars · 20 Forks

Deploy the Stable Diffusion model with ONNX/TensorRT + Triton Inference Server.

clearml-serving

128 Stars · 40 Forks

ClearML - Model-Serving Orchestration and Repository Solution

triton_ensemble_model_demo

29 Stars · 8 Forks

A demo of Triton Inference Server ensemble models.
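
For context, a Triton ensemble is defined in a config.pbtxt that chains models server-side. The sketch below only illustrates the general schema; the ensemble name, step model names ("preprocess", "classifier"), and tensor names are hypothetical and not taken from this repository.

```
# Hypothetical config.pbtxt for an ensemble chaining two models.
name: "ensemble_pipeline"   # assumed name, not from the demo repo
platform: "ensemble"
max_batch_size: 8
input [
  { name: "RAW_IMAGE", data_type: TYPE_UINT8, dims: [ -1 ] }
]
output [
  { name: "CLASS_PROB", data_type: TYPE_FP32, dims: [ 1000 ] }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"    # hypothetical first stage
      model_version: -1           # -1 selects the latest version
      input_map { key: "INPUT" value: "RAW_IMAGE" }
      output_map { key: "OUTPUT" value: "preprocessed_image" }
    },
    {
      model_name: "classifier"   # hypothetical second stage
      model_version: -1
      input_map { key: "INPUT" value: "preprocessed_image" }
      output_map { key: "OUTPUT" value: "CLASS_PROB" }
    }
  ]
}
```

The intermediate tensor "preprocessed_image" never leaves the server, which is the main appeal of ensembles: intermediate results stay inside the pipeline instead of round-tripping through the client.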

Setup-deeplearning-tools

44 Stars · 7 Forks

Set up CI for DL tooling (CUDA, cuDNN, TensorRT, onnx2trt, onnxruntime, onnxsim, PyTorch, Triton Inference Server, Bazel, Tesseract, PaddleOCR, NVIDIA Docker, MinIO, Supervisord) on AGX or PC from scratch.

isaac_ros_dnn_inference

98 Stars · 14 Forks

ROS 2 packages for hardware-accelerated DNN inference using NVIDIA Triton/TensorRT, for both Jetson and x86_64 systems with a CUDA-capable GPU.

YOLOV5_optimization_on_triton

43 Stars · 11 Forks

Compares multiple optimization methods on Triton to improve model-serving performance.