TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Hi guys, I'm an automotive engineer. My current task is to migrate a TensorRT 8 project to TensorRT 10. I have already checked the guide at https://docs.nvidia.com/deeplearning/tensorrt/latest/api/migration-guide.html#c, but it seems there are...
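One frequent source of friction in this migration is that the binding-index execution APIs were removed in TensorRT 10. Below is a minimal sketch of the name-based replacement, assuming a single-input, single-output engine; the device pointers `d_input`/`d_output` and the raw CUDA stream handle are illustrative and not part of the original report:

```python
import tensorrt as trt

# TensorRT 8 style: bindings addressed by index.
#   context.execute_async_v2(bindings=[int(d_input), int(d_output)], stream_handle=stream)

# TensorRT 10 style: bindings addressed by tensor name.
def infer_v3(engine, context, d_input, d_output, stream):
    # Walk the I/O tensors by name instead of by binding index.
    for i in range(engine.num_io_tensors):
        name = engine.get_tensor_name(i)
        if engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
            context.set_tensor_address(name, int(d_input))
        else:
            context.set_tensor_address(name, int(d_output))
    # enqueueV2 / execute_async_v2 were removed; use execute_async_v3.
    context.execute_async_v3(stream_handle=stream)
```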
Fix issue https://github.com/NVIDIA/TensorRT/issues/4384
## Description I tried to convert a DinoV2-S (with reg) model using trtexec, but I see no speed improvement when testing the fp16 and best flags; in fact, I consistently see a...
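For reference, a rough sketch of what `--fp16` corresponds to when building through the Python API; the file name `model.onnx` and the overall flow are illustrative assumptions, not taken from the report (`--best` additionally enables the other supported precisions such as INT8):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network (the default in TensorRT 10).
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # illustrative path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # roughly what trtexec --fp16 enables

engine_bytes = builder.build_serialized_network(network, config)
```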
## Description I exported a variant of the YOLO model that contains a loop for post-processing and ran into this error about topological sorting. However, the model passes onnx_graphsurgeon's toposort(),...
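A minimal sketch of re-sorting the graph with onnx_graphsurgeon before handing it to TensorRT; the file names are illustrative, not the reporter's actual model:

```python
import onnx
import onnx_graphsurgeon as gs

# Re-sort the ONNX graph and save a cleaned copy for TensorRT.
graph = gs.import_onnx(onnx.load("yolo_post.onnx"))      # illustrative input path
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "yolo_post_sorted.onnx")  # illustrative output path
```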
```python
def get_plugin_pattern(self):
    pattern = gs.GraphPattern()
    input0 = pattern.variable()

    def check_clip_node(node):
        if "min" in node.attrs or "max" in node.attrs:
            return True
        if len(node.inputs) > 1:
            return True
        return False

    clip01_min = pattern.constant()
    clip01_max = pattern.variable()
    clip01 = pattern.add("clip01", ...
```
## Description

## Environment
**TensorRT Version**: 10.7
**NVIDIA GPU**: rtx3090
**NVIDIA Driver Version**:
**CUDA Version**: 11.7
**CUDNN Version**:
Operating System:
Python Version (if applicable): 3.10
Tensorflow Version (if applicable):
PyTorch...
Trained model: [dino link](https://github.com/open-mmlab/mmdetection/blob/main/configs/dino/dino-5scale_swin-l_8xb2-12e_coco.py). First, use [mmdeploy](https://github.com/open-mmlab/mmdeploy) to convert the PyTorch model to ONNX format; second, use the TensorRT builder to generate an engine; finally, run inference with the `execute_async_v2` method, but the resulting performance...
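For context, a minimal sketch of the kind of `execute_async_v2` inference loop being described, assuming a serialized engine file, illustrative tensor shapes, and PyCUDA for buffer management (none of which come from the report):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("dino.engine", "rb") as f:  # illustrative engine path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

stream = cuda.Stream()
h_in = np.random.rand(1, 3, 800, 800).astype(np.float32)  # assumed input shape
h_out = np.empty((1, 300, 6), dtype=np.float32)            # assumed output shape
d_in = cuda.mem_alloc(h_in.nbytes)
d_out = cuda.mem_alloc(h_out.nbytes)

cuda.memcpy_htod_async(d_in, h_in, stream)
context.execute_async_v2(bindings=[int(d_in), int(d_out)], stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_out, d_out, stream)
stream.synchronize()
```

When measuring performance this way, the timer should stop only after `stream.synchronize()`; timing the enqueue call alone reports launch overhead rather than actual inference latency.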
Typo in demo/Diffusion/, demo/BERT/