Zero Zeng
I can reproduce this in 8.2.3, but the issue is fixed in TRT 8.4. I think the error comes from Myelin. cc @jackwish for visibility.

```
[09/26/2022-01:52:03] [V] [TRT] >>>>>>>>>>>>>>>...
```
Yes, using the official container 22.07 with TRT 8.4.1 works for me.

```
[09/26/2022-01:41:58] [I] === Performance summary ===
[09/26/2022-01:41:58] [I] Throughput: 23.5252 qps
[09/26/2022-01:41:58] [I] Latency: min = 43.3171...
```
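If you want to confirm which TensorRT build a container actually ships before retrying the failing build, here is a minimal sketch using the TensorRT Python bindings; the exact version string is an assumption about the 22.07 container:

```python
# Minimal sketch: verify the TensorRT version inside the container
# before retrying the failing build. Assumes the TensorRT Python
# bindings are installed (they ship in the NGC containers).
import tensorrt as trt

print(trt.__version__)  # expected to start with "8.4.1" in the 22.07 container

# The fix discussed above landed in TRT 8.4, so require at least that.
major, minor = (int(v) for v in trt.__version__.split(".")[:2])
assert (major, minor) >= (8, 4), "upgrade to TRT 8.4+ to pick up the fix"
```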
It's in our plan, but it seems we don't have a target release yet. cc @kevinch-nv for more information.
> https://github.com/onnx/onnx-tensorrt/blob/main/docs/operators.md On this website, TensorRT supports torch.where

It's not torch.where; that page lists operators from the ONNX spec (the corresponding op is ONNX `Where`). Please check your exported ONNX model.
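If it helps, a quick way to check the exported model is to list the ONNX op types actually present in the graph; a minimal sketch assuming the `onnx` Python package and a hypothetical file name `model.onnx`:

```python
# Minimal sketch: inspect an exported ONNX model and confirm that
# torch.where was exported as the ONNX `Where` operator, which is
# what the onnx-tensorrt support matrix refers to.
import onnx

model = onnx.load("model.onnx")  # hypothetical path to your export
ops = sorted({node.op_type for node in model.graph.node})
print(ops)

if "Where" in ops:
    print("Model contains ONNX Where (supported per operators.md)")
```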
I don't think so. @nvpohanh, correct me if I'm wrong.
> will TRT use the OFA units when we deploy OF or SM models?

No, TRT doesn't support OFA.

> Or do we have to use the models specifically provided by...
I don't think so; TensorRT is mainly focused on GPUs.
I am not able to answer this question since I know nothing about OFA :)
I can't reproduce it on Linux. @pranavm-nvidia, I know nothing about Windows TRT, can you help here? ^ ^

```
TensorRT-8.4.3.1/samples/sampleMNIST$ ../../bin/sample_mnist_debug
&&&& RUNNING TensorRT.sample_mnist [TensorRT v8403] # ../../bin/sample_mnist_debug
[09/07/2022-08:43:15]...
```
@azhurkevich Can you help here? ^ ^