Chi Lo

Results: 18 comments of Chi Lo

@datinje > Then what is the purpose of this option? One of the purposes of using `disable_cpu_ep_fallback` is to make sure all the nodes are placed on the GPU...
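
For context, here is a minimal sketch of how the option can be set from the Python API (assuming a local `model.onnx` and an ORT build with the TensorRT and CUDA EPs); with the flag set, session creation fails instead of silently assigning unsupported nodes to the CPU EP:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Disable the implicit fallback to the CPU execution provider, so that a node
# that cannot be placed on the requested EPs raises an error at session creation.
so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")

# "model.onnx" is a placeholder path for illustration.
sess = ort.InferenceSession(
    "model.onnx",
    sess_options=so,
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)
```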

> I tested my model again with the latest onnxrt 1.17.1 and got the same performance results between the TRT EP and the CUDA EP. I would have expected that the TRT EP would have...

If no parser-related option is specified, or `--use_tensorrt_builtin_parser` is specified --> the TRT EP will dynamically link against the built-in parser. If `--use_oss_trt_parser` is specified --> ORT will build the onnx-tensorrt parser...

/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows x64 QNN CI Pipeline

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows x64 QNN CI Pipeline

@tianleiwu Nvidia wants to let us know about this API migration issue for integration. Please help when you have time, thanks.