Dmitri Smirnov
/azp run MacOS CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-python-checks-ci-pipeline
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux Nuphar CI Pipeline,Linux OpenVINO CI Pipeline
/azp run MacOS CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-python-checks-ci-pipeline
An Anubis run is necessary. It is easier to catch regressions on a specific PR than to hunt them down later.
Please [see this](https://github.com/onnx/onnx/issues/5072) as well.
See https://github.com/microsoft/onnxruntime/issues/13658 first. We typically ship `onnxruntime_providers_cuda.dll`, so if `onnxruntime.dll` can be found then that one should be found too. I am not familiar with the MAUI DLL search order, but...
This is what I would do: run `dumpbin /DEPENDENTS onnxruntime_providers_tensorrt.dll` and then try running `where` on each of the listed DLLs. I don't think Dependency Walker works well anymore.
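If it helps, here is a rough Python wrapper around those two commands (a sketch, Windows only; it assumes `dumpbin` is on PATH, e.g. from a Developer Command Prompt, and that the DLL name and the output parsing match your setup):

```python
# Sketch: list the DLLs a binary depends on (via dumpbin /DEPENDENTS)
# and check with `where` whether each one can be located on PATH.
import re
import subprocess

TARGET = "onnxruntime_providers_tensorrt.dll"  # placeholder path/name

out = subprocess.run(["dumpbin", "/DEPENDENTS", TARGET],
                     capture_output=True, text=True, check=True).stdout

# dumpbin prints one dependent DLL name per indented line.
deps = re.findall(r"^\s+(\S+\.dll)\s*$", out, flags=re.MULTILINE | re.IGNORECASE)

for dll in deps:
    found = subprocess.run(["where", dll], capture_output=True, text=True)
    status = found.stdout.strip() if found.returncode == 0 else "NOT FOUND"
    print(f"{dll}: {status}")
```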
BTW, I recommend using the `OrtValue` API for direct memory access. This reduces the amount of garbage created.
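For reference, a minimal sketch of the idea with the Python `OrtValue`/IOBinding API (the model path and the `input`/`output` tensor names are placeholders); by binding pre-allocated buffers, inputs and outputs are reused across runs instead of being re-created each time:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Wrap an existing numpy buffer as an OrtValue (no copy for CPU tensors).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
x_ort = ort.OrtValue.ortvalue_from_numpy(x)

# Pre-allocate the output once and reuse it across runs.
y_ort = ort.OrtValue.ortvalue_from_shape_and_type((1, 1000), np.float32, "cpu")

binding = sess.io_binding()
binding.bind_ortvalue_input("input", x_ort)
binding.bind_ortvalue_output("output", y_ort)

sess.run_with_iobinding(binding)
result = y_ort.numpy()  # reads the bound output buffer
```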
Cc: @michaelgsharp
The compiler says it cannot find the include file. I am inclined to think that either your Eigen path specification is incorrect, or your Eigen does not have that header...