Dwayne Robinson
Yes! Efficient usability matters, and this bug is annoying for all the extra left clicks it adds during my day (which already averages 4000+). I opened a Feedback issue on Win11's...
> Interesting... the code was tested on multiple Windows PCs and we have never experienced errors.

It's possible to get lucky, but that setting is unsupported. Can you get more...
> Does DirectML execution require (*D3D*?) feature level 12_0?

@Rikyf3 The DML EP itself creates a DML device using `D3D_FEATURE_LEVEL_11_0`, if that answers your question. See https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/providers/dml/dml_provider_factory.cc#L495. I believe `DMLCreateDevice`...
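For context, here is a minimal sketch of that device-creation pattern, i.e. a D3D12 device requested at feature level 11_0 and then wrapped via `DMLCreateDevice`. The function name `CreateDmlDeviceSketch` and the error handling are illustrative, not the EP's actual helpers; see the linked `dml_provider_factory.cc` for the real code path.

```cpp
// Sketch only: create a D3D12 device at D3D_FEATURE_LEVEL_11_0 and a DML device on top of it.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<IDMLDevice> CreateDmlDeviceSketch()
{
    ComPtr<ID3D12Device> d3d12_device;
    // Feature level 11_0 is what the DML EP requests; 12_0 is not required.
    HRESULT hr = D3D12CreateDevice(
        nullptr,                     // default adapter
        D3D_FEATURE_LEVEL_11_0,
        IID_PPV_ARGS(&d3d12_device));
    if (FAILED(hr)) return nullptr;

    ComPtr<IDMLDevice> dml_device;
    hr = DMLCreateDevice(
        d3d12_device.Get(),
        DML_CREATE_DEVICE_FLAG_NONE,
        IID_PPV_ARGS(&dml_device));
    if (FAILED(hr)) return nullptr;

    return dml_device;
}
```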
> how CoreML EP handles int64 data type would be a good reference

Indeed, I really wonder, given that all indices are int64 in ONNX (see the sketch below).
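One common workaround for backends without int64 support is to narrow int64 index tensors to int32 with an overflow check. A hedged sketch of that idea follows; the function name and structure are hypothetical and not taken from the CoreML EP.

```cpp
// Illustrative only: narrow int64 ONNX indices to int32 for a backend lacking int64.
#include <cstdint>
#include <limits>
#include <optional>
#include <vector>

std::optional<std::vector<int32_t>> NarrowIndicesToInt32(const std::vector<int64_t>& indices)
{
    std::vector<int32_t> narrowed;
    narrowed.reserve(indices.size());
    for (int64_t v : indices) {
        // Give up if any value cannot be represented as int32.
        if (v < std::numeric_limits<int32_t>::min() ||
            v > std::numeric_limits<int32_t>::max()) {
            return std::nullopt;
        }
        narrowed.push_back(static_cast<int32_t>(v));
    }
    return narrowed;
}
```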
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI...
/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline
Should js/web/docs/webnn-operators.md also be updated, as you did for LSTM?
/azp run Linux Android Emulator QNN CI Pipeline, Windows GPU CUDA CI Pipeline, Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI...