[WebNN EP] Support GRU operator
This PR supports the GRU operator for the WebNN EP. @Honry, @fdwr, thanks!
The lint check is complaining.
@Honry @fdwr Rebased this PR and updated the code to align with the modified WebNN spec. Please take another look, thanks a lot!
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 7 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
Should js/web/docs/webnn-operators.md also be updated, like you did in LSTM?
I think so. @miaobin, please update this file.
/azp run Linux Android Emulator QNN CI Pipeline, Windows GPU CUDA CI Pipeline, Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline
Azure Pipelines successfully started running 4 pipeline(s).
Updated this PR to address the comments. Please take another look, thanks a lot!
Fixed the issues you caught; please take another look. Thanks a lot!
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline, Windows GPU CUDA CI Pipeline, Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline
Azure Pipelines successfully started running 5 pipeline(s).
Azure Pipelines successfully started running 7 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
All tests passed. Will merge once it looks good to @Honry.
miaobin force-pushed the webnn-ep-gru branch from b41c73a to d8e5b9b 18 hours ago
Please don't force push - just merge from main. Otherwise comparison to previous iterations is impossible, and any unresolved conversations (like Wanming's) no longer link to their original location. :(
Restarting CI...
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline, Windows GPU CUDA CI Pipeline, Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline
Azure Pipelines successfully started running 5 pipeline(s).
Azure Pipelines successfully started running 7 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
Thanks Bin. 👍 Restarting the 2 failed CIs once more. 🤞
The test failures in Onnxruntime_Linux_GPU_Inference_Distributed_Test recurred and appear to be test-infrastructure issues, completely unrelated to ORT Web. I will merge Wednesday unless Guenther says otherwise:
```
##[error]Value cannot be null.
2024-09-09T21:21:24.3009175Z b089fbef81cd:151:151 [1] NCCL INFO comm 0x561e359c1c80 rank 1 nranks 4 cudaDev 1 busId 200000 - Destroy COMPLETE
2024-09-09T21:21:25.7808105Z Traceback (most recent call last):
2024-09-09T21:21:25.7809037Z   File "/onnxruntime_src/onnxruntime/test/python/onnxruntime_test_distributed.py", line 8, in <module>
2024-09-09T21:21:25.7809391Z     import onnxscript
2024-09-09T21:21:25.7810092Z   File "/home/onnxruntimedev/miniconda3/lib/python3.8/site-packages/onnxscript/__init__.py", line 120, in <module>
2024-09-09T21:21:25.7810467Z     from . import ir, optimizer, rewriter
2024-09-09T21:21:25.7810974Z   File "/home/onnxruntimedev/miniconda3/lib/python3.8/site-packages/onnxscript/optimizer/__init__.py", line 12, in <module>
2024-09-09T21:21:25.7811356Z     from onnxscript.optimizer import _constant_folding, _inliner
2024-09-09T21:21:25.7812572Z   File "/home/onnxruntimedev/miniconda3/lib/python3.8/site-packages/onnxscript/optimizer/_inliner.py", line 22, in <module>
2024-09-09T21:21:25.7813118Z     CallStack = list[CallSiteId]
2024-09-09T21:21:25.7813536Z TypeError: 'type' object is not subscriptable
```
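For context, the `TypeError` at the bottom of that traceback is the usual Python 3.8 incompatibility: `list[CallSiteId]` uses PEP 585 builtin-generic subscription, which only exists on Python 3.9+, and the CI image runs 3.8. A minimal sketch of the failure mode and a version-portable workaround (with `CallSiteId` stubbed as a placeholder, not the real onnxscript type):

```python
import sys
from typing import List

CallSiteId = str  # placeholder standing in for onnxscript's real type

if sys.version_info >= (3, 9):
    # PEP 585: builtin types are subscriptable on 3.9+
    CallStack = list[CallSiteId]
else:
    # On 3.8, `list[CallSiteId]` raises
    # TypeError: 'type' object is not subscriptable,
    # so fall back to the typing.List equivalent.
    CallStack = List[CallSiteId]
```

The other common fix is `from __future__ import annotations`, but that only helps for annotations, not for runtime alias assignments like the one in `_inliner.py`, so the failure points at an onnxscript/CI-image version mismatch rather than anything in this PR.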