Yulong Wang

Results: 218 comments by Yulong Wang

Thank you for creating this feature request! Possible discussions:
- not sure if we need an ENV flag or a session option for this. Maybe some users still prefer the sync...
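
For context, a minimal sketch of the two configuration surfaces being weighed here, using onnxruntime-web's published API only for shape; the specific flag under discussion does not exist yet and is marked hypothetical:

```typescript
import * as ort from 'onnxruntime-web';

// Surface A: a global env flag, picked up by every session created afterwards.
// ort.env.wasm.numThreads is a real flag, shown only to illustrate the shape;
// the flag proposed in this feature request is not part of the published API.
ort.env.wasm.numThreads = 4;

// Surface B: a per-session option, so callers who prefer the existing
// synchronous behavior simply never opt in. `useAsyncVariant` is hypothetical.
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['wasm'],
  // useAsyncVariant: true,
});
```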

Which version of onnxruntime-web are you using? `expected magic word 00 61 73 6d, found 3c 21 44 4f @+0`: this is a common error. The bytes `3c 21 44 4f` are ASCII for `<!DO`, the start of an HTML document, which means the server returned an HTML page (for example a 404 fallback) instead of the `.wasm` binary. The WebAssembly HTTP request gets...
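
A minimal diagnostic sketch, assuming the page and the `.wasm` file share an origin; the exact file name varies by onnxruntime-web version:

```typescript
import * as ort from 'onnxruntime-web';

// Fetch the .wasm URL the runtime requests and inspect the first four bytes.
// A real WebAssembly binary starts with 00 61 73 6d ("\0asm");
// 3c 21 44 4f is "<!DO", the start of an HTML document.
const resp = await fetch('./ort-wasm-simd.wasm');
const head = new Uint8Array(await resp.arrayBuffer()).slice(0, 4);
console.log([...head].map((b) => b.toString(16).padStart(2, '0')).join(' '));

// If the bytes are wrong, point the runtime at the real location of its
// .wasm artifacts; ort.env.wasm.wasmPaths is part of the public API.
ort.env.wasm.wasmPaths = 'https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/';
```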

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal...

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline,Android CI Pipeline

/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline,Big Models

/azp run Android CI Pipeline,iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

> I'm curious if onnxruntime-node now supports dml and cuda?

No. DML is ongoing (https://github.com/microsoft/onnxruntime/pull/19274) and CUDA support is next.

> No. DML is ongoing (#19274) and CUDA support is next.
>
> > I'm curious if onnxruntime-node now supports dml and cuda?

#19274 is merged in main and...
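
For anyone landing here, a minimal usage sketch under the assumption that the installed onnxruntime-node build includes the DirectML support from #19274; the model path is illustrative:

```typescript
import * as ort from 'onnxruntime-node';

// Request the DirectML EP first and fall back to CPU. 'cuda' would slot in
// the same way once CUDA support lands, per the reply above.
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['dml', 'cpu'],
});
```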