
[WebNN EP] Add cache for `MLContext`s in the `WebNNBackend`


Description

This change adds a cache of MLContexts, keyed by their creation options, to the WebNNBackend. This makes it so that multiple InferenceSessions created with the same options will share the same MLContext.
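For illustration, a minimal sketch of this kind of cache in TypeScript (the names contextCache and getOrCreateContext are hypothetical, not the actual WebNNBackend code; it assumes WebNN typings for navigator.ml.createContext are available and that the options object serializes with a stable key order):

// Hypothetical sketch: cache MLContexts by their serialized options so that
// sessions created with equal options share one context.
const contextCache = new Map<string, Promise<MLContext>>();

async function getOrCreateContext(options: MLContextOptions = {}): Promise<MLContext> {
  // A real implementation would canonicalize the options (e.g. sort keys)
  // before serializing them into the cache key.
  const key = JSON.stringify(options);
  let contextPromise = contextCache.get(key);
  if (!contextPromise) {
    // Cache the promise rather than the resolved context so that concurrent
    // session creations with equal options also end up sharing one context.
    contextPromise = navigator.ml.createContext(options);
    contextCache.set(key, contextPromise);
  }
  return contextPromise;
}

With a cache along these lines, both sessions in the example below would resolve to the same MLContext, so a tensor produced by one session could be fed directly to the other.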

Motivation and Context

Since MLTensors are tied to MLContexts, developers can't easily share tensors between InferenceSessions (outside of manually creating an MLContext and specifying it in the context options). This leads to strange behaviors such as:

const sessionA = await ort.InferenceSession.create(urlA, {
  executionProviders: ["webnn"],
  preferredOutputLocation: "ml-buffer",
});
const sessionB = await ort.InferenceSession.create(urlB, {
  executionProviders: ["webnn"],
});
const temp = await sessionA.run({/* arguments */});
const result = await sessionB.run({ "input": temp["output"] }); // ERROR: Failed to execute 'dispatch' on 'MLContext': Invalid inputs: The context of MLGraph doesn't match the context of the MLTensor with name "input".

We encountered this behavior while updating the transformers.js version in the WebNN developer preview demos (microsoft/webnn-developer-preview#46).
