
[Web] Running ORT model results in NaN values output

Open nezaBacar opened this issue 1 year ago • 3 comments

Describe the issue

Hi, when I attempt to run inference with a .ort model, I turn on the flags recommended in this issue: https://github.com/microsoft/onnxruntime/issues/13445#issuecomment-1430153341.

const sessionOption = {
    executionProviders: ["wasm"],
    enableMemPattern: false,
    enableCpuMemArena: false,
    extra: {
      session: {
        disable_prepacking: "1",
        use_device_allocator_for_initializers: "0",
        use_ort_model_bytes_directly: "1",
        use_ort_model_bytes_for_initializers: "1"
      }
    }
};

Not setting these flags results in an error; setting them lets inference run, but the output consists of NaN values.
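For context, this is roughly how I create and run the session; the model path, input name, and tensor shape below are simplified placeholders, not my exact values:

import * as ort from "onnxruntime-web";

// sessionOption as defined above; path, input name, and shape are
// hypothetical stand-ins. Runs at module top level or in an async function.
const session = await ort.InferenceSession.create("./models/model.ort", sessionOption);
const input = new ort.Tensor("float32", new Float32Array(1 * 3 * 224 * 224), [1, 3, 224, 224]);
const results = await session.run({ input: input });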

To reproduce

Run an inference session with this model: https://drive.google.com/drive/folders/12tOtPWpANIlCPrsDMHSqkgxdhM0gwW88?usp=sharing and these flags:

const sessionOption = {
    executionProviders: ["wasm"],
    enableMemPattern: false,
    enableCpuMemArena: false,
    extra: {
      session: {
        disable_prepacking: "1",
        use_device_allocator_for_initializers: "0",
        use_ort_model_bytes_directly: "1",
        use_ort_model_bytes_for_initializers: "1"
      }
    }
};

Urgency

It is somewhat urgent

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.16

Execution Provider

'wasm'/'cpu' (WebAssembly CPU)

nezaBacar avatar Feb 11 '24 15:02 nezaBacar

Could you share the exact input data that you feed into the model, so that I can try to reproduce the problem?

fs-eire avatar Feb 16 '24 02:02 fs-eire

Thanks for the fast response! I've added you to the demo repository. Just put the model into the /demo/models folder and make sure that line 20 in index.html points to model.ort. I noticed that if I run the .ort model with these options instead, the outputs are fine:

let sessionOptions = {
    executionProviders: ["wasm"],
    graphOptimizationLevel: "all",
};

Should I maybe do it this way? (Though I believe this is not the right way, because of this comment: github.com/microsoft/onnxruntime/issues/13445#issuecomment-1430153341.)
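For reference, this is roughly how I check the outputs for NaN, assuming a single output tensor (feeds here stands for my prepared input map):

const results = await session.run(feeds);
// Tensor data is a typed array, so .some works directly on it.
const data = results[session.outputNames[0]].data;
console.log("output contains NaN:", data.some(Number.isNaN));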

nezaBacar avatar Feb 18 '24 10:02 nezaBacar

I can reproduce the problem, and I checked the session options: if I remove use_ort_model_bytes_for_initializers: "1", the output values are correct.

It took me a while to figure out why. According to the comments in the source code, specifying use_ort_model_bytes_for_initializers requires the model buffer to remain valid for the whole life cycle of the inference session. Unfortunately, that is not how onnxruntime-web manages the model data: onnxruntime-web frees the model buffer once the inference session is initialized. So using onnxruntime-web with the use_ort_model_bytes_for_initializers config does not currently work.
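Until that changes, a sketch of session options that should avoid the problem is simply the original configuration with that one flag removed (the model path is a placeholder):

const sessionOption = {
    executionProviders: ["wasm"],
    enableMemPattern: false,
    enableCpuMemArena: false,
    extra: {
      session: {
        disable_prepacking: "1",
        use_device_allocator_for_initializers: "0",
        // kept from the original options; only the flag below was the problem
        use_ort_model_bytes_directly: "1"
        // use_ort_model_bytes_for_initializers removed: it requires the model
        // buffer to stay alive for the session's lifetime, which
        // onnxruntime-web does not guarantee.
      }
    }
};

const session = await ort.InferenceSession.create("./models/model.ort", sessionOption);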

fs-eire avatar Feb 21 '24 02:02 fs-eire

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions[bot] avatar Mar 22 '24 15:03 github-actions[bot]