onnxruntime-silicon

ONNX Runtime prebuilt wheels for Apple Silicon (M1 / M2 / M3 / ARM64)

Results: 10 onnxruntime-silicon issues, sorted by recently updated

```python
#!/usr/bin/env python3
import onnxruntime as rt
import numpy
from onnxruntime.datasets import get_example

print(rt.get_device())
print(rt.__version__)
print('========')

def test():
    print("running simple inference test...")
    example1 = get_example("sigmoid.onnx")
    sess = rt.InferenceSession(example1, providers=rt.get_available_providers())
    input_name...  # the issue preview is truncated here
```
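The preview cuts off mid-test. As a hedged completion, assuming the standard `sigmoid.onnx` example from `onnxruntime.datasets` (which takes a `(3, 4, 5)` float input), the remaining lines would look something like:

```python
# Hypothetical continuation of the truncated snippet above.
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name
x = numpy.random.random((3, 4, 5)).astype(numpy.float32)
res = sess.run([output_name], {input_name: x})
print("output shape:", res[0].shape)
```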

question

I am running a face model: https://github.com/iperov/DeepFaceLive/releases/download/ZAHAR_LUPIN/Zahar_Lupin.dfm

Environment:
- onnxruntime-coreml == 1.13.1
- onnxruntime-silicon == 1.13.1
- device: Apple Silicon M1
- Python:

```python
import onnx
import onnxruntime as rt

options = rt.SessionOptions()
options.log_severity_level...  # the issue preview is truncated here
```
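The preview truncates at the session options. A minimal sketch of what such a setup typically looks like; the severity value, the model filename, and the provider list are assumptions, not the reporter's actual code:

```python
import onnxruntime as rt

options = rt.SessionOptions()
options.log_severity_level = 0  # 0 = verbose logging; assumed value

# DeepFaceLive .dfm files appear to be ONNX graphs, so loading one
# directly should work (assumption; not confirmed by the issue text).
sess = rt.InferenceSession(
    "Zahar_Lupin.dfm",
    sess_options=options,
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())
```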

question

I'm trying to run inference on [Depth Anything ONNX](https://github.com/fabio-sim/Depth-Anything-ONNX) models on macOS (M1) in Python with `onnxruntime-silicon`. The models are converted from .pth to ONNX. I can run the inference...
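A minimal sketch of such an inference under the CoreML provider; the model filename, input lookup, and the `(1, 3, 518, 518)` shape are assumptions based on typical Depth-Anything exports, not taken from the issue:

```python
import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession(
    "depth_anything_vits14.onnx",  # hypothetical filename
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)

# Dummy normalized image; real code would resize/normalize an actual photo.
img = np.random.rand(1, 3, 518, 518).astype(np.float32)
(depth,) = sess.run(None, {sess.get_inputs()[0].name: img})
print("depth map shape:", depth.shape)
```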

Hey my friend, please release `onnxruntime-silicon==1.17.0`. I guess it's also the perfect opportunity to thank you for all the effort over the past months.

# Improvements

## Ditch `rm -rf`

I was genuinely embarrassed by its effects, so it inspired this PR :)

## Options

The following options are implemented:

- `--no-clean` - suppresses...
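The preview truncates the option list. As a hedged illustration of the general pattern only (the PR's actual script, its language, and its flag semantics aren't visible here), a `--no-clean` flag typically gates the cleanup step like so:

```python
# Hypothetical sketch; the real build script and its flags may differ.
import argparse
import shutil

parser = argparse.ArgumentParser(description="build onnxruntime-silicon wheels")
parser.add_argument("--no-clean", action="store_true",
                    help="keep previous build artifacts instead of deleting them")
args = parser.parse_args()

if not args.no_clean:
    # Remove only the known build directory rather than a broad `rm -rf`.
    shutil.rmtree("build", ignore_errors=True)
```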

enhancement

Hi, I'm testing fastembed.js, which uses ONNX (from Node.js). How can I replace the default onnxruntime with onnxruntime-silicon? Thanks
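For the Python side of such a swap (onnxruntime-silicon ships Python wheels that install under the usual `onnxruntime` import name), the replacement is just a reinstall; whether a Node.js runtime can pick it up is a separate question. A sketch, assuming a plain pip environment:

```python
# First, in a shell:
#   pip uninstall onnxruntime
#   pip install onnxruntime-silicon
import onnxruntime as rt  # same import name; the silicon wheel replaces the default build
print(rt.get_available_providers())  # should list CoreMLExecutionProvider on Apple Silicon
```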

I'm not yet sure what I did; I messed around with the onnxruntime source to change the CPU-only provider flag into one that requires the ANE instead, and eventually got...

Thanks for the great work. I've been using this since ort-1.13 on an MBP with an M1 Pro chip. The problem is, after I updated my system from macOS 13 to...

An unofficial fork of onnxruntime 1.14.1 with memory-leak fixes and build improvements for M1/M2 chips. Builds up to and including 1.14.1 had a memory-leak issue, which was fixed in the...
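A rough way to check for such a leak is to run many inferences and watch resident memory. A sketch, assuming `psutil` is available; the model, feed shape, and iteration counts are arbitrary choices, not the fork's actual test:

```python
import os
import numpy as np
import onnxruntime as rt
import psutil
from onnxruntime.datasets import get_example

sess = rt.InferenceSession(get_example("sigmoid.onnx"),
                           providers=rt.get_available_providers())
x = np.random.random((3, 4, 5)).astype(np.float32)
feed = {sess.get_inputs()[0].name: x}
proc = psutil.Process(os.getpid())

for i in range(10_000):
    sess.run(None, feed)
    if i % 1_000 == 0:
        # A steadily growing RSS across iterations suggests a leak.
        print(f"iter {i}: rss = {proc.memory_info().rss / 1e6:.1f} MB")
```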

Here's the embedding code:

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoModel, AutoTokenizer
import numpy as np

model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-small-en-v1.5', file_name="onnx/model.onnx")
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5')
# ... the issue preview is truncated here
```
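The preview cuts off before the actual embedding step. A hedged completion, assuming the issue compares the ONNX and PyTorch paths with the CLS pooling and L2 normalization that the bge models use (the sample sentence and the comparison itself are assumptions):

```python
import torch

inputs = tokenizer(["a sample sentence"], padding=True, truncation=True,
                   return_tensors="pt")

with torch.no_grad():
    ref = model(**inputs).last_hidden_state[:, 0]  # PyTorch reference, CLS token
ort = model_ort(**inputs).last_hidden_state[:, 0]  # ONNX Runtime path

# bge embeddings are normally L2-normalized before comparison.
ref = torch.nn.functional.normalize(ref, dim=-1)
ort = torch.nn.functional.normalize(ort, dim=-1)
print("max abs diff:", (ref - ort).abs().max().item())
```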