ort
Accelerate PyTorch models with ONNX Runtime
It seems the length of the filename results in this error. Here is the log. Any idea how to solve the problem? Thanks! ``` running build running build_ext building 'aten_op_executor' extension Emitting...
This link tells me ort-inference supports OpenVINO: https://github.com/pytorch/ort#-inference "ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™. It is available via the torch-ort-infer python package....
Hello, great job. The README suggests only the CUDA and OpenVINO backends are supported, but what about the TensorRT backend, which ONNX Runtime uses by default on Nvidia...
We have calculated the loss of the gate, but does this loss have any effect on training? Where is it used? ``` logits = self.wg(input) # dim: [bxs, num_experts] if self.k...
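For context, in Switch-Transformer-style MoE implementations the gate loss is typically an auxiliary load-balancing term that is added (scaled by a coefficient) to the task loss, so it does affect training by pushing the router toward using experts evenly. A minimal NumPy sketch of that computation, with hypothetical names (the `self.wg` gate above is assumed to produce these logits), might look like:

```python
import numpy as np

def moe_load_balancing_loss(logits: np.ndarray) -> float:
    """Auxiliary load-balancing loss for an MoE gate (Switch-Transformer style).

    logits: [num_tokens, num_experts] raw gate scores.
    Returns num_experts * sum_e(f_e * P_e), where f_e is the fraction of
    tokens routed (top-1) to expert e and P_e is the mean gate probability
    for expert e. The minimum (~1.0) is reached when routing is balanced.
    """
    num_tokens, num_experts = logits.shape
    # Softmax over the expert dimension, numerically stabilized.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # f_e: fraction of tokens whose top-1 expert is e.
    top1 = probs.argmax(axis=1)
    f = np.bincount(top1, minlength=num_experts) / num_tokens
    # P_e: mean routing probability assigned to expert e.
    p = probs.mean(axis=0)
    return float(num_experts * np.sum(f * p))

# Balanced routing keeps the loss near 1; collapsing onto one expert raises it.
balanced = np.array([[10.0, 0.0], [0.0, 10.0]])
collapsed = np.array([[10.0, 0.0], [10.0, 0.0]])
```

In the real implementation this scalar would be computed in the framework's autodiff (e.g. torch) so gradients flow back into the gate weights; the NumPy version is only meant to show what quantity is being minimized.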
This PR adds tests for openvino provider options API along with basic unit tests.
When running `pip install torch-ort` in a conda environment on Windows, I get the following error: > ERROR: Could not find a version that satisfies the requirement onnxruntime-training (from versions:...
Getting this error with a pretty simple model. It is a direct error from ONNX, but I couldn't find any method to register the output in ORTInferenceModule. Versions: torch Version: 1.12.1 onnx Version:...
The ATen op doesn't fall back to the native PyTorch runtime as expected. **Versions:** Torch - 1.12.0 OnnxRuntime - 1.12.0 Torch-ort-infer - 1.12.0 **Reproduction steps:** ``` import torch from torch_ort import ORTInferenceModule def...