Calculating gradients in onnxruntime without training
Describe the issue
When I try to use the Gradient operator at inference time, I get the error "ai.onnx.preview.training:Gradient(-1)" is not a registered function/op. I am curious whether this operator is supported in onnxruntime at all. If so, could you share a complete example of how to use it? I am not using it for training; I just need the gradients for something else at inference time.
To reproduce
```python
import numpy as np
import onnx
import onnxruntime as rt
from onnx.defs import AI_ONNX_PREVIEW_TRAINING_DOMAIN, ONNX_DOMAIN

add_node = onnx.helper.make_node("Add", ["a", "b"], ["c"], name="my_add")
gradient_node = onnx.helper.make_node(
    "Gradient",
    ["a", "b"],
    ["dc_da", "dc_db"],
    name="my_gradient",
    domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN,
    xs=["a", "b"],
    y="c",
)
print('true')

a = np.array(1.0).astype(np.float32)
b = np.array(2.0).astype(np.float32)
c = a + b
# dc / da = d(a+b) / da = 1
dc_da = np.array(1).astype(np.float32)
# dc / db = d(a+b) / db = 1
dc_db = np.array(1).astype(np.float32)

graph = onnx.helper.make_graph(
    nodes=[add_node, gradient_node],
    name="GradientOfAdd",
    inputs=[
        onnx.helper.make_tensor_value_info("a", onnx.TensorProto.FLOAT, []),
        onnx.helper.make_tensor_value_info("b", onnx.TensorProto.FLOAT, []),
    ],
    outputs=[
        onnx.helper.make_tensor_value_info("c", onnx.TensorProto.FLOAT, []),
        onnx.helper.make_tensor_value_info("dc_da", onnx.TensorProto.FLOAT, []),
        onnx.helper.make_tensor_value_info("dc_db", onnx.TensorProto.FLOAT, []),
    ],
)
opsets = [
    onnx.helper.make_operatorsetid(ONNX_DOMAIN, 12),
    onnx.helper.make_operatorsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1),
]
model = onnx.helper.make_model_gen_version(
    graph, producer_name="backend-test", opset_imports=opsets
)
rt.InferenceSession(model.SerializeToString())
```
Urgency
No response
Platform
Mac
OS Version
12.5.1
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.12.1
ONNX Runtime API
Python
Architecture
Other / Unknown
Execution Provider
Default CPU
Execution Provider Library Version
No response
Gradient is a training-only operator and hence not available in the inferencing builds. You may try building ORT with `--enable_training`.
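For reference, a quick way to tell whether the installed wheel is a training-enabled build (before retrying the session creation) is to check whether the `onnxruntime.training` submodule is importable. This is a minimal sketch, assuming the training wheel exposes that submodule and inference-only wheels do not:

```python
# Minimal sketch: detect whether the installed ONNX Runtime wheel includes
# training support. Assumes onnxruntime.training is only shipped in
# training-enabled builds (e.g. the onnxruntime-training wheel).
import onnxruntime as ort

try:
    import onnxruntime.training  # noqa: F401  (training builds only, assumption)
    print(f"ONNX Runtime {ort.__version__}: training build")
except ImportError:
    print(f"ONNX Runtime {ort.__version__}: inference-only build; "
          "ai.onnx.preview.training ops such as Gradient are not registered")
```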
I installed the training build (`pip install onnxruntime-training`) and am running into the same issue. The model with the Gradient operator is saved properly (only if I include `AI_ONNX_PREVIEW_TRAINING_DOMAIN` as an opset import), but trying to create an `InferenceSession` causes the above error. Perhaps there's some way to specify the gradient operator in a `SessionOptions` object?
```python
import onnx
import onnxruntime as ort

model_def = onnx.shape_inference.infer_shapes(model_def)
onnx.checker.check_model(model_def)
onnx.save(model_def, output_model_name)
# No error until the following line
ort_sess = ort.InferenceSession(output_model_name, providers=['CPUExecutionProvider'])
```
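For what it's worth, the failure comes from the `ai.onnx.preview.training` opset import, which a plain `InferenceSession` never registers, rather than from anything wrong with the saved model. A small diagnostic like the following (reusing `output_model_name` from the snippet above) makes that visible:

```python
# Small diagnostic sketch: list the opset imports of the saved model to
# confirm that the ai.onnx.preview.training domain is what InferenceSession
# rejects. Reuses output_model_name from the snippet above.
import onnx

loaded = onnx.load(output_model_name)
for opset in loaded.opset_import:
    print(opset.domain or "ai.onnx (default domain)", opset.version)
```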
In PyTorch there is the `torch.no_grad()` context manager (and `torch.set_grad_enabled(False)`). Is there an equivalent in onnxruntime?
Just ran into this. Looks like you need to:

- `pip install onnxruntime-training`
- Create the session with `onnxruntime.TrainingSession`, as in `session = onnxruntime.TrainingSession(path, sess_options, providers=["CPUExecutionProvider"])` (see the sketch below).
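Putting those two steps together looks roughly like this. This is an untested sketch of the suggestion above: the exact `TrainingSession` constructor arguments may differ between onnxruntime-training releases, and `model_path` is just a placeholder for the model file saved earlier.

```python
# Untested sketch of the steps above. Assumes the onnxruntime-training wheel
# is installed; TrainingSession constructor arguments may differ between
# onnxruntime-training releases.
import onnxruntime

model_path = "gradient_of_add.onnx"  # placeholder for the model saved earlier

sess_options = onnxruntime.SessionOptions()
session = onnxruntime.TrainingSession(
    model_path,
    sess_options,
    providers=["CPUExecutionProvider"],
)
```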