[Mobile] iOS - ZipMap output cannot be read
Describe the issue
I use the Objective-C bindings for onnxruntime in my iOS app. I load my ML classifier (an ONNX model), supply it inputs, and get valid responses.
The code looks like this:

    // Create ORTValue for input data
    let inputTensor = try createORTValueFromEmbeddings(inputData)

    // Step 2: Prepare input and run session
    let inputs: [String: ORTValue] = [
        "float_input": inputTensor,
    ]
    let outputs = try ortSession.run(
        withInputs: inputs,
        outputNames: ["output_probability", "output_label"],
        runOptions: nil
    )
    let endTime = DispatchTime.now()
    print("ORT session run time: \(Float(endTime.uptimeNanoseconds - startTime.uptimeNanoseconds) / 1.0e6) ms")

    guard let _ = outputs["output_probability"], let labels = outputs["output_label"] else {
        throw CloneInferenceError.Error("Failed to get model output.")
    }

    let labelsData = try labels.tensorData() as Data
    let labelValue = labelsData.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) -> Int64 in
        let int64Buffer = buffer.bindMemory(to: Int64.self)
        return int64Buffer[0]
    }

    if labelValue == 0 {
        return "Audio is real"
    } else {
        return "Audio is cloned"
    }
I am able to process/read "output_label" (a 0 or 1 value) just fine as an Int64. However, I cannot read "output_probability", which is a float, using similar steps. Note that it is produced by a ZipMap, while "output_label" isn't. Any suggestions?
To reproduce
You can use this onnx model: https://yella.co.in/cvd-samples/classifier.onnx
And use the objC onnxruntime sample to load it & evaluate it to get the outputs (like above).
Urgency
It's a showstopper for me because without output probabilities the model is essentially useless.
Platform
iOS
OS Version
17.6.1
ONNX Runtime Installation
Released Package
Compiler Version (if 'Built from Source')
No response
Package Name (if 'Released Package')
onnxruntime-mobile
ONNX Runtime Version or Commit ID
1.19.0
ONNX Runtime API
Objective-C/Swift
Architecture
ARM64
Execution Provider
Default CPU
Execution Provider Library Version
No response
I was not able to access the model from the provided link. I believe the ZipMap op outputs a map/dictionary, not a float.
Do you need the ZipMap operator in the model? It may be simpler to return the output from the LinearClassifier directly.
I was not able to access the model from the provided link. I believe the ZipMap op outputs a map/dictionary, not a float.
You can try "wget https://yella.co.in/cvd-samples/classifier.onnx" - it worked for me.
I believe the ZipMap op outputs a map/dictionary, not a float
The Objective-C API doesn't support non-tensor ORTValue types at the moment and ZipMap does not output a tensor. A workaround for now is to avoid using non-tensor types in your model or to use the C/C++ API. For the former, directly using the tensor output of LinearClassifier should work.
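To illustrate the distinction (plain Python, hypothetical values - no ONNX required): ZipMap yields a sequence of label-to-probability maps, one per input row, while the LinearClassifier's raw probability output is a flat float tensor that tensor-only APIs can read directly.

```python
# What ZipMap produces: a sequence of maps, one per input row,
# mapping each class label to its probability. Not a tensor.
zipmap_style = [{0: 0.91, 1: 0.09}]

# What LinearClassifier produces before ZipMap: a flat float tensor
# (here, shape [1, 2]) that tensor-only bindings can hand back as bytes.
tensor_style = [[0.91, 0.09]]

# Recovering the tensor layout from the map form, assuming integer
# class labels 0..n-1 (a sketch, not an ONNX Runtime API):
def maps_to_tensor(rows):
    return [[row[label] for label in sorted(row)] for row in rows]
```

Either way, the map form is what the Objective-C bindings cannot currently represent as an ORTValue.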
Thanks for clarifying (my tests back these conclusions). Regarding avoiding ZipMaps: from my limited understanding, this is what's causing ONNX to emit ZipMaps:
during prediction:

    self.y_pred = self.model.predict(self.X_test)

during mlflow logging:

    mlflow.sklearn.log_model(
        model.model,
        "model_" + run.info.run_name,
        signature=signature
    )
ZipMaps appear "natural" when logging scikit-learn classifiers (map "0" to some probability, "1" to another, etc.). If you know of some mechanism to, say, force log_model to output an array instead, please let me know. I will update this thread with my findings.
One workaround that worked for me was:

- Modify the current ONNX graph to output a tensor of floats directly:

    import onnx
    from onnx import helper, shape_inference

    def convert_output_to_float_tensor(onnx_model_path, new_model_path):
        # Load the original ONNX model
        model = onnx.load(onnx_model_path)
        graph = model.graph

        # Find the ZipMap node(s), if any
        zipmap_nodes = []
        zipmap_input = None
        zipmap_output = None
        for node in graph.node:
            if node.op_type == "ZipMap":
                zipmap_nodes.append(node)
                zipmap_input = node.input[0]    # classifier output connected to ZipMap input
                zipmap_output = node.output[0]  # ZipMap output

        # Remove all ZipMap nodes from the graph
        if zipmap_nodes:
            for zipmap_node in zipmap_nodes:
                graph.node.remove(zipmap_node)
                print(f"Removed ZipMap node: {zipmap_node.name}")

            # Reconnect the classifier's output to the model's final output.
            # Iterate over a copy so we can safely mutate graph.output.
            for output in list(graph.output):
                if output.name == zipmap_output:
                    # Create a new output tensor for the probabilities
                    new_output = helper.make_tensor_value_info(
                        "output_probability", onnx.TensorProto.FLOAT, [2]  # assuming 2 classes
                    )
                    graph.output.remove(output)      # remove the old (map-typed) output
                    graph.output.append(new_output)  # add the new tensor output

            # Find the node that fed the ZipMap and rewire it to the new output
            for node in graph.node:
                if zipmap_input in node.output:
                    node.output.remove(zipmap_input)
                    node.output.append("output_probability")
                    print("Connected output_probability to the linear classifier output.")

        # Infer tensor shapes and save the modified model
        inferred_model = shape_inference.infer_shapes(model)
        onnx.save(inferred_model, new_model_path)
        print(f"Saved modified model with float32[2] output as 'output_probability' to {new_model_path}")

    # Define the paths for the original and new models
    original_model_path = "input.onnx"
    new_model_path = "input_no_zipmap.onnx"

    # Run the function to modify the model and convert the output type
    convert_output_to_float_tensor(original_model_path, new_model_path)
- Handle the tensor of floats in the Swift code appropriately:

    func evaluate(inputData: [Double]) -> Result<String, Error> {
        return Result<String, Error> { () -> String in
            let startTime = DispatchTime.now()

            // Step 1: Create ORTValue for input data
            let inputTensor = try createORTValueFromEmbeddings(inputData)

            // Step 2: Prepare input and run session
            let inputs: [String: ORTValue] = [
                "float_input": inputTensor,
            ]
            let outputs = try ortSession.run(
                withInputs: inputs,
                outputNames: ["output_probability", "output_label"],
                runOptions: nil
            )
            let endTime = DispatchTime.now()
            print("ORT session run time: \(Float(endTime.uptimeNanoseconds - startTime.uptimeNanoseconds) / 1.0e6) ms")

            guard let outputProbabilities = outputs["output_probability"],
                  let labels = outputs["output_label"] else {
                throw CloneInferenceError.Error("Failed to get model output.")
            }

            // Step 3: Extract label value
            let labelsData = try labels.tensorData() as Data
            let labelValue = labelsData.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) -> Int64 in
                let int64Buffer = buffer.bindMemory(to: Int64.self)
                return int64Buffer[0]
            }

            // Step 4: Extract probabilities
            let probabilitiesData = try outputProbabilities.tensorData() as Data
            let probabilities = probabilitiesData.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) -> [Float] in
                let floatBuffer = buffer.bindMemory(to: Float.self)
                return Array(floatBuffer.prefix(2)) // Assuming two classes
            }

            // Print out probabilities
            print("Probabilities: \(probabilities)")

            // Step 5: Return the result based on the label value
            if labelValue == 0 {
                return "Success"
            } else {
                return "Failure"
            }
        }
    }
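As a cross-check of the byte layout the Swift code assumes, here is the same decode sketched in Python with struct (values are hypothetical): the label tensor is a single little-endian int64, and the probability tensor is two little-endian float32 values.

```python
import struct

# Hypothetical raw tensor bytes, standing in for what labels.tensorData()
# and outputProbabilities.tensorData() return for this two-class model.
label_bytes = struct.pack("<q", 1)          # one int64, little-endian
prob_bytes = struct.pack("<2f", 0.12, 0.88) # two float32s, little-endian

# The equivalent of the Swift withUnsafeBytes / bindMemory decodes:
(label_value,) = struct.unpack("<q", label_bytes)
probabilities = list(struct.unpack("<2f", prob_bytes))
```

If the decoded probabilities look like garbage in a real run, the output is most likely not a float32 tensor at all (e.g. it is still the ZipMap's map output).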
There is an option to avoid the ZipMap during sklearn export to ONNX.
https://onnx.ai/sklearn-onnx/auto_examples/plot_convert_zipmap.html