`coremltools` docs - Xcode Model Preview Type for Object Detection?
❓Is there an Xcode Model Preview Type for Object Detection?
Hi there, I'm going through the coremltools docs for exporting PyTorch models.
I've found that there's an Xcode Model Preview Type for Segmentation (see below).
But I'm wondering if there's a similar setup for object detection?
The docs I'm referring to are here: https://apple.github.io/coremltools/docs-guides/source/xcode-model-preview-types.html
Sample code for image segmentation model (from the docs):
import json
import coremltools as ct

# load the model
mlmodel = ct.models.MLModel("SegmentationModel_no_metadata.mlpackage")
labels_json = {"labels": ["background", "aeroplane", "bicycle", "bird", "board", "bottle", "bus", "car", "cat", "chair", "cow", "diningTable", "dog", "horse", "motorbike", "person", "pottedPlant", "sheep", "sofa", "train", "tvOrMonitor"]}
# set the preview type and class labels so Xcode can render the preview
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "imageSegmenter"
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.params"] = json.dumps(labels_json)
mlmodel.save("SegmentationModel_with_metadata.mlpackage")
Is there a "com.apple.coreml.model.preview.type" such as this but for "imageDetector" or "objectDetector"?
Update: I found a similar post on the Apple Developer forums but it is unanswered: https://forums.developer.apple.com/forums/thread/718154
I'll continue to update this post if I find out more.
@DawerG @aseemw are you aware of any?
If you look at the metadata of detection models that support preview, they have "com.apple.coreml.model.preview.type": "objectDetector" set:
https://github.com/woolfel/ml-macos-performance/blob/7238fffeebae2c19aff7debe001c47f994a1fb39/Models/ChickenDetector.mlpackage/Data/com.apple.CoreML/Metadata.json#L7
and two outputs, coordinates and confidence:
https://github.com/woolfel/ml-macos-performance/blob/7238fffeebae2c19aff7debe001c47f994a1fb39/Models/ChickenDetector.mlpackage/Data/com.apple.CoreML/FeatureDescriptions.json
The coordinates are normalized and in xywh format.
That's probably what you need to make it work. It doesn't seem to be documented anywhere.
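For reference, here's a minimal, untested sketch of tagging an already-exported detection model the same way as the segmentation snippet above. The file names are placeholders, and it assumes the model already exposes the coordinates (normalized xywh) and confidence outputs described above:

import coremltools as ct

# load a detection model that already has "coordinates" and "confidence" outputs
mlmodel = ct.models.MLModel("DetectionModel_no_metadata.mlpackage")
# tag it so Xcode shows the object detection preview
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "objectDetector"
mlmodel.save("DetectionModel_with_metadata.mlpackage")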
Thank you! @Y-T-G