
Yolov7 export to CoreML crashing

Mercury-ML opened this issue 2 years ago • 4 comments

Yolov7 trained with 1280 x 1280 images (pretrained weights / model used: yolov7-d6), trained with 5 custom labels. Inference runs fine; I added --img-size 1280 to the detect command: !python detect.py --weights my_custom_weights.pt --conf 0.25 --img-size 1280 --source inference/images/my_image.jpg

I updated classLabels[] to my 5 labels (removing the rest) and added --img-size 1280 to the export command, i.e. !python export.py --img-size 1280 --weights my_custom_weights.pt

but on the last cell, the output is:

IndexError                                Traceback (most recent call last)
in ()
      1 # run the functions to add decode layer and NMS to the model.
----> 2 addExportLayerToCoreml(builder)
      3 nmsSpec = createNmsModelSpec(builder.spec)
      4 combineModelsAndExport(builder.spec, nmsSpec, f"big_data_best.mlmodel")  # The model will be saved in this path.

in addExportLayerToCoreml(builder)
     45         f"{outputName}_multiplied_xy_by_two"], output_name=f"{outputName}_subtracted_0_5_from_xy", mode="ADD", alpha=-0.5)
     46     grid = make_grid(
---> 47         featureMapDimensions[i], featureMapDimensions[i]).numpy()
     48     # x,y * 2 - 0.5 + grid[i]
     49     builder.add_bias(name=f"add_grid_from_xy_{outputName}", input_name=f"{outputName}_subtracted_0_5_from_xy",

IndexError: list index out of range
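
If I read the traceback right, the failing line indexes featureMapDimensions once per model output head, and yolov7-d6 has four detection heads while the notebook's default strides list only has three entries. A minimal sketch of that suspicion (assuming the notebook adds one export layer per output head):

    # My guess at the mismatch: yolov7-d6 exposes four detection heads, but the
    # default strides list only covers three, so the fourth head overruns the list.
    strides = [8, 16, 32]                                # default 3-head setup
    featureMapDimensions = [640 // s for s in strides]   # -> [80, 40, 20]

    for i in range(4):                                   # d6 has 4 output heads
        print(featureMapDimensions[i])                   # IndexError at i == 3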


I tried changing cell 8:

featureMapDimensions = [640 // stride for stride in strides]

to

featureMapDimensions = [1280 // stride for stride in strides]

as well as changing:

builder.add_scale(name=f"normalize_coordinates_{outputName}", input_name=f"{outputName}_raw_coordinates", output_name=f"{outputName}_raw_normalized_coordinates", W=torch.tensor([1 / 640]).numpy(), b=0, has_bias=False)

to

builder.add_scale(name=f"normalize_coordinates_{outputName}", input_name=f"{outputName}_raw_coordinates", output_name=f"{outputName}_raw_normalized_coordinates", W=torch.tensor([1 / 1280]).numpy(), b=0, has_bias=False)

Neither attempt worked.
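
My current guess, for what it's worth: both changes probably need to go together, plus a fourth stride for the extra P6/64 head of the d6-style models. A sketch, assuming the notebook derives its per-head settings from the strides list:

    # Sketch only: a 1280 input AND a fourth stride for the P6/64 head,
    # not just one of the two changes.
    imgSize = 1280
    strides = [8, 16, 32, 64]                                # P3, P4, P5, P6
    featureMapDimensions = [imgSize // s for s in strides]   # [160, 80, 40, 20]

    # ...and normalize the raw coordinates by the new input size in add_scale:
    # W=torch.tensor([1 / imgSize]).numpy()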

Also, the YOLOv7 export.py CoreML export was recently updated. I have previously exported a 1280 x 1280 custom-trained Yolov5 model to Core ML using this repo.

Any thoughts or ideas would be greatly appreciated!!

Mercury-ML · Aug 02 '22 21:08

Having the same problem with Yolov5 as well.

Qwin · Sep 22 '22 22:09

I made a converter that takes some of the variables, such as image size and label names, as arguments.

https://github.com/junmcenroe/YOLOv7-CoreML-Converter.git
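
For anyone reading along, a purely hypothetical sketch of that kind of argument handling (this is not the actual CLI of the linked repo, just an illustration of passing image size and labels instead of hard-coding them):

    # Hypothetical sketch only -- not the actual interface of YOLOv7-CoreML-Converter.
    import argparse

    parser = argparse.ArgumentParser(description="Convert a YOLOv7 .pt model to Core ML")
    parser.add_argument("--weights", required=True, help="path to trained .pt weights")
    parser.add_argument("--img-size", type=int, default=640, help="image size used for training/export")
    parser.add_argument("--labels", nargs="+", required=True, help="class label names, in training order")
    args = parser.parse_args()

    strides = [8, 16, 32]                                        # 3-head models
    featureMapDimensions = [args.img_size // s for s in strides]
    classLabels = args.labels
    # ...the rest of the export / decode-layer / NMS pipeline would use these values.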

junmcenroe · Oct 15 '22 11:10

Hey @junmcenroe, can I assist you in converting another model in the same domain?

roimulia2 · Nov 09 '22 21:11

Hi @roimulia2

I'm not sure I follow your comment correctly, but if you have time to assist in converting another model in the same domain, no problem.

Is your intention to build converters for the other models (yolov7-d6/e6/e6e/tiny/w6)? Currently my converter with image size and label name arguments only covers yolov7 and yolov7x. I found that the other models have different outputs: not three arrays but four, so it needs to be modified.

<yolov7, yolov7x>

  • [12,16, 19,36, 40,28] # P3/8
  • [36,75, 76,55, 72,146] # P4/16
  • [142,110, 192,243, 459,401] # P5/32

<yolov7-w6, yolov7-e6e, yolov7-d6>

  • [ 19,27, 44,40, 38,94 ] # P3/8
  • [ 96,68, 86,152, 180,137 ] # P4/16
  • [ 140,301, 303,264, 238,542 ] # P5/32
  • [ 436,615, 739,380, 925,792 ] # P6/64

For yolov7-tiny, just replacing the anchor numbers should work. For yolov7-w6, yolov7-e6e, and yolov7-d6, I thought I would need to replace the anchor numbers in YOLOv5P6CoreMLConverter.py, which handles the 4-output model, but when I finally checked, the anchor numbers are the same, so it can be used as is. I posted the converter for w6/e6e/d6 as YOLOv7wedCoreMLConverter.py.
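
For reference, a rough sketch of the two configurations described above; the anchor values are copied from the lists in this comment, while the dict layout is only illustrative:

    # Anchor values taken from the lists above; structure is illustrative only.
    MODEL_CONFIGS = {
        # 3-output-array models
        "yolov7": {
            "strides": [8, 16, 32],
            "anchors": [
                [12, 16, 19, 36, 40, 28],        # P3/8
                [36, 75, 76, 55, 72, 146],       # P4/16
                [142, 110, 192, 243, 459, 401],  # P5/32
            ],
        },
        # 4-output-array models (yolov7-w6, yolov7-e6e, yolov7-d6)
        "yolov7-d6": {
            "strides": [8, 16, 32, 64],
            "anchors": [
                [19, 27, 44, 40, 38, 94],        # P3/8
                [96, 68, 86, 152, 180, 137],     # P4/16
                [140, 301, 303, 264, 238, 542],  # P5/32
                [436, 615, 739, 380, 925, 792],  # P6/64
            ],
        },
    }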

https://github.com/junmcenroe/YOLOv7-CoreML-Converter.git

junmcenroe · Nov 10 '22 05:11