keras-yolo3
Converting a trained model doesn't work, while the pretrained model works..? (CoreML)
hey! thanks! I successfully trained my first model, but after that I am not able to convert it using:
import coremltools

coreml_model = coremltools.converters.keras.convert(
    'model_data/trained_weights_final.h5',
    input_names='input1',
    image_input_names='input1',
    output_names=['output1', 'output2', 'output3'],
    image_scale=1/255.)

coreml_model.input_description['input1'] = 'Input image'
coreml_model.output_description['output1'] = 'The 13x13 grid (Scale1)'
coreml_model.output_description['output2'] = 'The 26x26 grid (Scale2)'
coreml_model.output_description['output3'] = 'The 52x52 grid (Scale3)'

coreml_model.author = 'asdas'
coreml_model.license = 'BlaBla'
coreml_model.short_description = "erster YoloVersuch :D"

coreml_model.save('Yolov3.mlmodel')
I am able to convert the pretrained model, but after training that model the converter says:

python3 converth5ToCoreML.py
Traceback (most recent call last):
  File "converth5ToCoreML.py", line 3, in <module>
    coreml_model = coremltools.converters.keras.convert('/Users/robinsonhus0/Desktop/neuer/logs/000/trained_weights_final.h5', input_names='input1', image_input_names='input1', output_names=['output1', 'output2', 'output3'], image_scale=1/255.)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 793, in convert
    respect_trainable=respect_trainable)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 579, in convertToSpec
    respect_trainable=respect_trainable)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/coremltools/converters/keras/_keras2_converter.py", line 311, in _convert
    model = _keras.models.load_model(model, custom_objects = custom_objects)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/saving.py", line 221, in _deserialize_model
    model_config = f['model_config']
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/utils/io_utils.py", line 302, in __getitem__
    raise ValueError('Cannot create group in read only mode.')
ValueError: Cannot create group in read only mode.
any idea? thank you !
Did you finish it?
No, I used turicreate instead.
Thanks, friend. Can you tell me how to use turicreate to convert the .h5 to an .mlmodel?
Hey, friend! I know how to solve your problem: your .h5 model was saved with save_weights only, so there is no network definition in it. But I got this warning: "No training configuration found in save file." Anyway, I don't know whether it works.
I tried it both ways, using save_weights only and saving the whole model. And why am I losing the model config while training? I was able to convert models before training them.
You can use turicreate from the terminal if you need to specify the training mode. If you don't care about that, you can simply use Create ML, which is part of Apple's Xcode.
I have converted the .h5 to an .mlmodel. You are right: the final .h5 produced by this code is saved with save_weights. I did the following steps:
- I converted the final .h5 to darknet weights and cfg, referring to https://github.com/caimingxie/h5_to_weight_yolo3.
- I used this code's convert.py to convert the new weights and cfg to a new demo.h5.
- I converted the new demo.h5 successfully. Hope this helps you.
But bro, now that I have the yolov3.mlmodel: I am not proficient in other languages, so do you have some code to process the output of the mlmodel? I don't know how to write things such as NMS in another language.
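On the post-processing question: non-maximum suppression itself needs no framework and ports easily to any language. A minimal sketch in plain Python (the (x, y, w, h) box format and the default threshold are assumptions, not something the converter dictates):

```python
# Minimal NMS sketch: keep the highest-scoring box, drop boxes that overlap
# it by more than iou_threshold, repeat with the remainder.
# Boxes are (x, y, w, h); scores is a parallel list of confidences.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

# Example: the second box heavily overlaps the first and is suppressed.
print(nms([(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 10, 10)],
          [0.9, 0.8, 0.7]))  # -> [0, 2]
```

The same loop translates almost line for line into Swift for use after the model's raw outputs are decoded.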
Maybe try adding input_name_shape_dict={'image': [None, 416, 416, 3]}
to the parameters (use your model's width and height):
import coremltools
coreml_model = coremltools.converters.keras.convert(
'./yolo.h5',
input_names='input1',
image_input_names='input1',
output_names=['output1','output2','output3'],
input_name_shape_dict={'image': [None, 416, 416, 3]},
image_scale=1/255.)
coreml_model.input_description['input1'] = 'RGB image 416x416'
coreml_model.output_description['output1'] = 'The 13x13 grid (Scale1)'
coreml_model.output_description['output2'] = 'The 26x26 grid (Scale2)'
coreml_model.output_description['output3'] = 'The 52x52 grid (Scale3)'
coreml_model.author = 'test'
#coreml_model.license = 'Public Domain'
coreml_model.short_description = "5000 epochs"
coreml_model.save('Yolov3.mlmodel')
This generates an .mlmodel, but it does not have the same input and output parameters as the pretrained YOLOv3 model. Pretrained YOLOv3 model:
input {
name: "image"
shortDescription: "416x416 RGB image"
type {
imageType {
width: 416
height: 416
colorSpace: RGB
}
}
}
input {
name: "iouThreshold"
shortDescription: "This defines the radius of suppression."
type {
doubleType {
}
isOptional: true
}
}
input {
name: "confidenceThreshold"
shortDescription: "Remove bounding boxes below this threshold (confidences should be nonnegative)."
type {
doubleType {
}
isOptional: true
}
}
output {
name: "confidence"
shortDescription: "Confidence derived for each of the bounding boxes. "
type {
multiArrayType {
dataType: DOUBLE
shapeRange {
sizeRanges {
upperBound: -1
}
sizeRanges {
lowerBound: 80
upperBound: 80
}
}
}
}
}
output {
name: "coordinates"
shortDescription: "Normalised coordiantes (relative to the image size) for each of the bounding boxes (x,y,w,h). "
type {
multiArrayType {
dataType: DOUBLE
shapeRange {
sizeRanges {
upperBound: -1
}
sizeRanges {
lowerBound: 4
upperBound: 4
}
}
}
}
}
Custom YOLOv3 model:
input {
name: "input1"
shortDescription: "RGB image 416x416"
type {
imageType {
width: 416
height: 416
colorSpace: RGB
}
}
}
output {
name: "output1"
shortDescription: "The 13x13 grid (Scale1)"
type {
multiArrayType {
shape: 129
shape: 13
shape: 13
dataType: DOUBLE
}
}
}
output {
name: "output2"
shortDescription: "The 26x26 grid (Scale2)"
type {
multiArrayType {
shape: 129
shape: 26
shape: 26
dataType: DOUBLE
}
}
}
output {
name: "output3"
shortDescription: "The 52x52 grid (Scale3)"
type {
multiArrayType {
shape: 129
shape: 52
shape: 52
dataType: DOUBLE
}
}
}
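As an aside, the grid shapes above already encode the class counts. In YOLOv3 each output scale has anchors_per_scale * (num_classes + 5) channels per cell (4 box coordinates + 1 objectness + the class scores), and the repo default is 3 anchors per scale. A quick sanity check:

```python
# Channel arithmetic for YOLOv3 output grids (3 anchors per scale assumed,
# as in this repo's yolo_anchors.txt).
def channels(num_classes, anchors_per_scale=3):
    return anchors_per_scale * (num_classes + 5)

def classes_from_channels(num_channels, anchors_per_scale=3):
    return num_channels // anchors_per_scale - 5

print(channels(80))                # pretrained COCO model -> 255
print(classes_from_channels(129))  # the custom model above -> 38
```

So the 129-channel grids correspond to a 38-class model, while the pretrained COCO model would show 255 channels; a mismatch here is expected when the class lists differ.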
If I try to convert the model using the same input parameters as the pretrained model, I get this:
UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
Input name length mismatch
Output name length mismatch
on the console. The .mlmodel is created, but when trying to deploy it in Xcode I get an error on the VNCoreMLRequest:
func startObjectDetection(tgtImg: UIImage) {
    guard let model = try? VNCoreMLModel(for: Yolov3().model) else {
        print("failed to load model")
        return
    }
    let handler = VNImageRequestHandler(cgImage: tgtImg.cgImage!, options: [:])
    let request = createRequest(model: model)
    try? handler.perform([request])
}

func createRequest(model: VNCoreMLModel) -> VNCoreMLRequest {
    return VNCoreMLRequest(model: model, completionHandler: { (request, error) in
        DispatchQueue.main.async(execute: {
            guard let results = request.results as? [VNRecognizedObjectObservation] else {
                fatalError("Error results") // <----------- here
            }
            for result in results {
                print("\(result.confidence) : \(result.boundingBox)")
                let len = result.labels.count > 5 ? 5 : result.labels.count
                for i in 0..<len {
                    print("\(result.labels[i].identifier), ", terminator: "")
                }
            }
        })
    })
}
So I'm not an expert, but it looks like if we can convert the Keras model to an .mlmodel with the same input and output parameters, then it may work, i.e. passing the confidenceThreshold and iouThreshold to the model. But from what I understand of convert.py, the three outputs do not include the confidence and IoU, or do they? Is it possible to edit the outputs to match the pretrained .mlmodel?
Thank you!
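On that last question: the raw grids do contain the confidences, just not yet decoded. Per anchor, each cell holds (tx, ty, tw, th, objectness, class scores...), and the per-class confidence is sigmoid(objectness) * sigmoid(class score); the iouThreshold and confidenceThreshold inputs of the pretrained .mlmodel come from an NMS stage that the Turi Create export appends as a pipeline, not from the network itself. A decoding sketch for a single cell in plain Python (the grid size and anchor values below are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_cell(pred, cx, cy, grid, anchor_w, anchor_h, input_size=416):
    """Decode one anchor's prediction (tx, ty, tw, th, obj, class scores...)."""
    tx, ty, tw, th, obj = pred[:5]
    x = (sigmoid(tx) + cx) / grid             # normalised box centre x
    y = (sigmoid(ty) + cy) / grid             # normalised box centre y
    w = anchor_w * math.exp(tw) / input_size  # normalised box width
    h = anchor_h * math.exp(th) / input_size  # normalised box height
    scores = [sigmoid(obj) * sigmoid(c) for c in pred[5:]]  # per-class confidence
    return (x, y, w, h), scores

# Illustrative call: centre cell (6, 6) of the 13x13 grid, one anchor,
# a 2-class prediction where class 0 dominates.
box, scores = decode_cell([0.0, 0.0, 0.0, 0.0, 10.0, 10.0, -10.0],
                          6, 6, 13, 116, 90)
```

After decoding every cell of all three grids this way, the confidenceThreshold and iouThreshold filtering can be applied in your own NMS step, which is effectively what the appended pipeline stage of the pretrained model does.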