
Cannot read model; the error seems to come from the json library, although a JSON validator accepts the file

Open shayf opened this issue 2 years ago • 8 comments

Hello. I tried to use frugally-deep, but I cannot get past the load_model stage. I converted my Keras model (the conversion looks fine) and validated the output JSON (also seems fine), but loading it from C++ fails inside the json library. For some reason it expects a string type where, from the debugger, I can see the value is actually an array. No idea what I am missing here.

[screenshot: debugger showing the json type error]
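(One way to look at the converted file outside of the debugger is to parse it with Python's json module and dump whatever key looks suspicious; a rough sketch, with the script name, file name, and example key as placeholders:)

```python
# Rough sketch: parse the converted JSON and print every value stored under a
# given key, to see its structure outside of the C++ debugger.
import json
import sys

def find_key(node, key, path="$"):
    """Yield (path, value) for every occurrence of `key` anywhere in the JSON tree."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                yield f"{path}.{k}", v
            yield from find_key(v, key, f"{path}.{k}")
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from find_key(v, key, f"{path}[{i}]")

# Usage (placeholder names): python inspect_json.py fdeep_model.json input_layers
filename, key = sys.argv[1], sys.argv[2]
with open(filename) as f:
    doc = json.load(f)

for found_path, value in find_key(doc, key):
    print(found_path, "->", json.dumps(value))
```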

shayf avatar Jun 28 '22 12:06 shayf

Hi, and thanks for the report.

What version of TensorFlow are you using? What version of Keras? What version of nlohmann/json?
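If it helps, the Python-side versions can be printed like this (a quick sketch; the nlohmann/json version has to be checked on the C++ side, e.g. via vcpkg):

```python
# Report the Python-side versions in question.
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("Keras (tf.keras):", tf.keras.__version__)
```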

Can you upload your original model (.h5 file), so I can re-create the error and debug it?

Dobiasd avatar Jun 28 '22 12:06 Dobiasd

OK, manually adjusting the JSON seems to solve it. It looks like convert_model.py added an extra level of brackets. For example, it created something like:

    "config": {
      "input_layers": [
        [ [ "input_1", 0, 0 ] ],
        [ [ "input_2", 0, 0 ] ],

and I manually adjusted it to:

    "config": {
      "input_layers": [
        [ "input_1", 0, 0 ],
        [ "input_2", 0, 0 ],

Now load_model seems to work (no idea yet whether it will predict correctly; that is my next step).
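For reference, the same adjustment could also be done with a small script instead of by hand; a rough sketch (the file names are placeholders, and it assumes the only problem is the one extra bracket level around each input_layers entry):

```python
# Rough sketch: strip the extra nesting level from every "input_layers" entry
# of the converted file, instead of editing it by hand.
import json

def fix_input_layers(node):
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "input_layers" and isinstance(value, list):
                # Unwrap entries like [["input_1", 0, 0]] into ["input_1", 0, 0].
                node[key] = [
                    entry[0]
                    if isinstance(entry, list) and len(entry) == 1 and isinstance(entry[0], list)
                    else entry
                    for entry in value
                ]
            else:
                fix_input_layers(value)
    elif isinstance(node, list):
        for item in node:
            fix_input_layers(item)

with open("fdeep_model.json") as f:          # placeholder input name
    doc = json.load(f)

fix_input_layers(doc)

with open("fdeep_model_fixed.json", "w") as f:  # placeholder output name
    json.dump(doc, f)
```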

shayf avatar Jun 28 '22 13:06 shayf

Glad you solved it. :+1:

However, doing such manual adjustments does not seem like a long-term solution to me. And I'd like to understand why this problem occurred. :scientist:

If you upload your .h5 file, I can try to find the problem and see if I can fix it. Or maybe you could try updating your versions of TensorFlow and Keras to the latest ones and check whether convert_model.py then produces output that fdeep::load_model accepts.

Dobiasd avatar Jun 28 '22 13:06 Dobiasd

I am using the latest versions (at least according to vcpkg, and pip for Keras). I have attached my hdf5 file, which was converted via convert_model.py: model.zip

shayf avatar Jun 28 '22 13:06 shayf

Thanks! :+1:

Using the following Dockerfile:

https://gist.github.com/Dobiasd/71b6ae1b0036682654de9bfbc792be7e

I was able to reproduce the error:

terminate called after throwing an instance of 'nlohmann::detail::type_error'
  what():  [json.exception.type_error.302] type must be string, but is array

I'll look into it and get back to you here. :detective:

Dobiasd avatar Jun 28 '22 14:06 Dobiasd

load_model("Flat_2d-2-0_S-1998-2011_6B_ckpt-98-P-0.232-R-0.289-L-0.562-.hdf5").summary() shows the following output: https://gist.github.com/Dobiasd/fb9de69a974178b253da499f192538e8

So the first input layer is input_7. Maybe the model was cut out of some larger model or similar. So re-creating it freshly might help. :shrug:

Also, the converted JSON contains "keras_version": "2.7.0", so maybe it would also help to re-create the model with a newer version.
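(For reference, the Keras version that wrote the .h5 file can also be checked directly, since Keras stores it as a root attribute of the HDF5 file; a small sketch assuming h5py is installed:)

```python
# Check which Keras version and backend wrote the .h5 file,
# without loading the model itself.
import h5py

with h5py.File("Flat_2d-2-0_S-1998-2011_6B_ckpt-98-P-0.232-R-0.289-L-0.562-.hdf5", "r") as f:
    print("keras_version:", f.attrs.get("keras_version"))
    print("backend:", f.attrs.get("backend"))
```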

Can you provide a minimal example (Python code) to re-create a model with this input_layers problem?
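Not your original model, of course, but as a starting point, such a minimal reproduction could look roughly like the sketch below: a small two-input functional model, saved to .h5 and then run through convert_model.py (layer names and sizes are arbitrary):

```python
# Hedged sketch of a minimal two-input functional model for reproduction purposes.
from tensorflow import keras

input_1 = keras.Input(shape=(8,), name="input_1")
input_2 = keras.Input(shape=(8,), name="input_2")
x = keras.layers.Concatenate()([input_1, input_2])
x = keras.layers.Dense(4, activation="relu")(x)
output = keras.layers.Dense(1, activation="sigmoid")(x)

model = keras.Model(inputs=[input_1, input_2], outputs=output)
model.save("two_input_minimal.h5")  # .h5 extension selects the HDF5 format
```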

Dobiasd avatar Jun 28 '22 15:06 Dobiasd

I will try saving again with a newer version of Keras. Will update.
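Roughly like this (the output name is a placeholder; compile=False skips the optimizer state and any custom losses, which should not be needed for the conversion):

```python
# Rough sketch: load the existing .hdf5 with the updated Keras and save it again.
from tensorflow import keras

model = keras.models.load_model(
    "Flat_2d-2-0_S-1998-2011_6B_ckpt-98-P-0.232-R-0.289-L-0.562-.hdf5",
    compile=False,
)
model.save("model_resaved.h5")
```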

shayf avatar Jun 29 '22 14:06 shayf

How did it go?

Dobiasd avatar Jul 23 '22 05:07 Dobiasd

@shayf Any news?

Dobiasd avatar Aug 24 '22 11:08 Dobiasd