frugally-deep
Cannot read model: the error seems to come from the json library, although a json validator accepts the file
Hello. I tried to use frugally-deep, but I cannot get past the load_model stage. I converted my Keras model (the conversion looks fine) and validated the output json (also seems fine), but loading it from C++ code fails inside the json library. For some reason it expects a string type where, from debugging, I can see the value is an array. No idea what I am missing here.
Hi, and thanks for the report.
What version of TensorFlow are you using? What version of Keras? What version of nlohmann/json?
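(For reference, a small snippet like the following prints the TensorFlow/Keras versions; the `tf.keras.__version__` attribute assumes TensorFlow 2.x with bundled Keras. The nlohmann/json version would come from the C++ side, e.g. from vcpkg.)

```python
# Print the installed TensorFlow and Keras versions
# (assumes TensorFlow 2.x, where Keras ships bundled as tf.keras).
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("Keras:", tf.keras.__version__)
```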
Can you upload your original model (.h5 file), so I can re-create the error and debug it?
OK. Manually adjusting the json seems to solve it. It seems convert_model added extra brackets.
For example, it created something like

```json
"config": { "input_layers": [ [ [ "input_1", 0, 0 ] ], [ [ "input_2", 0, 0 ] ],
```

and I have manually adjusted it to

```json
"config": { "input_layers": [
    [
        "input_1",
        0,
        0
    ],
    [
        "input_2",
        0,
        0
    ],
```
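In case anyone hits the same issue: instead of editing by hand, a small script along these lines could apply the same fix automatically. This is only a sketch; the file names are hypothetical, and it assumes the extra bracket level only appears in `input_layers` entries, as in the snippet above.

```python
import json

def unwrap_input_layers(node):
    # Recursively walk the JSON tree and flatten entries like
    # [[["input_1", 0, 0]], [["input_2", 0, 0]]] (one bracket level too many)
    # into [["input_1", 0, 0], ["input_2", 0, 0]].
    if isinstance(node, dict):
        for key, value in node.items():
            if (key == "input_layers"
                    and isinstance(value, list)
                    and all(isinstance(v, list) and len(v) == 1
                            and isinstance(v[0], list) for v in value)):
                node[key] = [v[0] for v in value]
            else:
                unwrap_input_layers(value)
    elif isinstance(node, list):
        for item in node:
            unwrap_input_layers(item)

with open("fdeep_model.json") as f:  # hypothetical file name
    data = json.load(f)

unwrap_input_layers(data)

with open("fdeep_model_fixed.json", "w") as f:
    json.dump(data, f)
```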
Now load_model seems to work (no idea if it will predict correctly yet, that is my next step).
Glad you solved it. :+1:
However, doing such manual adjustments does not seem like a long-term solution to me. And I'd like to understand why this problem occurred. :scientist:
If you upload your .h5 file, I can try to find the problem and see if I can fix it. Or maybe you could try updating your versions of TensorFlow and Keras to the latest ones to check whether convert_model.py then produces valid output for fdeep::load_model.
I am using the latest versions (at least per vcpkg, and pip for Keras). I have attached my hdf5 file, which was converted via convert_model.py: model.zip
Thanks! :+1:
Using the following Dockerfile:
https://gist.github.com/Dobiasd/71b6ae1b0036682654de9bfbc792be7e
I was able to reproduce the error:
```
terminate called after throwing an instance of 'nlohmann::detail::type_error'
what(): [json.exception.type_error.302] type must be string, but is array
```
I'll look into it and get back to you here. :detective:
load_model("Flat_2d-2-0_S-1998-2011_6B_ckpt-98-P-0.232-R-0.289-L-0.562-.hdf5").summary()
shows the following output:
https://gist.github.com/Dobiasd/fb9de69a974178b253da499f192538e8
So the first input layer is `input_7`. Maybe the model was cut out of some larger model or similar. So re-creating it freshly might help. :shrug:
Also, when converting to json, there is `"keras_version": "2.7.0"`. So maybe it would also help to re-create the model with a newer version.
Can you provide a minimal example (Python code) to re-create a model with this `input_layers` problem?
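Something along these lines would probably be enough, just as a sketch (the layer sizes and file name here are made up, not your actual architecture): a two-input functional model, saved to .h5 and then run through convert_model.py to see how the `input_layers` entry gets serialized.

```python
# Minimal two-input functional model (illustrative only) to test how
# convert_model.py serializes the "input_layers" entry of the config.
import tensorflow as tf
from tensorflow.keras import layers

inp_1 = layers.Input(shape=(8,), name="input_1")
inp_2 = layers.Input(shape=(8,), name="input_2")
x = layers.Concatenate()([inp_1, inp_2])
out = layers.Dense(1)(x)

model = tf.keras.Model(inputs=[inp_1, inp_2], outputs=out)
model.save("two_input_model.h5")  # hypothetical file name; convert this with convert_model.py
```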
I will try saving again with a newer version of Keras. Will update.
How did it go?
@shayf Any news?