FSet89
Hi, actually I fixed it by pulling the container from Docker Hub (2nd option), thank you anyway.
It seems that the embeddings are very similar regardless of the input image. I will try increasing the dataset.
Just in case someone needs it, I solved it by replacing `images = features` with `images = features['input']` in `model_fn.py`.
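For context, the fix above amounts to handling both cases in `model_fn.py`: during training `features` is the image tensor itself, while the serving signature passes a dict keyed by `'input'`. A minimal sketch of that branching (the helper name `extract_images` is hypothetical, for illustration only):

```python
def extract_images(features):
    """Return the image tensor whether `features` is a dict or a bare tensor.

    The serving/export path wraps the tensor in a dict under the key
    'input'; the training input_fn passes the tensor directly.
    """
    if isinstance(features, dict):
        return features['input']
    return features
```

Inside `model_fn`, the first line would then become `images = extract_images(features)`, which works for both training and export.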
This is the code I use to export the model. Please note that you have to modify `model_fn.py` as mentioned above.

```
estimator = tf.estimator.Estimator(model_fn, params=params, model_dir=args.model_dir)
features = {'input':...
```
Thank you. Can you point out where in your code you add the images for the projector summary?
I noticed the same problem. A temporary solution is to set the batch size to 1 in `create_graph`, but the code needs to be changed in order to support a dynamic...
Hi, have you solved this problem?
```
model_path = 'iris_landmark.tflite'
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(output_details)
```

Output (note the **shape**):

```
[{'name': 'output_eyes_contours_and_brows', 'index': 384, 'shape': array([  1, 213], dtype=int32), 'shape_signature': array([  1, 213], dtype=int32), ...
```
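If it helps interpret that flat `(1, 213)` output: assuming it encodes a list of landmarks with three coordinates each (213 = 71 × 3, which matches the eye contours and brows naming), it can be viewed per-landmark with a simple reshape. A sketch with a stand-in array in place of the real interpreter output:

```python
import numpy as np

# Stand-in for interpreter.get_tensor(output_details[0]['index']),
# which has shape (1, 213) per the printed output_details above.
flat = np.zeros((1, 213), dtype=np.float32)

# Assumption: 213 values = 71 landmarks * (x, y, z).
landmarks = flat.reshape(-1, 3)
print(landmarks.shape)  # (71, 3)
```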
Hi @RyanCodes44, did you find an answer to this question? I am interested too. Thanks
@sarattha did you find a solution?