Output of pose detection is None
I want to use mediapipe/modules/pose_detection/pose_detection.tflite. I tried to inspect the names of the graph nodes, but I get a RuntimeWarning:

RuntimeWarning: Unexpected end-group tag: Not all data was converted
  graph_def.ParseFromString(f.read())
My code:

import tensorflow as tf
from tensorflow.python.platform import gfile

TFLITE_FILE_PATH = 'mediapipe/modules/pose_detection/pose_detection.tflite'

with gfile.FastGFile(TFLITE_FILE_PATH, 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())  # this call raises the RuntimeWarning above
    tf.import_graph_def(graph_def, name='')
    graph_nodes = [n for n in graph_def.node]
    names = []
    print(len(graph_nodes))  # prints 0
I also tried to get the output from the layer Identity:0, but got an error that there is no such layer. Thanks.
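A note on the warning itself: a .tflite file is a TensorFlow Lite FlatBuffer, not a serialized GraphDef protobuf, so graph_def.ParseFromString() cannot decode it and graph_def.node stays empty. As a minimal sketch (not the MediaPipe-recommended path), the model's tensors can be listed with the TFLite interpreter instead; the path below is the one from the question and may need adjusting to your checkout:

import tensorflow as tf

TFLITE_FILE_PATH = 'mediapipe/modules/pose_detection/pose_detection.tflite'

# Load the FlatBuffer with the TFLite interpreter rather than as a GraphDef.
interpreter = tf.lite.Interpreter(model_path=TFLITE_FILE_PATH)
interpreter.allocate_tensors()

# Every tensor in the model: index, name, shape, dtype.
for detail in interpreter.get_tensor_details():
    print(detail['index'], detail['name'], detail['shape'], detail['dtype'])

# The model's declared inputs and outputs.
print(interpreter.get_input_details())
print(interpreter.get_output_details())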
Hi @watermellon2018, could you provide complete details about the use case and standalone code to reproduce the issue? Thank you!
@kuaashish
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

def detection(input_data):
    TFLITE_FILE_PATH = 'modules/pose_detection/pose_detection.tflite'  # some path
    with gfile.FastGFile(TFLITE_FILE_PATH, 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        graph_nodes = [n for n in graph_def.node]
        names = []
        print(len(graph_nodes))
        for t in graph_nodes:
            print(t.name)

path_img = 'frame.png'
img = cv2.imread(path_img)
img, _ = image_to_tensor(img)  # image_to_tensor is defined in the follow-up comment below
img = img[:, :, ::-1]          # BGR -> RGB
img = img[np.newaxis, :]       # add a batch dimension
img = img.astype(np.float32)
print(img.shape)
detection(img)
Hi @watermellon2018, could you please provide standalone code to reproduce the issue, as the shared code is not reproducible? Please find the gist for reference. Thank you!
@kuaashish
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

def detection(input_data):
    TFLITE_FILE_PATH = 'modules/pose_detection/pose_detection.tflite'  # some path
    with gfile.FastGFile(TFLITE_FILE_PATH, 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        graph_nodes = [n for n in graph_def.node]
        names = []
        print(len(graph_nodes))
        for t in graph_nodes:
            print(t.name)

def image_to_tensor(img, size=(224, 224)):
    img = cv2.resize(img, size)
    return img

path_img = 'frame.png'
img = cv2.imread(path_img)
img = image_to_tensor(img)  # image_to_tensor returns only the resized image
img = img[:, :, ::-1]       # BGR -> RGB
img = img[np.newaxis, :]    # add a batch dimension
img = img.astype(np.float32)
print(img.shape)
detection(img)
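One way to sanity-check the preprocessing, rather than hard-coding the 224x224 size, is to read the expected input shape and dtype from the interpreter itself. A small sketch, assuming the same model path and a local frame.png as in the comment above; note that the value range the model expects (e.g. [0, 1] or [-1, 1]) is not stored in the .tflite file and has to be taken from the MediaPipe graph that feeds it:

import cv2
import numpy as np
import tensorflow as tf

TFLITE_FILE_PATH = 'modules/pose_detection/pose_detection.tflite'

interpreter = tf.lite.Interpreter(model_path=TFLITE_FILE_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# Typically shape [1, H, W, 3] and dtype float32 -- read it instead of guessing.
_, height, width, _ = input_details[0]['shape']
print('expected input:', input_details[0]['shape'], input_details[0]['dtype'])

img = cv2.imread('frame.png')
img = cv2.resize(img, (int(width), int(height)))
img = img[:, :, ::-1]                          # BGR -> RGB
img = img[np.newaxis, ...].astype(np.float32)  # add a batch dimension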
I solved it this way:

def detection_model(input_data):
    # TFLITE_FILE_PATH points at modules/pose_detection/pose_detection.tflite as above
    interpreter = tf.lite.Interpreter(model_path=TFLITE_FILE_PATH)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    detection = interpreter.get_tensor(output_details[0]['index'])
    score = interpreter.get_tensor(output_details[1]['index'])
    return detection, score  # shapes (1, 2254, 12) and (1, 2254, 1)
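For completeness, a usage sketch that reuses detection_model and an img prepared as in the earlier comments. Judging by the shapes, output_details[0] appears to hold 12 raw values per anchor (box offsets plus keypoints) and output_details[1] one raw score per anchor; the actual decoding (anchor configuration, sigmoid on the scores, non-max suppression) is done by MediaPipe's own calculators, so treat this only as a quick inspection:

import numpy as np

detections, scores = detection_model(img)  # img prepared as in the comments above
print(detections.shape, scores.shape)      # (1, 2254, 12), (1, 2254, 1)

# Anchor with the highest raw score -- raw, undecoded model output.
best = int(np.argmax(scores[0, :, 0]))
print('best anchor:', best, 'raw score:', scores[0, best, 0])
print('raw values for that anchor:', detections[0, best])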
Hi @watermellon2018, good to hear that the issue has been resolved. Could we move this issue to closed status, as it has been resolved and hopefully does not require further support? Thank you!
Hi @watermellon2018, Could you please respond to the above comment. Thank you!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.