squeezeDet

Problem converting to TFLite

Open thefpgaguy opened this issue 6 years ago • 3 comments

Hi @BichenWuUCB

I basically retrained the model as is and got good results during validation. Thanks for sharing the work.

I then tried to freeze the graph and convert it to TFLite, but encountered an error.

Any pointers or hints on where I should look would be greatly appreciated.

To freeze the graph, I used the following command, and it completed. I got a warning, but I believe it is OK.

python ./tensorflow/tensorflow/python/tools/freeze_graph.py \
  --input_meta_graph=./tmp/logs/squeezedet/train/model.ckpt-302000.meta \
  --input_checkpoint=./tmp/logs/squeezedet/train/model.ckpt-302000 \
  --output_graph=freezed.pb --output_node_names="total_loss" --input_binary=True

WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/queue_runner_impl.py:391: init (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the tf.data module.

When I then tried to convert to TFLite, I got the following error.

tflite_convert --graph_def_file freezed.pb --output_file freezed.tflite --input_arrays conv1 --output_arrays conv12 --output_format TFLITE

2019-02-13 06:49:16.937933: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-13 06:49:17.051949: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-13 06:49:17.052505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085 pciBusID: 0000:01:00.0 totalMemory: 5.94GiB freeMemory: 5.86GiB
2019-02-13 06:49:17.052573: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-13 06:49:17.330138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-13 06:49:17.330195: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-02-13 06:49:17.330208: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-02-13 06:49:17.330450: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5641 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "/usr/local/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
    _convert_model(tflite_flags)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 100, in _convert_model
    converter = _get_toco_converter(flags)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 87, in _get_toco_converter
    return converter_fn(**converter_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/lite.py", line 272, in from_frozen_graph
    _import_graph_def(graph_def, name="")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 422, in import_graph_def
    raise ValueError(str(e))
ValueError: Input 0 of node IOU_1/Assign was passed float from iou:0 incompatible with expected float_ref.

thefpgaguy avatar Feb 13 '19 06:02 thefpgaguy
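The float_ref error typically means the frozen GraphDef still contains a variable Assign op from the training/eval part of the graph (here the IOU accumulator), while the variable itself has been folded into a constant; freezing with --output_node_names="total_loss" pulls that training graph in. A common workaround reported for this model is to freeze only the detection subgraph. Below is a rough sketch, not the author's script, assuming the detection output node names used later in this thread; verify them against your own graph before relying on them.

# Sketch: freeze only the subgraph needed for detection, so training/eval-only
# ops such as IOU_1/Assign are never pulled into the .pb. Paths are the ones
# from the freeze_graph command above; output node names are assumed from the
# later comment in this thread.
import tensorflow as tf

meta_path = './tmp/logs/squeezedet/train/model.ckpt-302000.meta'
ckpt_path = './tmp/logs/squeezedet/train/model.ckpt-302000'
output_nodes = ['bbox/trimming/bbox', 'probability/class_idx', 'probability/score']

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(meta_path, clear_devices=True)
    saver.restore(sess, ckpt_path)
    # convert_variables_to_constants extracts only the subgraph feeding
    # output_nodes and folds the restored variables into constants.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_nodes)
    tf.train.write_graph(frozen_graph_def, '.', 'freezed.pb', as_text=False)

If the training graph feeds images through an input queue rather than a plain placeholder, it may be cleaner to rebuild the network in inference mode (as the repository's demo script does), restore the checkpoint into that graph, and freeze that instead.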

Hi. Did you solve your problem? I have this issue too.

kiad4631 avatar Sep 07 '19 06:09 kiad4631

Hi @BichenWuUCB. Can you explain how to export a correct frozen graph from the pretrained SqueezeDet checkpoints and then convert it to a TFLite model?

kiad4631 avatar Sep 07 '19 12:09 kiad4631

Hi guys. I am trying to convert the graph produced by freeze_graph.py to a TFLite model. My command for converting the squeezeDet checkpoint to a frozen graph:

python3 freeze_graph.py --input_meta_graph=/home/davari/detection_on_mobile/squeezeDet/model.ckpt-95000.meta --input_checkpoint=/home/davari/detection_on_mobile/squeezeDet/model.ckpt-95000 --output_node_names="bbox/trimming/bbox,probability/class_idx,probability/score" --input_binary=True --output_graph=/home/davari/detection_on_mobile/squeezeDet/model-95000.pb

And my command for converting to TFLite is:

tflite_convert --graph_def_file=/home/davari/detection_on_mobile/squeezeDet/graphs/model-87000.pb --output_file=/home/davari/detection_on_mobile/squeezeDet/graphs/model-87000.tflite --input_arrays=image_input --input_shapes=20,375,1242,3 --output_arrays=interpret_output/bbox_delta

I am sure the input shape is correct, because the documentation says the input image size for squeezeDet is 1242x375, the batch size is 20, and each image has 3 channels. But when I run this command I get this error:

Exception: Placeholder keep_prob should be specied by input_arrays.

And when I replace image_input with keep_prob, I get this error:

Check failed: dim_x == dim_y (375 vs. 22) Dimensions must match
Fatal Python error: Aborted
Aborted (core dumped)

Can anyone help me with this? It is very important for me.

kiad4631 avatar Sep 12 '19 06:09 kiad4631
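Both errors are consistent with the frozen graph exposing two placeholders, image_input and keep_prob: the converter requires every placeholder to be listed in input_arrays, and when only keep_prob is listed, the 20,375,1242,3 shape gets applied to the wrong tensor, hence the dimension check failure. Below is a hedged sketch using the Python converter API instead of the CLI, assuming a TF 1.13-style tf.lite.TFLiteConverter (older 1.x versions expose it as tf.contrib.lite.TFLiteConverter) and the node names from the commands in this thread.

# Sketch, not a verified recipe: list both placeholders and give each its own
# shape. Batch size 1 is assumed here since TFLite models usually run one image
# at a time; keep_prob is SqueezeDet's dropout placeholder, pinned to a scalar.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='model-87000.pb',
    input_arrays=['image_input', 'keep_prob'],
    output_arrays=['bbox/trimming/bbox',
                   'probability/class_idx',
                   'probability/score'],
    input_shapes={'image_input': [1, 375, 1242, 3],
                  'keep_prob': []})
tflite_model = converter.convert()
with open('model-87000.tflite', 'wb') as f:
    f.write(tflite_model)

If the converter still rejects the scalar keep_prob input, a common workaround is to rebuild the graph for inference with dropout disabled (keep_prob folded to a constant 1.0) before freezing, so that image_input is the only remaining placeholder. Some of the post-processing ops in SqueezeDet's interpretation graph may also be unsupported by TFLite, in which case converting up to an intermediate node such as interpret_output/bbox_delta and doing the rest outside the model is another option.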