Feliphe Gonçalves Galiza
I read this test case, but I really didn't understand how to decode even the most common data types such as booleans, integers, Strings, etc. I am having a...
Actually, I am using toString() on a DataItem. Look: `for (DataItem dataItem : dataItems) { // process data item Log.d(TAG, "Decoded CBOR: " + dataItem.toString()); }` Maybe a proper toString()...
Actually, the method dataItem.getMajorType() is returning MAP; how can I decode a MAP type? It seems I have a MAP with a SIMPLE_VALUE inside.
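For context, the major-type dispatch that a CBOR library like cbor-java performs under the hood can be sketched in stdlib-only Python. This is a toy decoder for understanding the wire format (the function names are mine, not part of any library); it handles the common cases asked about above: integers, text strings, booleans/simple values, and maps.

```python
# Minimal, illustrative CBOR decoder for a few common major types.
# A sketch for learning the format, not a replacement for a real library.

def _read_uint(data, pos, info):
    """Return (value, new_pos) for the additional-info field."""
    if info < 24:                      # value embedded in the initial byte
        return info, pos
    n = 1 << (info - 24)               # 24 -> 1, 25 -> 2, 26 -> 4, 27 -> 8 bytes
    return int.from_bytes(data[pos:pos + n], "big"), pos + n

def decode(data, pos=0):
    """Decode one CBOR data item starting at pos; return (value, new_pos)."""
    initial = data[pos]
    major, info = initial >> 5, initial & 0x1F
    pos += 1
    if major == 0:                     # unsigned integer
        return _read_uint(data, pos, info)
    if major == 1:                     # negative integer, encoded as -1 - n
        n, pos = _read_uint(data, pos, info)
        return -1 - n, pos
    if major == 3:                     # text string (UTF-8)
        length, pos = _read_uint(data, pos, info)
        return data[pos:pos + length].decode("utf-8"), pos + length
    if major == 5:                     # map: `length` key/value item pairs
        length, pos = _read_uint(data, pos, info)
        result = {}
        for _ in range(length):
            key, pos = decode(data, pos)
            value, pos = decode(data, pos)
            result[key] = value
        return result, pos
    if major == 7:                     # simple values: false / true / null
        if info == 20:
            return False, pos
        if info == 21:
            return True, pos
        if info == 22:
            return None, pos
    raise ValueError(f"unsupported major type {major}")

# {"ok": true, "n": 42} encoded as CBOR: a2 62 6f 6b f5 61 6e 18 2a
value, _ = decode(bytes.fromhex("a2626f6bf5616e182a"))
print(value)  # -> {'ok': True, 'n': 42}
```

In cbor-java terms, decoding a MAP means iterating its key/value pairs and dispatching on each entry's major type again, exactly as the `major == 5` branch does recursively here.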
Hi @tonyreina, thank you for your support! I was able to generate the optimized_graph.pb by following your instructions:
```
bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=/workspace/quantization/frozen_inference_graph.pb --out_graph=/workspace/quantization/optimized_graph.pb --inputs="input_1" --outputs="bboxes,scores,classes" --transforms="fold_batch_norms merge_duplicate_nodes"
2019-04-23 11:49:48.351521: I tensorflow/tools/graph_transforms/transform_graph.cc:317]...
```
Hi @nammbash, thank you for your support! When I use: `--transforms="fold_batch_norms merge_duplicate_nodes"` in the `transform_graph` script, the generated FP32 graph throws this error when loading: `ValueError: Node 'bn3a_branch1/beta/read' expects to...
Hi @nammbash, I was able to generate the optimized_graph.pb using your instructions:
```
root@569f1de3e047:/workspace/tensorflow# bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=/workspace/quantization/frozen_inference_graph.pb --out_graph=/workspace/quantization/optimized_graph.pb --inputs="input_1" --outputs="bboxes,scores,classes" --transforms='remove_nodes(op=Identity, op=CheckNumerics, op=StopGradient) fold_old_batch_norms strip_unused_nodes merge_duplicate_nodes'
2019-04-24 15:58:49.959464: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying...
```
Just as an update. I was able to generate the quantized_dynamic_range_graph.pb using an updated version of quantize_graph.py provided by @mdfaijul. The command I used was: `python /tmp/amin/intel-tools/tensorflow_quantization/quantization/quantize_graph.py --input fp32_retinanet_frozen_inference_graph.pb --output...
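For intuition, the dynamic-range scheme behind a graph like quantized_dynamic_range_graph.pb maps each float weight tensor to int8 with a per-tensor scale computed from the data's range at runtime. A toy stdlib-only sketch of the symmetric variant (function names and rounding policy are mine, for illustration; quantize_graph.py's actual implementation differs in detail):

```python
# Toy illustration of symmetric per-tensor int8 quantization, the idea
# underlying dynamic-range quantized graphs. Not the real tool's code.

def quantize_int8(values):
    """Map floats to int8 codes using a single symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0   # largest magnitude -> 127
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
print(q)  # -> [50, -127, 2, 100]
approx = dequantize(q, scale)
```

The round trip through `dequantize` shows the quantization error this trades for the int8/VNNI speedup: small values like 0.02 survive only to the nearest multiple of the scale.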
Hi @karthikvadla, I am generating the calibration data using `logged_quantized_graph.pb` with a subset of 875 images. I've shared more information about this case in a separate email. Best Regards, Feliphe...
Hi all, First of all I would like to thank you all for the help you have been giving me in this journey of enabling INT8 and VNNI for inference...
Hi @nammbash, I am excited about the release of the new features you and @mdfaijul are working on! Thank you for the explanation, now the possible reason is clear to...