Unsupported Ops in the model before optimization TensorScatterUpdate
System information
- TensorFlow.js version (you are using): 2.7.0
Describe the feature and the current behavior/state.
TensorScatterUpdate is not supported.
2020-11-11 18:40:06.492416: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-11 18:40:06.505295: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f9a9af053e0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-11 18:40:06.505339: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-11-11 18:40:09.047157: I tensorflow/core/grappler/devices.cc:78] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2020-11-11 18:40:09.047227: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-11-11 18:40:10.009017: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] Optimization results for grappler item: graph_to_optimize
2020-11-11 18:40:10.009041: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: Graph size after: 11186 nodes (10781), 18356 edges (17949), time = 611.069ms.
2020-11-11 18:40:10.009047: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: function_optimizer did nothing. time = 27.527ms.
Traceback (most recent call last):
File "/Users/waittim/anaconda3/envs/tfjs_convert/bin/tensorflowjs_converter", line 8, in <module>
sys.exit(pip_main())
File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 757, in pip_main
main([' '.join(sys.argv[1:])])
File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 761, in main
convert(argv[0].split(' '))
File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 699, in convert
experiments=args.experiments)
File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 629, in convert_tf_saved_model
initializer_graph=frozen_initializer_graph)
File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 146, in optimize_graph
', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
TensorScatterUpdate
Any other info: The SavedModel is converted from an ONNX model (opset_version=11). You can find the model I used here. The output node names are 'StatefulPartitionedCall,StatefulPartitionedCall_1,StatefulPartitionedCall_2'. It's a YOLO model. Is there any other possible solution besides waiting for official support? Thank you!
cc @annxingyuan
Hope this is implemented soon as well!
@PeterL1n Since this Op is very similar to ScatterNd op, we should be able to add the support fairly soon.
Thank you so much! I'm waiting for it!
Any updates on this thread?
Is there a way to know which PyTorch operation gets converted into TensorScatterUpdate, so that I can avoid it?
Hope this is implemented!
Hope it gets implemented!
@PeterL1n Since this Op is very similar to ScatterNd op, we should be able to add the support fairly soon.
TensorScatterUpdate, from the TensorFlow docs:
"This operation is very similar to tf.scatter_nd, except that the updates are scattered onto an existing tensor (as opposed to a zero-tensor). If the memory for the existing tensor cannot be re-used, a copy is made and updated."
tf.raw_ops.TensorScatterUpdate(
tensor, indices, updates, name=None
)
tf.scatter_nd(
indices, updates, shape, name=None
)
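Concretely, a small worked example of the difference (values chosen arbitrarily, assuming no duplicate indices):

tensor  = [1, 2, 3, 4, 5]
indices = [[1], [3]]
updates = [9, 10]

TensorScatterUpdate(tensor, indices, updates)  ->  [1, 9, 3, 10, 5]   (existing values kept)
ScatterNd(indices, updates, [5])               ->  [0, 9, 0, 10, 0]   (scattered onto zeros)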
A way to re-write TensorScatterUpdate in terms of ScatterNd could be something like:
def TensorScatterUpdate(tensor, indices, updates):
    # build a mask with ones at the indices we want to update
    mask = ScatterNd(indices, OnesLike(updates), tensor.shape)
    # zero those indices in the original tensor, then add in the updates
    return tensor * (1 - mask) + ScatterNd(indices, updates, tensor.shape)
But before TFJS officially supports that, you could export your model with tensorflowjs_converter using the --skip_op_check flag, then implement and register TensorScatterUpdate as a custom op.
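A minimal sketch of the converter invocation (the input and output paths are placeholders, not from the original report):

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --skip_op_check \
    ./saved_model_dir ./web_model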
Doing this freehand, untested, but registering your custom op in JavaScript might look like:
const customTensorScatterUpdate = function(node){
  const tensor = node.inputs[0];
  const indices = node.inputs[1];
  const updates = node.inputs[2];
  // build a mask with ones at the positions we want to update
  const mask = tf.scatterND(indices, tf.onesLike(updates), tensor.shape);
  // zero those positions, then add in the updates
  return tensor * (1 - mask) + tf.scatterND(indices, updates, tensor.shape);
}
tf.registerOp('TensorScatterUpdate', customTensorScatterUpdate);
I suspect this implementation would be slower, as it calls scatterND twice. I imagine a faster implementation would just edit the tensor's memory in place without making copies.
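With the custom op registered, loading the --skip_op_check-converted model is the usual graph-model flow. A sketch, where the model path and input shape are placeholders (executeAsync is used because a YOLO-style graph typically contains control-flow ops):

// inside an async function:
const model = await tf.loadGraphModel('web_model/model.json');  // placeholder path
const input = tf.zeros([1, 416, 416, 3]);                       // placeholder input shape
const output = await model.executeAsync(input);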
For those having problems with this code, here's the fix! I actually tested it and it works.
const customTensorScatterUpdate = function(node){
  const tensor = node.inputs[0];
  const indices = node.inputs[1];
  const updates = node.inputs[2];
  // build a mask with ones at the positions being updated
  const mask = tf.scatterND(indices, tf.onesLike(updates), tensor.shape);
  // zero out the updated positions in the original tensor
  const a = tf.mul(tensor, tf.sub(1, mask));
  // scatter the updates onto an otherwise-zero tensor of the same shape
  const b = tf.scatterND(indices, updates, tensor.shape);
  return tf.add(a, b);
}
tf.registerOp('TensorScatterUpdate', customTensorScatterUpdate);
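A quick sanity check (a hypothetical test, not from the original comment; the node object here just mimics what the graph executor passes to a custom op):

const node = {
  inputs: [
    tf.tensor1d([1, 2, 3, 4, 5]),              // tensor
    tf.tensor2d([[1], [3]], [2, 1], 'int32'),  // indices
    tf.tensor1d([9, 10]),                      // updates
  ],
};
customTensorScatterUpdate(node).print();       // expected: [1, 9, 3, 10, 5]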
Hi @waittim,
Apologies for the delayed response. I see that PR https://github.com/tensorflow/tfjs/pull/7189 has been merged, so it seems this issue has been taken care of by that PR, and we have also updated the official documentation for tf.tensorScatterUpdate. Could you please confirm whether this issue is resolved for you? Please feel free to close the issue if it is. Thank you!
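For anyone reading this later, a minimal sketch of the native op in a TFJS version that includes the PR above (argument order follows the tf.tensorScatterUpdate documentation):

const tensor = tf.tensor1d([1, 2, 3, 4, 5]);
const indices = tf.tensor2d([[1], [3]], [2, 1], 'int32');
const updates = tf.tensor1d([9, 10]);
tf.tensorScatterUpdate(tensor, indices, updates).print();  // [1, 9, 3, 10, 5]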
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.
Closing as stale. Please @mention us if this needs more attention.