Justin Chu
Possibly an overflow issue? The method is a protobuf method: https://googleapis.dev/python/protobuf/latest/google/protobuf/message.html#google.protobuf.message.Message.ByteSize It is strange that `onnx.load` did not complain.
Potentially, as far as I understand, but I am not sure.
Swish can be expressed as a combination of ONNX operators and can be easily fused by the backend. Was there a motivation for including it in the spec?
@gramalingam @xadupre for more thoughts. I suggest creating a pull request if the op is desired. Additionally, it is possible to implement it as a model local function. You may...
Additionally, if you would like to see it as a single unit in the exported PyTorch models by `torch.onnx.dynamo_export`, you are welcome to contribute to https://github.com/microsoft/onnxscript/blob/bec23adc815406e6103dff8463e3386a1be155e7/onnxscript/function_libs/torch_lib/ops/nn.py#L2014
I understand ONNX is trying to keep the set of operators tight, although that should be less of a concern when an op can be expressed as a function like...
I don't expect casting to happen between floating-point values and integers, but this seems to be unclear from the spec. @xadupre do you know?
Downgrading is less well supported at this point. You may add a line in https://github.com/onnx/onnx/blob/67c456ba4747412afb44158a1a889c0fc3349641/onnx/version_converter/convert.h#L566 that does the reverse:
```cpp
registerAdapter(std::make_unique<CompatibleAdapter>("Constant", OpSetID(19), OpSetID(18)));
```
Did the error message that asked you to downgrade come from tf2onnx?
@fatcat-z could you help? Thanks!