`unsqueeze` / `expand_dims` is not supported for `complex64`. The converter currently lowers `unsqueeze` straight to `mb.expand_dims`:

```python
@register_torch_op(torch_alias=["unsqueeze_copy"])
def unsqueeze(context, node):
    inputs = _get_inputs(context, node, expected=2)
    unsqueeze = mb.expand_dims(x=inputs[0], axes=[inputs[1].val], name=node.name)
    context.add(unsqueeze)
```
which fails with:

```
ValueError: Op "1298" (op_type: expand_dims) Input x="freqs_cis.1" expects tensor or scalar of dtype from type domain ['fp16', 'fp32', 'int8', 'int16', 'int32', 'uint8', 'uint16', 'bool'] but got tensor[1024,32,complex64]
```
Since `unsqueeze` / `expand_dims` is not supported for `complex64`, is there a workaround, or can support be added? @junpeiz @srjoglekar246
Hey @nighting0le01, you can follow a similar approach to how the add op supports complex: https://github.com/apple/coremltools/blob/main/coremltools/converters/mil/frontend/torch/ops.py#L917
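In the meantime, a model-side workaround may be possible (a minimal sketch, not the converter fix): unsqueeze the real and imaginary parts separately, since each is a plain `float32` tensor, and recombine with `torch.complex`. The helper name `unsqueeze_complex` is hypothetical.

```python
import torch

def unsqueeze_complex(x: torch.Tensor, dim: int) -> torch.Tensor:
    # expand_dims is unsupported for complex64 in the converter, so
    # unsqueeze the real and imaginary parts (both float32) independently
    # and rebuild the complex tensor afterwards.
    return torch.complex(x.real.unsqueeze(dim), x.imag.unsqueeze(dim))

x = torch.randn(1024, 32, dtype=torch.complex64)
y = unsqueeze_complex(x, 1)
print(y.shape)  # torch.Size([1024, 1, 32])
```

In eager PyTorch this matches `x.unsqueeze(1)` exactly; the point is only that the traced graph sees float ops instead of a complex `expand_dims`.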
```python
@LowerComplex.register_lower_func(op_type="complex_real")
def _lower_complex_real(op: Operation):
    complex_input: ComplexVar = op.data
    # Use an identity op to avoid the block's input name inconsistency issue. If we directly use
    # complex_input.real, the var's name could be inconsistent with the block's input name.
    breakpoint()  # debugging
    if complex_input.real is None:
        breakpoint()  # debugging: real part is unexpectedly None here
        return mb.identity(x=None, before_op=op)
    result = mb.identity(x=complex_input.real, before_op=op)
    return result
```
Also, `mb.identity` breaks when `complex_input.real` is `None`.