Discussion: Clearer separation between dialects
When trying to introduce a `Mirrored3Shape` variable instead of `ReplicatedShape`, @mortendahl and I found that this can lead to some logical discrepancies. For example, right now we make lots of `rep.shape` calls. These would turn into `mir.shape` calls, but then it becomes cumbersome to always retrieve the mirrored placement corresponding to the replicated placement.
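To make that concrete, here is a minimal sketch of the extra hop every former `rep.shape` call site would need. The names (`ReplicatedPlacement`, `Mirrored3Placement`, `to_mirrored`) are illustrative stand-ins, not Moose's actual API:

```rust
/// Illustrative stand-in; the real placement carries more than owner ids.
#[derive(Clone, Debug)]
struct ReplicatedPlacement {
    owners: [String; 3],
}

#[derive(Clone, Debug)]
struct Mirrored3Placement {
    owners: [String; 3],
}

impl ReplicatedPlacement {
    /// The extra hop: derive the mirrored placement sitting on the
    /// same three hosts as the replicated placement.
    fn to_mirrored(&self) -> Mirrored3Placement {
        Mirrored3Placement {
            owners: self.owners.clone(),
        }
    }
}

fn main() {
    let rep = ReplicatedPlacement {
        owners: ["alice".into(), "bob".into(), "carole".into()],
    };
    // Every former `rep.shape(...)` call site would first need:
    let mir = rep.to_mirrored();
    // ... and only then could it call `mir.shape(...)`.
    println!("{:?}", mir);
}
```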
One solution would be to introduce slightly more types to make the separation between the mirrored and replicated dialects clearer: for example, having both `ReplicatedPublicTensor` and `Mirrored3Tensor`, and both `ReplicatedShape` and `Mirrored3Shape`, depending on whether we are operating at the replicated or the mirrored level.
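As a rough picture of what this separation could look like (purely hypothetical field layouts, not a proposal for the actual representations), the point is that crossing dialects becomes an explicit conversion instead of something that happens silently:

```rust
/// A single host's view of a shape (illustrative stand-in).
#[derive(Clone, Debug, PartialEq)]
struct HostShape(Vec<usize>);

/// Replicated dialect: shapes as held by the three replicated parties.
#[derive(Clone, Debug)]
struct ReplicatedShape {
    shapes: [HostShape; 3],
}

/// Mirrored dialect: one public shape, mirrored across the three hosts.
#[derive(Clone, Debug)]
struct Mirrored3Shape {
    shapes: [HostShape; 3],
}

/// With distinct types, moving between dialects is an explicit,
/// type-checked conversion.
impl From<ReplicatedShape> for Mirrored3Shape {
    fn from(s: ReplicatedShape) -> Self {
        Mirrored3Shape { shapes: s.shapes }
    }
}
```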
If we adopt the solution that inserts more types, some code duplication problems might arise when implementing a kernel for, e.g., matrix inversion on cleartext. This can be (partially) avoided by implementing the kernels only for mirrored tensors and, in the `ReplicatedPublicTensor` case, just calling the kernels already created for the mirrored type.
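A minimal sketch of that delegation idea, with placeholder types and an elementwise reciprocal standing in for real matrix inversion (none of these names are Moose's actual API):

```rust
/// Same cleartext value held on all three hosts (illustrative stand-in).
#[derive(Clone, Debug)]
struct Mirrored3Tensor(Vec<f64>);

/// A public value at the replicated level, wrapping the mirrored one.
#[derive(Clone, Debug)]
struct ReplicatedPublicTensor(Mirrored3Tensor);

/// The kernel is written once, against the mirrored type.
fn mirrored_inverse(x: &Mirrored3Tensor) -> Mirrored3Tensor {
    // Elementwise reciprocal as a stand-in for matrix inversion.
    Mirrored3Tensor(x.0.iter().map(|v| 1.0 / v).collect())
}

/// The replicated-public kernel just unwraps, delegates, and rewraps,
/// so no kernel logic is duplicated.
fn replicated_public_inverse(x: &ReplicatedPublicTensor) -> ReplicatedPublicTensor {
    ReplicatedPublicTensor(mirrored_inverse(&x.0))
}

fn main() {
    let pub_x = ReplicatedPublicTensor(Mirrored3Tensor(vec![2.0, 4.0]));
    println!("{:?}", replicated_public_inverse(&pub_x));
}
```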
Any thoughts on whether this idea would take the codebase in a good direction? Perhaps we could even get rid of the hybrid kernels by placing another level of abstraction on the Host dialect this way (i.e., have `HostTensor`s wrap physical tensors).
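For that last point, a rough picture of what the extra Host-dialect level might mean; `PhysicalTensor` here is just a stand-in for whatever concrete array type sits underneath:

```rust
/// Stand-in for the concrete array representation (e.g. an ndarray).
#[derive(Clone, Debug)]
struct PhysicalTensor(Vec<f64>);

/// Host-dialect wrapper: a logical tensor = placement + physical payload.
/// Host kernels would be written against `HostTensor`, and lowering would
/// unwrap to the physical level, which might make hybrid kernels unnecessary.
#[derive(Clone, Debug)]
struct HostTensor {
    owner: String,
    physical: PhysicalTensor,
}
```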