Make Mirrored a proper dialect
- [ ] move logic from `moose/src/replicated/**` into a `newmoose/src/mirrored.rs` file
- [x] #732
- [ ] introduce `Mirrored3Shape` instead of `ReplicatedShape`
- [x] introduce proper `Mirrored3FixedTensor` instead of using `AbstractReplicatedFixedTensor`
Related: https://github.com/tf-encrypted/runtime/pull/728
Related discussion: https://github.com/tf-encrypted/runtime/issues/755
When trying to introduce a `Mirrored3Shape` type instead of `ReplicatedShape`, @mortendahl and I found that this can lead to some logic discrepancies.
For example, right now we make lots of `rep.shape` calls. These would turn into `mir.shape` calls, but then it becomes cumbersome to always retrieve the mirrored placement corresponding to the replicated placement.
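To illustrate the overhead, here is a rough sketch of the conversion that every such call site would need. The placement types and the helper below are hypothetical, purely for illustration; moose's real placement types are richer than this:

```rust
// Hypothetical placement types, for illustration only.
#[derive(Debug, Clone, PartialEq)]
pub struct ReplicatedPlacement {
    pub owners: [String; 3],
}

#[derive(Debug, Clone, PartialEq)]
pub struct Mirrored3Placement {
    pub owners: [String; 3],
}

// Every `mir.shape` call site would need a helper like this to recover
// the mirrored placement that corresponds to the replicated one.
pub fn mirrored_placement_of(rep: &ReplicatedPlacement) -> Mirrored3Placement {
    Mirrored3Placement {
        owners: rep.owners.clone(),
    }
}
```

The conversion itself is trivial; the cost is having to thread it through every kernel that used to call `rep.shape` directly.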
One solution would be to introduce a few more types to make the separation between the mirrored and replicated dialects clearer: for example, having both `ReplicatedPublicTensor` and `Mirrored3Tensor`, and both `ReplicatedShape` and `Mirrored3Shape`, depending on whether we are operating at the replicated or the mirrored level.
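To make the separation concrete, a minimal sketch of what the split might look like for shapes. All struct layouts here are assumptions for illustration and do not match moose's actual definitions:

```rust
// Host-level shape shared by both dialects; layout is a guess.
#[derive(Debug, Clone, PartialEq)]
pub struct HostShape(pub Vec<usize>);

// Replicated dialect: one shape per replicated host.
#[derive(Debug, Clone, PartialEq)]
pub struct ReplicatedShape {
    pub shapes: [HostShape; 3],
}

// Mirrored dialect: the same public shape mirrored on three hosts.
#[derive(Debug, Clone, PartialEq)]
pub struct Mirrored3Shape {
    pub shapes: [HostShape; 3],
}

// With distinct types, crossing dialects becomes an explicit (and cheap)
// conversion instead of an implicit reuse of replicated types.
pub fn mirrored_shape_from_replicated(s: &ReplicatedShape) -> Mirrored3Shape {
    Mirrored3Shape {
        shapes: s.shapes.clone(),
    }
}
```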
If we adopt the solution that adds more types, some code duplication may arise when implementing a kernel for (e.g.) matrix inversion on cleartext. This can be (partially) avoided by implementing the kernels only for mirrored tensors and, in the `ReplicatedPublicTensor` case, simply calling the kernels already written for the mirrored type.
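The delegation idea can be sketched roughly as follows. The type layouts and function names are hypothetical, and elementwise reciprocal stands in for the real matrix-inversion kernel:

```rust
// Hypothetical cleartext tensor payload.
#[derive(Debug, Clone, PartialEq)]
pub struct HostTensor(pub Vec<f64>);

// Mirrored public tensor: the same cleartext value on three hosts.
#[derive(Debug, Clone, PartialEq)]
pub struct Mirrored3Tensor(pub [HostTensor; 3]);

// Replicated public tensor, as a distinct type in the replicated dialect.
#[derive(Debug, Clone, PartialEq)]
pub struct ReplicatedPublicTensor(pub [HostTensor; 3]);

// The kernel is implemented once, on the mirrored type only.
pub fn mir_inverse(x: &Mirrored3Tensor) -> Mirrored3Tensor {
    Mirrored3Tensor(
        x.0.clone()
            .map(|t| HostTensor(t.0.iter().map(|v| 1.0 / v).collect())),
    )
}

// The replicated-public kernel converts and delegates, so the
// inversion logic is never duplicated.
pub fn rep_public_inverse(x: &ReplicatedPublicTensor) -> ReplicatedPublicTensor {
    let mir = Mirrored3Tensor(x.0.clone());
    ReplicatedPublicTensor(mir_inverse(&mir).0)
}
```

Only the thin conversion wrapper is duplicated per public type; the actual kernel body lives in one place.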
Any thoughts on whether this idea takes the codebase down a good path? Perhaps we could even get rid of the hybrid kernels by adding another level of abstraction on the Host dialect this way (i.e. having `HostTensor`s wrap physical tensors).