[W.I.P.] lazyrepeatarray refactor
Description
Currently we lose most of the benefits of the lazyrepeatarray as soon as a single matmul or dot operation occurs, because we're forced to expand it into its full-sized NumPy array to perform those operations.
This PR refactors the LazyRepeatArray class to give it true lazy evaluation. The lazyrepeatarray's shape is updated after every new operation, and errors from improper inputs/shapes are still caught immediately, but the actual computation is deferred until the user calls .evaluate(), which will happen during/just before the publish method.
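A minimal sketch of the idea described above (the class name, attributes, and op-recording scheme here are hypothetical illustrations, not the actual PySyft implementation): shapes are validated and propagated eagerly, while the expensive broadcast-and-compute step is deferred until `.evaluate()`.

```python
import numpy as np


class LazyRepeatArraySketch:
    """Hypothetical sketch of lazy evaluation; not the real LazyRepeatArray."""

    def __init__(self, data, shape, ops=None, base_shape=None):
        self.data = np.asarray(data)            # small repeated payload
        self.shape = shape                      # logical shape after pending ops
        self.base_shape = base_shape or shape   # shape before any pending ops
        self._ops = ops or []                   # deferred (op_name, operand) pairs

    def __matmul__(self, other):
        # Shape errors surface immediately, just like the PR describes...
        if self.shape[-1] != other.shape[0]:
            raise ValueError(
                f"shapes {self.shape} and {other.shape} not aligned for matmul"
            )
        # ...but the computation itself is only recorded, not performed.
        new_shape = (*self.shape[:-1], *other.shape[1:])
        return LazyRepeatArraySketch(
            self.data,
            new_shape,
            ops=self._ops + [("matmul", other)],
            base_shape=self.base_shape,
        )

    def evaluate(self):
        # Materialize the full-sized array only now, then replay recorded ops.
        out = np.broadcast_to(self.data, self.base_shape)
        for name, other in self._ops:
            if name == "matmul":
                out = out @ np.broadcast_to(other.data, other.shape)
        return out
```

Under this scheme, chaining several matmuls never allocates an intermediate full-sized array; only `.evaluate()` (called around publish time) pays the materialization cost.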
Affected Dependencies
This will primarily affect PhiTensor and GammaTensor.
How has this been tested?
A working prototype is in a Jupyter Notebook. @fiza11 is working on benchmarking; unit tests will be added before merging.
Checklist
- [ ] I have followed the Contribution Guidelines and Code of Conduct
- [ ] I have commented my code following the OpenMined Styleguide
- [ ] I have labeled this PR with the relevant Type labels
- [ ] My changes are covered by tests
This likely won't make it into 0.7, so postponing this PR for now; it will require changes to almost all PhiTensor and GammaTensor operations.