Jane (Yuan) Xu
## This PR seeks to:
- [x] add C++ support for an optimize path
- [x] add a Python opt_einsum path passthrough
- [x] add opt_einsum to OSS requirements, but a...
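As a rough sketch of how the Python-side passthrough could be exercised, assuming the `torch.backends.opt_einsum` controls that accompany this feature in recent PyTorch releases (the strategy names and availability check below are not taken from this PR's diff):

```python
import torch

# Sketch only: assumes the torch.backends.opt_einsum controls are present,
# which they are in recent PyTorch releases when opt_einsum is installed.
if torch.backends.opt_einsum.is_available():
    # "auto" defers to the default; "greedy"/"optimal" pick a specific
    # contraction-path search strategy.
    torch.backends.opt_einsum.strategy = "greedy"

a = torch.randn(8, 16)
b = torch.randn(16, 32)
c = torch.randn(32, 8)

# A multi-operand contraction where path optimization can matter.
out = torch.einsum("ij,jk,kl->il", a, b, c)
print(out.shape)  # torch.Size([8, 8])
```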
A "fix" following https://github.com/pytorch/pytorch/pull/90865. Realized that fused is not compatible with torch.jit.is_scripting() when looking at a later line. Took the opportunity to make the code cleaner/slightly more performant (with the...
## Concern
The general concern is that people think they're adding tests to CI, but these tests are getting skipped, which is bamboozling. Specifically, I wonder how many tests there...
Attempts to fix #92656

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #92731
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #92730
* __->__ #92923
cc @zou3519, who asked for this.

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #96974
## Proposal
Make the numpy dependency optional, if possible.

## Why?
Minimizing dependencies is a general goal, as it allows a bigger audience to reap the benefits of this library....
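One common pattern for making a dependency optional is to guard the import and feature-gate the numpy-specific code paths. The sketch below is only illustrative (the `from_numpy_or_list` helper is hypothetical), not the mechanism the proposal commits to:

```python
# Illustrative optional-dependency guard; the proposal does not specify the
# actual mechanism, so this is just one common way to do it.
try:
    import numpy as np
    HAS_NUMPY = True
except ImportError:
    np = None
    HAS_NUMPY = False

def from_numpy_or_list(data):
    """Accept a numpy array when numpy is present, else fall back to lists."""
    if HAS_NUMPY and isinstance(data, np.ndarray):
        return data.tolist()
    return list(data)
```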
## Description
Recently, `torch.einsum` has been improved to automatically optimize multi-operand contractions when the opt_einsum library is installed. This way, torch users can reap the benefits easily. However, this change may inadvertently...
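For users who hit the kind of regression this issue is about, the automatic path optimization can be toggled off. The snippet below assumes the `torch.backends.opt_einsum.enabled` flag that ships alongside this feature in recent PyTorch releases:

```python
import torch

# If the automatic opt_einsum path search slows a particular workload down,
# it can be disabled; torch.einsum then falls back to the default
# left-to-right contraction order. (Assumes the torch.backends.opt_einsum
# flag shipped with this feature.)
torch.backends.opt_einsum.enabled = False

x = torch.randn(4, 5)
y = torch.randn(5, 6)
z = torch.randn(6, 4)
out = torch.einsum("ab,bc,cd->ad", x, y, z)  # computed without path optimization
```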
In the past 2-3 weeks, these configs have been bouncing up and down:
* DALLE2_pytorch, Adam, cuda, amsgrad, maximize
* DALLE2_pytorch, Adam, cuda, default
* DALLE2_pytorch, Adam, cuda, foreach
* DALLE2_pytorch, Adam, cuda, fused,...
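For reference, the listed configs correspond to Adam flag combinations roughly along these lines; the benchmark harness itself (DALLE2_pytorch on cuda) is not reproduced here, so this is only a sketch:

```python
import torch

model = torch.nn.Linear(10, 10)
params = list(model.parameters())  # materialize so the list can be reused below

# Sketch of the Adam variants the flaky configs refer to.
adam_variants = {
    "default": torch.optim.Adam(params, lr=1e-3),
    "amsgrad, maximize": torch.optim.Adam(params, lr=1e-3, amsgrad=True, maximize=True),
    "foreach": torch.optim.Adam(params, lr=1e-3, foreach=True),
    # fused=True requires the parameters to live on a CUDA device:
    # "fused": torch.optim.Adam(params, lr=1e-3, fused=True),
}
```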
This is just a tracking issue to make sure we don't forget. cc @msaroufim