Jane (Yuan) Xu

Results: 16 issues by Jane (Yuan) Xu

## This PR seeks to:

- [x] add C++ support for an optimize path
- [x] add Python opt_einsum path passthrough
- [x] add opt_einsum to OSS requirements, but a...

cla signed
release notes: linalg_frontend
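
A hedged sketch of how the optimize path above is exercised from Python; it assumes the `torch.backends.opt_einsum` knobs available in recent PyTorch releases, which is where this passthrough surfaces:

```python
import torch

a = torch.randn(8, 16)
b = torch.randn(16, 32)
c = torch.randn(32, 8)

# When the opt_einsum package is installed, torch.einsum can reorder
# multi-operand contractions to reduce FLOPs.
if torch.backends.opt_einsum.is_available():
    # Path-search strategy: "auto" (default), "greedy", or "optimal".
    torch.backends.opt_einsum.strategy = "auto"

# Three operands: the contraction order matters for performance here.
out = torch.einsum("ij,jk,ki->", a, b, c)
```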

A "fix" following https://github.com/pytorch/pytorch/pull/90865. Realized that fused is not compatible with torch.jit.is_scripting() when looking at a later line. Took the opportunity to make the code cleaner/slightly more performant (with the...

ciflow/trunk
release notes: nn
topic: performance
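
For context, a minimal sketch of the `fused` flag this fix concerns (assumes a CUDA build; per the PR above, the fused path cannot be taken under `torch.jit.is_scripting()`, so eager-mode use is shown):

```python
import torch

# fused=True selects the single fused CUDA kernel for Adam's step.
model = torch.nn.Linear(10, 10).cuda()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, fused=True)

loss = model(torch.randn(4, 10, device="cuda")).sum()
loss.backward()
opt.step()
opt.zero_grad()
```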

## Concern

The general concern is that people think they're adding tests to CI but these tests are getting skipped, which is bamboozling. Specifically, I wonder how many tests there...

module: ci
module: tests
triaged
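
A hypothetical illustration of how this happens (not a real PyTorch test): if the guard condition is unexpectedly false on CI machines, the test reports as skipped rather than passing or failing, which is easy to miss in green CI.

```python
import unittest
import torch

class TestFusedOptim(unittest.TestCase):
    # If CI workers lack CUDA (or the check is wrong), this test is
    # silently skipped: it shows up as "s", never as a pass or failure.
    @unittest.skipIf(not torch.cuda.is_available(), "requires CUDA")
    def test_fused_adam_step(self):
        ...

if __name__ == "__main__":
    unittest.main()
```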

Attempts to fix #92656

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #92731

ciflow/trunk
release notes: nn
topic: bc_breaking

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #92730
* __->__ #92923

release notes: nn

cc: asked by @zou3519

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #96974

ciflow/trunk
topic: not user facing

## Proposal

Make the numpy dependency optional, if possible.

## Why?

Minimizing dependencies is a general goal, as it allows a bigger audience to reap the benefits of this library....
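
One common way to do this (a sketch of the general pattern, not this library's actual code): import numpy lazily and fail only when a numpy-dependent code path is actually hit. `to_array` below is a hypothetical helper.

```python
# Optional-dependency pattern: the package still imports without numpy.
try:
    import numpy as np
except ImportError:
    np = None

def to_array(x):
    # Hypothetical helper; raises only on the numpy-dependent path.
    if np is None:
        raise RuntimeError(
            "this code path requires numpy; install it via `pip install numpy`"
        )
    return np.asarray(x)
```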

## Description

Recently, `torch.einsum` has been improved to automatically optimize multi-contractions when the opt_einsum library is installed, so torch users can reap the benefits easily. However, this change may inadvertently...
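
For users affected by the new behavior, a hedged sketch of opting out (assumes the `torch.backends.opt_einsum.enabled` knob in recent PyTorch):

```python
import torch

x = torch.randn(16, 32)
y = torch.randn(32, 64)
z = torch.randn(64, 16)

# Disable automatic path optimization to restore the plain
# left-to-right contraction order.
torch.backends.opt_einsum.enabled = False
out_plain = torch.einsum("ij,jk,kl->il", x, y, z)

# Re-enable to let opt_einsum (if installed) pick the contraction order.
torch.backends.opt_einsum.enabled = True
out_opt = torch.einsum("ij,jk,kl->il", x, y, z)
```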

In the past 2-3 weeks, these configs have been bouncing up and down:

* DALLE2_pytorch, Adam, cuda, amsgrad, maximize
* DALLE2_pytorch, Adam, cuda, default
* DALLE2_pytorch, Adam, cuda, foreach
* DALLE2_pytorch, Adam, cuda, fused,...

optim
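
The config names map onto `torch.optim.Adam` constructor flags; a sketch of the four variants being benchmarked (the DALLE2_pytorch harness itself is not shown, and the parameter tensor is a stand-in):

```python
import torch

params = [torch.randn(64, 64, device="cuda", requires_grad=True)]

# Flag combinations behind the config names above.
variants = {
    "default": dict(),
    "amsgrad, maximize": dict(amsgrad=True, maximize=True),
    "foreach": dict(foreach=True),  # multi-tensor (horizontally fused) kernels
    "fused": dict(fused=True),      # single fused CUDA kernel
}
optimizers = {
    name: torch.optim.Adam(params, **kwargs)
    for name, kwargs in variants.items()
}
```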

This is just a tracking issue to make sure we don't forget. cc @msaroufim

optim