albanD
I do think adding new entries to CODEOWNERS for these files, so the right people get to see these PRs, is the way to go. I think...
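For illustration, a CODEOWNERS entry along these lines would route review requests for the affected files to the relevant maintainers (the path and handle below are hypothetical placeholders, not the actual files or owners under discussion):

```
# Hypothetical entry: owners listed here are auto-requested for review
# on any PR touching files under this path.
/torch/some_subsystem/ @some-maintainer
```

GitHub reads this file from the repo root (or `.github/`) and requests a review from the listed owners whenever a matching file changes.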
Yeah, as I feared. We do have some builds that have both CUDA and another device (PU1 or other) in some places, and this diff is actually breaking these...
So I am afraid we need to actually fix the current code to support this properly, and we can't make it a hard error at the moment.
I don't really think we will be able to get to Step 2. In particular, I would expect that PU1 backend users should run with any pytorch wheel. That being...
btw all CI is disabled on the repo as a security precaution due to the ongoing npm worm. See the slack announcement channel or the issue here for details.
@pytorchbot merge Ok it all looks good! Thanks for taking the time to update this!
@pytorchbot merge Dismissed the request for changes
> performance-wise the choices won't be too big of an issue, consumer can use a TLS Is that true? The method can be overridden per-instance, so we cannot actually cache...
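The caching concern can be shown with a minimal Python sketch (the `Device` class and `stream` method are hypothetical names for illustration, not PyTorch APIs): a lookup cached at the class level goes stale as soon as an instance shadows the method on itself.

```python
class Device:
    def stream(self):
        return "class default"

d1 = Device()
d2 = Device()

# Hypothetical cache keyed on the class: resolve the method once up front.
cached = type(d1).stream

# Any instance may later shadow the method in its own __dict__...
d2.stream = lambda: "instance override"

# ...so the cached class attribute and the live lookup now disagree.
print(cached(d2))   # class default
print(d2.stream())  # instance override
```

This is why a per-class cache is unsound here: correctness would require invalidating on every instance attribute assignment, which Python does not make observable cheaply.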
This is stable and can be closed?
Any details on this @ZainRizvi ?