albanD
During the torch installation, when the checks run at the beginning, you should be able to see whether it found OpenBLAS or not.
@pytorchbot merge -g
It removes a lot of important codeowners that we rely on to get tagged on PRs... Can we revert this?
@Gamrix sorry about that. The fix is here: https://github.com/pytorch/pytorch/pull/83344
cc @ngimel is this op stable enough that we should migrate both this and the nn.Module version to it?
I think that if this is ok to do, we should do the same fix in https://github.com/pytorch/pytorch/blob/f5701a1f9afc1775bbf259b5fc8b502b17a56288/torch/nn/modules/module.py#L1936-L1960
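For reference, a minimal sketch of what that kind of change could look like (the `zero_grad_foreach` helper name is made up for illustration; the actual code lives in `nn.Module.zero_grad` at the link above):

```python
import torch
import torch.nn as nn

def zero_grad_foreach(module: nn.Module):
    # Batch the per-parameter .zero_() calls into a single foreach call
    # instead of looping over parameters one at a time.
    grads = [p.grad for p in module.parameters() if p.grad is not None]
    if grads:
        torch._foreach_zero_(grads)

# Usage
m = nn.Linear(4, 4)
m(torch.randn(2, 4)).sum().backward()
zero_grad_foreach(m)
```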
```
NotImplementedError: Could not run 'aten::_foreach_zero_' with arguments from the 'SparseCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build...
```
Yes. I guess we could make that slow path a CompositeExplicitAutograd implementation so that all the backends get it directly.
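Roughly, a CompositeExplicitAutograd registration looks like the sketch below. This is not the actual PyTorch source: `my_ns::zero_all_` is a made-up op standing in for the foreach slow path, but it shows how a single implementation registered under that alias key covers every backend, including SparseCPU.

```python
import torch

# Define a custom op with the same schema shape as _foreach_zero_.
lib = torch.library.Library("my_ns", "DEF")
lib.define("zero_all_(Tensor(a!)[] self) -> ()")

def zero_all_(tensors):
    # Plain per-tensor slow path: works for any layout/backend because it
    # only relies on Tensor.zero_().
    for t in tensors:
        t.zero_()

# Registering under CompositeExplicitAutograd means all backends pick up
# this one implementation, rather than each backend needing its own kernel.
lib.impl("zero_all_", zero_all_, "CompositeExplicitAutograd")

# Example: one registration handles both dense and sparse tensors.
dense = torch.ones(3)
sparse = torch.ones(3).to_sparse()
torch.ops.my_ns.zero_all_([dense, sparse])
```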
Why is only the x86 macOS build failing, and not the ARM one or the Linux build? Don't we build Linux with WERROR as well?