Dmytro Dzhulgakov
> Another alternative is to leverage Apache Arrow’s Plasma Object Store. The advantage is that we won’t need to implement our own store and will simply wrap Plasma. However this...
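For reference, a minimal sketch of what "simply wrap Plasma" could look like from Python, assuming a pyarrow release that still ships `pyarrow.plasma` (it was removed in pyarrow 12) and a Plasma store server already running on the socket path used below; the socket path and the stored array are placeholders, not anything from the actual proposal.

```python
import numpy as np
import pyarrow.plasma as plasma  # available in pyarrow < 12

# Connect to an already-running Plasma store server on this socket
# (the socket path is a placeholder).
client = plasma.connect("/tmp/plasma")

# Put an object into the shared-memory store; Plasma returns an ObjectID
# that any other process connected to the same store can use to fetch it.
object_id = client.put(np.arange(10))

# Read the object back (another process would do the same with the ObjectID).
value = client.get(object_id)
print(value)
```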
> Do you have any examples of using an allocator extension point? For the CPU allocator there's some proper wiring in the core library that allows overriding the allocator used for...
Don't we call torch.ops.aten.add in the FX-based stack quite often? It feels like we should fix the resolution order to be identical. Fixing static initialization order is just a shaky workaround,...
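For context, a small sketch (using `make_fx` as the tracer, which may not be the exact stack in question) showing that graphs produced by the FX-based tracing path do record calls as `torch.ops.aten.*` overloads, so the resolution order matters there:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

a, b = torch.randn(3), torch.randn(3)

# Direct call through the dispatcher-backed ops namespace, with an
# explicit overload selected.
out = torch.ops.aten.add.Tensor(a, b)
assert torch.allclose(out, a + b)

# Tracing an ordinary "+" with make_fx records the same aten.add overload
# into the FX graph.
gm = make_fx(lambda x, y: x + y)(a, b)
print(gm.graph)  # contains a call_function node targeting aten.add.Tensor
```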
@pytorchbot merge
@pytorchbot merge
So it does indeed seem to be missing functionality. When channel_shuffle was added in #36815, the derivative was never added. Given it's a composite op by itself on CPU today,...
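To illustrate the point about it being composite (a sketch with a made-up helper name, not the actual ATen implementation): the reshape/transpose/reshape formulation below is differentiable through autograd as-is, which is the kind of derivative support the op is currently missing.

```python
import torch

def channel_shuffle_composite(x, groups):
    # Equivalent composite formulation: reshape -> transpose -> reshape.
    # Because these are all differentiable ops, autograd derives the
    # backward automatically; no hand-written derivative is needed.
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

x = torch.randn(2, 6, 4, 4, dtype=torch.double, requires_grad=True)

# Forward agrees with the built-in module from #36815...
ref = torch.nn.ChannelShuffle(3)(x.detach())
assert torch.equal(channel_shuffle_composite(x.detach(), 3), ref)

# ...and the composite version already has a working derivative.
torch.autograd.gradcheck(lambda t: channel_shuffle_composite(t, 3), (x,))
```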