Carlos Mocholí
The failing TPU job is for Fabric only, but Fabric isn't impacted by this PR, so the failure must come from somewhere else
@amorehead Have you tried reporting this on PyTorch? You would expect that `cpu_thing.to(cpu)` is always a no-op
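For reference, a minimal sketch of that expectation with a plain CPU tensor (the assertions mirror what the PyTorch docs promise for `Tensor.to` when dtype and device already match):

```python
import torch

t = torch.randn(3)   # already lives on the CPU
moved = t.to("cpu")  # should be a no-op: the docs say `self` is returned
                     # when the dtype and device already match

assert moved is t                        # same object back
assert moved.data_ptr() == t.data_ptr()  # no copy was made
```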
Yes, we can merge this, but I would like to hear from their team first before moving forward. Then we could have this:
```python
if not _TORCH_GREATER_EQUAL_2_2:
    # your patch
    ...
```
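A hedged sketch of how such a version guard is commonly defined; the real `_TORCH_GREATER_EQUAL_2_2` flag lives in Lightning's import utilities, and `packaging` is used here purely for illustration:

```python
import torch
from packaging.version import Version

# Illustrative only: Lightning defines this flag in its own utilities.
# Strip any local build suffix (e.g. "+cu118") before comparing.
_TORCH_GREATER_EQUAL_2_2 = Version(torch.__version__.split("+")[0]) >= Version("2.2.0")

if not _TORCH_GREATER_EQUAL_2_2:
    # apply the workaround only on torch versions older than 2.2
    ...
```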
@mees I added support for that in #17163, if you want to give it a try. The PR only implements it for validation and testing.
Unfortunately, I don't have the bandwidth to work on this right now. If somebody wants to try, I can help get the PR merged. You can follow the structure in the EvaluationLoop....
As far as I know, nobody is currently working on it, Lukas
I'll look at Issues 1 and 3. Thanks. Issue 2 is expected and is the point of this PR: testing uses its own index instead of the training global...
Pre-existing issue: https://github.com/Lightning-AI/pytorch-lightning/issues/13246
This is a fair ask that has come up a few times in the past. Users want to configure `strict=True` here: https://github.com/PyTorchLightning/pytorch-lightning/blob/83436ee3dfd0d4079e0f8e704ba76aca672af19d/pytorch_lightning/strategies/strategy.py#L317-L322

I can think of two solutions: (a) Route...
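For context, a minimal sketch in plain PyTorch (not the Strategy code itself) of what the `strict` flag controls in `torch.nn.Module.load_state_dict`, which the linked lines wrap:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)
ckpt = {"weight": torch.zeros(2, 4)}  # "bias" is deliberately missing

# strict=False tolerates the gap: missing keys are reported in the
# return value rather than raised.
model.load_state_dict(ckpt, strict=False)

try:
    model.load_state_dict(ckpt, strict=True)  # raises: "bias" is missing
except RuntimeError as e:
    print(e)
```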
Sounds good to me. @Ir1d Would you be interested in contributing this feature?