task eval broken for PyTorch engine
The eval task is currently broken for backend=torch.
The call at https://github.com/rwth-i6/returnn/blob/e1762d85a17ebf71a16e6ce333068e231f763abb/returnn/main.py#L563-L570
is incompatible with the signature of the torch engine's implementation at https://github.com/rwth-i6/returnn/blob/e1762d85a17ebf71a16e6ce333068e231f763abb/returnn/torch/engine.py#L642
Is the eval task intended to be used with PyTorch? I don't see a particular reason why it shouldn't be. I have a use case where I want to compare the output of a forward job with a callback against the output of an eval job, for which I know the loss is numerically correct.
> Is the eval task intended to be used with PyTorch?
There is no reason why it should not be.
But we have to see whether all those features are really easy to implement. Maybe we also don't really need all of them.
> I have a use case where I want to compare the output of a forward job with a callback against the output of an eval job, for which I know the loss is numerically correct.
I don't really understand. Why do you need the eval task for that?
Note, the forward task is strictly more generic, and you can easily implement everything that the eval task would do in there yourself. That's why no-one needed the eval task. You never really need it. It might just be somewhat more convenient.
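
For example, a minimal sketch (untested) of how that could look with the forward task and a callback, assuming the ForwardCallbackIface API and that your forward_step marks a scalar per-sequence loss as output. The output name "loss", the TensorDict access, and the averaging at the end are assumptions here, not necessarily what the eval task does exactly:

```python
from returnn.forward_iface import ForwardCallbackIface
from returnn.tensor import TensorDict


class AccumLossCallback(ForwardCallbackIface):
    """Accumulates a per-sequence loss over the whole dataset."""

    def init(self, *, model):
        self.total_loss = 0.0
        self.num_seqs = 0

    def process_seq(self, *, seq_tag: str, outputs: TensorDict):
        # Assumed: forward_step marked a scalar loss per sequence under the name "loss",
        # e.g. via rf.get_run_ctx().mark_as_output(loss, "loss", ...).
        self.total_loss += float(outputs["loss"].raw_tensor)
        self.num_seqs += 1

    def finish(self):
        print(f"Average loss over {self.num_seqs} seqs: {self.total_loss / self.num_seqs:.6f}")
```

In the config you would then set task = "forward" and forward_callback = AccumLossCallback(), and the accumulated numbers should be comparable to what you would expect from an eval job, depending on how you normalize.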