[CorrDiff]: move patching logic to data loaders
**not for this PR since this code already exists**
Eventually, the loss should not know anything about patches and should just treat them like batches (hey, this rhymes). This can be achieved by moving the patching logic either to the dataloader or to training_loop.py. global_index can then be passed to the loss object. Let's open an issue for this refactor.
Originally posted by @nbren12 in https://github.com/NVIDIA/modulus/pull/401#discussion_r1566566785
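A rough sketch of how the dataloader could own the patching and hand a global_index to the loss. Everything here (`PatchedDataset`, `patch_size`, the `(input, target, global_index)` tuple layout) is illustrative, not the existing PhysicsNeMo API:

```python
# Hypothetical sketch: a dataset wrapper that performs the patching and
# returns the global pixel coordinates of each patch, so downstream code
# (including the loss) can treat patches as ordinary batch elements.
import torch
from torch.utils.data import Dataset


class PatchedDataset(Dataset):
    """Wraps a full-field dataset and yields fixed-size patches.

    Each item is (input_patch, target_patch, global_index), where
    global_index holds the (y, x) coordinates of every patch pixel
    within the full image grid.
    """

    def __init__(self, base: Dataset, patch_size: int):
        self.base = base
        self.patch_size = patch_size

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        inp, tgt = self.base[idx]  # assumed (C, H, W) tensors
        _, h, w = inp.shape
        p = self.patch_size
        # Random patch origin; a real implementation might tile instead.
        y0 = torch.randint(0, h - p + 1, (1,)).item()
        x0 = torch.randint(0, w - p + 1, (1,)).item()
        ys = torch.arange(y0, y0 + p)
        xs = torch.arange(x0, x0 + p)
        # global_index: (2, p, p) grid of (y, x) coordinates.
        global_index = torch.stack(torch.meshgrid(ys, xs, indexing="ij"))
        return (
            inp[:, y0:y0 + p, x0:x0 + p],
            tgt[:, y0:y0 + p, x0:x0 + p],
            global_index,
        )
```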
The patching operation for the regression model's output could also be performed this way, provided the dataloader returns the coordinate values along with the patched input and target.
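For the regression output, the coordinates returned by the dataloader could be used to slice matching patches from the full-field prediction. A minimal sketch, assuming the `(2, p, p)` global_index layout from the hypothetical `PatchedDataset` above:

```python
import torch


def patch_regression_output(regression_output: torch.Tensor,
                            global_index: torch.Tensor) -> torch.Tensor:
    """Extract a patch from a full-field regression output.

    regression_output: (C, H, W) full-field prediction.
    global_index: (2, p, p) per-pixel (y, x) coordinates of the patch,
        as produced by the hypothetical PatchedDataset sketch above.
    """
    ys, xs = global_index[0], global_index[1]  # each (p, p)
    # Advanced indexing gathers the patch pixels channel-wise -> (C, p, p).
    return regression_output[:, ys, xs]
```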