
[CorrDiff]: move patching logic to data loaders

Open nbren12 opened this issue 1 year ago • 1 comments

          **not for this PR since this code already exists**

Eventually, the loss should not know anything about patches and should just treat them like batches (hey, this rhymes). This can be achieved by moving the patching logic either to the dataloader or to training_loop.py. `global_index` can then be passed to the loss object. Let's open an issue for this refactor.
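A minimal sketch of the idea: patches are extracted upstream (in the dataloader or training loop) and flattened into the batch dimension, with per-patch global pixel coordinates carried along as `global_index` so the loss can remain patch-agnostic. The function name, tensor layouts, and non-overlapping patch grid here are illustrative assumptions, not the actual CorrDiff implementation.

```python
import torch


def extract_patches(x: torch.Tensor, patch_size: int):
    """Split a (B, C, H, W) field into non-overlapping patches.

    Returns:
        patches:      (B * n_patches, C, patch_size, patch_size)
        global_index: (B * n_patches, 2, patch_size, patch_size)
                      global (row, col) pixel coordinates per patch
    """
    B, C, H, W = x.shape
    p = patch_size

    # Unfold spatial dims into a patch grid, then fold patches into the batch dim.
    patches = x.unfold(2, p, p).unfold(3, p, p)          # (B, C, n_h, n_w, p, p)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, C, p, p)

    # Global pixel coordinates, patched identically to the data,
    # so the loss can look up where each "batch element" came from.
    ys = torch.arange(H).view(1, H, 1).expand(1, H, W)
    xs = torch.arange(W).view(1, 1, W).expand(1, H, W)
    coords = torch.stack([ys, xs], dim=1).float()        # (1, 2, H, W)
    gi = coords.unfold(2, p, p).unfold(3, p, p)
    gi = gi.permute(0, 2, 3, 1, 4, 5).reshape(-1, 2, p, p)
    global_index = gi.repeat(B, 1, 1, 1)
    return patches, global_index
```

With this shape convention, a downstream loss sees only an ordinary batch of size `B * n_patches` plus an aligned coordinate tensor; it needs no patching logic of its own.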

Originally posted by @nbren12 in https://github.com/NVIDIA/modulus/pull/401#discussion_r1566566785

nbren12 avatar Apr 16 '24 00:04 nbren12

The patching operation for the regression model's output could also be performed downstream if the dataloader returned coordinate values along with the patched input and target.

tge25 avatar Apr 16 '24 18:04 tge25