Implement batching
When we run `examples/1_SimpleNet/simplenet.py`, the final thing that's executed is effectively:

```python
a = [0.0, 1.0, 2.0, 3.0, 4.0]
model(torch.Tensor(a))
```
This would also work with batching, e.g.:

```python
a = [0.0, 1.0, 2.0, 3.0, 4.0]
model(torch.Tensor([a, a]))
```
We should enable batched calls like this on the FTorch side, too.

To investigate: is the underlying Torch smart enough to handle a batched call directly, or will we have to loop over the batch ourselves?
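As a quick sanity check, here is a minimal sketch (assuming a stand-in `nn.Linear(5, 5)` in place of the real SimpleNet) comparing a single batched call against an explicit loop over samples:

```python
import torch
import torch.nn as nn

# Stand-in for the SimpleNet model (assumption; the real network
# lives in examples/1_SimpleNet/simplenet.py).
model = nn.Linear(5, 5)

a = torch.Tensor([0.0, 1.0, 2.0, 3.0, 4.0])
batch = torch.stack([a, a])  # shape (2, 5): batch index first

out_batched = model(batch)                           # one batched call
out_looped = torch.stack([model(x) for x in batch])  # explicit loop

print(torch.allclose(out_batched, out_looped))  # True
```

If the two agree (as they do here), Torch is doing the batching for us and no explicit Fortran-side loop should be necessary.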
It would still be good to implement this on the FTorch side, but it is worth noting that, with a little thought and care, batching can be incorporated on the PyTorch side so that the model accepts Fortran arrays of arbitrary size in one dimension (a sketch of this approach follows below).

This is what was done for MiMA here: https://github.com/DataWaveProject/MiMA-machine-learning/blob/ML/src/shared/pytorch/arch_davenet.py, though it is not the easiest code to follow.

It might still be worth comparing the performance of implementing this on the Fortran side versus leaving it to Torch.
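As a rough illustration of the PyTorch-side approach (a hypothetical wrapper, loosely modelled on the MiMA code linked above; the class and argument names are invented for this sketch):

```python
import torch
import torch.nn as nn

class BatchedWrapper(nn.Module):
    """Accept a flat array of arbitrary length (a multiple of the
    feature size), reshape it to (batch, features), run the wrapped
    network, and hand back a flat array for the Fortran caller."""

    def __init__(self, net: nn.Module, n_features: int):
        super().__init__()
        self.net = net
        self.n_features = n_features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch = x.reshape(-1, self.n_features)  # infer the batch size
        return self.net(batch).flatten()

# Usage: a 10-element input is treated as a batch of two 5-vectors.
wrapped = BatchedWrapper(nn.Linear(5, 5), n_features=5)
print(wrapped(torch.arange(10.0)).shape)  # torch.Size([10])
```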
I started experimenting with how FTorch behaves when emulating a list of vectors on this branch: https://github.com/Cambridge-ICCS/FTorch/tree/batching [work in progress]
I looked into this as part of a reply to #343.

LibTorch can handle this. However, the batch index should always be the first dimension of the tensor. I can't remember off the top of my head which ordering of `tensor_layout` preserves this, so we need to be careful; a Python analogue of the batch-first requirement is sketched below.

I have suggested that the querent provide an MWE from which we can investigate; otherwise we should put one together ourselves.
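To illustrate the batch-first point from the Python side (only an analogue: in FTorch the dimension mapping is controlled by the `tensor_layout` argument rather than an explicit transpose; the network here is again a stand-in):

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(5, 5)  # stand-in network (assumption)

# A Fortran-ordered buffer holding two 5-element samples; in Fortran
# this would typically be dimensioned (features, batch).
a = np.asfortranarray(np.arange(10.0, dtype=np.float32).reshape(5, 2))

t = torch.from_numpy(a)  # shape (5, 2): batch index is last
out = model(t.T)         # transpose so the batch index is dim 0
print(out.shape)         # torch.Size([2, 5])
```

Passing `t` without the transpose fails here (and, for a square feature dimension, would silently swap batch and feature indices), which is exactly the kind of layout mistake the `tensor_layout` handling needs to guard against.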
@Mikolaj-A-Kowalski Just reviewing old issues and assigning you to this as discussed last week.