Separate BatchGenerator into standalone Slicer and Batcher components?
Current state
Currently, xbatcher v0.3.0's `BatchGenerator` is an all-in-one class that does too many things, and more features are planned. The 400+ lines of code at https://github.com/xarray-contrib/xbatcher/blob/v0.3.0/xbatcher/generators.py are not easy for people to understand and contribute to without spending a few hours. To make things more maintainable and future-proof, we might need a major refactor.
Proposal
Split `BatchGenerator` into two (or more) subcomponents. Specifically:
- A `Slicer` that does the slicing/subsetting/cropping/tiling/chipping from a multi-dimensional `xarray` object.
- A `Batcher` that groups together the pieces from the `Slicer` into batches of data.
These are the parameters from the current `BatchGenerator` that would be handled by each component:

`Slicer`:
- `input_dims`
- `input_overlap`

`Batcher`:
- `batch_dims`
- `concat_input_dims`
- `preload_batch`
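
To make the proposed division concrete, here is a minimal sketch of what the decoupled API could look like. Everything below is hypothetical: `Slicer` and `Batcher` don't exist yet and their exact signatures are up for discussion; the parameters are just split as listed above.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"air": (("time", "lat", "lon"), np.random.rand(100, 16, 16))}
)

# Hypothetical Slicer: handles input_dims/input_overlap, yielding
# one chip (an xarray subset) at a time.
slicer = Slicer(
    ds,
    input_dims={"lat": 8, "lon": 8},
    input_overlap={"lat": 4, "lon": 4},
)

# Hypothetical Batcher: handles batch_dims/concat_input_dims/preload_batch,
# grouping the Slicer's chips into batches.
batcher = Batcher(
    slicer,
    batch_dims={"time": 10},
    concat_input_dims=False,
    preload_batch=True,
)

for batch in batcher:
    ...  # feed each batch to an ML model
```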
Benefits
- A NaN checker could be inserted in between `Slicer` and `Batcher` (see the pipeline sketch after this list)
  - #158
  - #162
- All the extra logic on deleting/adding extra dimensions can be done on the `Batcher` side, or in a step post-`Batcher`
  - #36
  - #127
- Allow for creating train/val/test splits after `Slicer` but before `Batcher`
  - https://github.com/xarray-contrib/xbatcher/discussions/78
  - Also, some people do shuffling after getting slices of data, while others may shuffle after batches are created, xref https://github.com/xarray-contrib/xbatcher/pull/170
- Streaming data for performance reasons
  - In torchdata, it is possible to have the `Slicer` run in parallel with the `Batcher`. E.g. with a `batch_size` of 128, `Slicer` would load data up to 128 chips and pass it on to `Batcher`, which feeds it to the ML model while the next round of data processing happens, all without loading everything into memory.
  - https://github.com/orgs/xarray-contrib/projects/1
- Flexibility with what step to cache things at
  - At https://github.com/xarray-contrib/xbatcher/issues/109, the proposal was to cache things after `Batcher`, when the batches have already been generated. Sometimes though, people might want to set `batch_size` as a hyperparameter in their ML experimentation, in which case the cache should be done after `Slicer`.
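
As a concrete illustration of the composability argument (the NaN check and train/val/test split items above), here is a rough sketch. None of this is existing xbatcher API; `slicer` is assumed to be any iterable of `xarray` chips, like the hypothetical `Slicer` above, and the helper name is made up:

```python
import numpy as np

def drop_all_nan_chips(chips):
    # Hypothetical NaN checker sitting between Slicer and Batcher
    # (xref #158): skip chips whose data is entirely NaN.
    for chip in chips:
        if not np.isnan(chip["air"].values).all():
            yield chip

chips = list(drop_all_nan_chips(slicer))

# Train/val split after Slicer but before Batcher
# (xref discussions/78); a simple 80/20 holdout for brevity.
n_train = int(0.8 * len(chips))
train_batches = Batcher(chips[:n_train], batch_dims={"time": 10})
val_batches = Batcher(chips[n_train:], batch_dims={"time": 10})
```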
Cons
- May result in the current one-liner becoming a multi-liner
- Could lead to some backwards incompatibility/breaking changes
Thanks for opening this issue @weiji14! Great idea for a refactor to simplify the code base, promote new contributions, and help solve the web of existing issues!
I think when using `concat_input_dims=False`, the division between `Slicer` and `Batcher` that you suggested makes a lot of sense and would be relatively simple to decouple (at least for those who've spent the time getting familiar with the current implementation).
When using `concat_input_dims=True`, it's a bit more complicated because `batch_dims` can impact slicing. Specifically, the input dataset is sliced on the union of `input_dims` and `batch_dims` in that case. There are a few options to account for this:
1. Break backwards compatibility by not ever slicing on `batch_dims`, even when `concat_input_dims==True`
2. `batch_dims` would need to also be included in `Slicer`
3. A third component could handle slicing on `batch_dims` between the `Slicer` and `Batcher` components
4. Additional slicing would happen in `Batcher` for this edge case
I expect that option 3 (a separate component for this edge case) would make the most sense. I'll work on this a bit now.
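
For reference, the edge case can be reproduced with today's API (this uses the real v0.3.0 `BatchGenerator` signature on a toy dataset; the printed dims are just to show the concatenation behaviour):

```python
import numpy as np
import xarray as xr
from xbatcher import BatchGenerator

ds = xr.Dataset({"air": (("time", "lat", "lon"), np.random.rand(20, 16, 16))})

# With concat_input_dims=True, the dataset is sliced on the union of
# input_dims and batch_dims, so batch_dims cannot live purely in a
# Batcher component without one of the options above.
bgen = BatchGenerator(
    ds,
    input_dims={"lat": 8, "lon": 8},
    batch_dims={"time": 10},
    concat_input_dims=True,
)

for batch in bgen:
    print(batch.dims)  # renamed *_input dims, concatenated along a new batch dim
    break
```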
I think this setup would mimic what I'm doing now with my rolling/batching scheme outside of xbatcher. The important thing there is that I can explicitly control the batch sizes, even with predicates involved.
I think if we include predicates though, we need a map that can "unbatch" the results, because the mapping may not be straightforward, especially if there are overlaps between the result chips. See #43
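
One possible shape for that unbatching map, as a rough sketch (nothing here is existing xbatcher API; the predicate and index map are purely illustrative):

```python
def filter_with_index_map(chips, predicate):
    # Keep chips that pass a predicate, remembering each survivor's
    # original slice index so model outputs can later be "unbatched"
    # back onto the source grid (xref #43).
    kept, index_map = [], []
    for i, chip in enumerate(chips):
        if predicate(chip):
            kept.append(chip)
            index_map.append(i)
    return kept, index_map

# Usage sketch: run the model on batches of `kept`, then use
# `index_map` to scatter predictions back to the original (possibly
# overlapping) chip positions, e.g. averaging where chips overlap.
```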