
Custom generator for training on out-of-memory datasets


In https://bering-ivis.readthedocs.io/en/latest/oom_datasets.html, for out-of-memory datasets, you suggest training on h5 files stored on disk.

In my case, I can't use h5 files, but I could use a custom generator that yields batches of data as numpy arrays.

Is there a way to provide batched data through a custom generator function? Something like keras' fit_generator.

Thank you

candalfigomoro avatar Jan 27 '20 16:01 candalfigomoro

Hi thanks for raising this, we agree that this would be a useful feature to have. We are looking into whether we could support the use of generators with ivis.

Szubie avatar Jan 28 '20 11:01 Szubie

Just as an update to this, using a generator to train ivis is difficult since the triplet sampling algorithm may need to retrieve k-nearest neighbours (KNNs) or negative points that are not in the current batch - we normally index the dataset to retrieve these efficiently, but a generator can only be iterated over.

We are exploring other potential ways we could make training on out-of-memory data easier, so will leave this issue open as we look into it.

Szubie avatar Feb 12 '20 12:02 Szubie

@Szubie Thank you very much for the update.

As a side note, in https://bering-ivis.readthedocs.io/en/latest/oom_datasets.html I read:

When training on an h5 dataset, we recommend using the shuffle_mode='batch' option in the fit method. This will speed up the training process by pulling a batch of data from disk and shuffling that batch, rather than shuffling across the whole dataset.

I don't know if this is a custom training strategy, but if you use keras' fit() method, my understanding is that "batch shuffle" doesn't shuffle rows inside batches, but rather shuffles the order of the batches (please correct me if I'm wrong).
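For reference, the usage pattern the docs seem to describe looks roughly like this (just a sketch on my side; the file name, dataset key and Ivis parameters are placeholders):

```python
# Rough sketch of the h5-on-disk workflow described in the docs.
# "data.h5" and the "features" dataset name are placeholders.
import h5py
from ivis import Ivis

with h5py.File("data.h5", "r") as f:
    X = f["features"]                   # h5py Dataset, read lazily from disk
    model = Ivis(embedding_dims=2, k=15)
    model.fit(X, shuffle_mode="batch")  # shuffle within batches pulled from disk
    embeddings = model.transform(X)
```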

candalfigomoro avatar Feb 12 '20 14:02 candalfigomoro

I don't know if this is a custom training strategy, but if you use keras' fit() method, my understanding is that "batch shuffle" doesn't shuffle rows inside batches, but rather shuffles the order of the batches (please correct me if I'm wrong).

That's right.

Each triplet is made up of three data points: 1) the anchor, 2) the positive example (one of the k-nearest neighbors), and 3) a negative example. The keras fit method only shuffles the anchors - when using the 'batch' shuffle mode, anchors are shuffled within a batch.

But each anchor data point then needs to be combined with a positive and a negative example to create a triplet. These points may be in a completely different part of the dataset, outside the current batch of 'anchors'.

For each anchor, we can retrieve the index of a positive example using the AnnoyIndex, but to actually retrieve the data at that index we need an indexable data structure (at least at the moment).
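To illustrate the point (this is not ivis's internal code, just a sketch of why index-based access is needed):

```python
# Illustrative only -- not ivis's internals. It shows why triplet
# construction needs random (indexed) access to the whole dataset.
import random
from annoy import AnnoyIndex

def make_triplet(data, annoy_index: AnnoyIndex, anchor_idx: int, k: int = 15):
    """`data` must support data[i]; a generator that can only be
    iterated over cannot serve these lookups efficiently."""
    # Positive example: one of the anchor's k nearest neighbours
    # (the first result is the anchor itself, so it is skipped).
    neighbours = annoy_index.get_nns_by_item(anchor_idx, k + 1)[1:]
    positive_idx = random.choice(neighbours)

    # Negative example: any point outside the anchor's neighbourhood.
    negative_idx = random.randrange(len(data))
    while negative_idx == anchor_idx or negative_idx in neighbours:
        negative_idx = random.randrange(len(data))

    return data[anchor_idx], data[positive_idx], data[negative_idx]
```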

Szubie avatar Feb 12 '20 15:02 Szubie

Hi, we have recently introduced initial support for training on arbitrary out-of-memory datasets using ivis by formalizing the interface that input data must conform to.

Ivis will accept ivis.data.sequence.IndexableDataset instances in its fit, transform and fit_transform methods. An IndexableDataset inherits from collections.abc.Sequence and defines one new method, shape, that takes no arguments and returns the expected shape of the dataset (for example, [rows, columns]).

The collections.abc.Sequence class requires __len__ (returns the number of rows) and __getitem__ (returns the data at a row index) to be implemented. When implementing __getitem__, you can customize how the data is retrieved in any way desired.
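For example, a minimal sketch of a custom dataset backed by a numpy memmap (the file path and dtype are placeholders; note that we define shape here as a property so it mimics numpy-style `.shape` access, which may need adjusting depending on your ivis version):

```python
# Minimal sketch of a custom dataset backed by a numpy memmap.
import numpy as np
from ivis.data.sequence import IndexableDataset

class MemmapDataset(IndexableDataset):
    def __init__(self, path, shape, dtype="float32"):
        self._data = np.memmap(path, mode="r", dtype=dtype, shape=shape)
        self._shape = shape

    @property
    def shape(self):
        # Expected shape of the dataset, e.g. (rows, columns)
        return self._shape

    def __len__(self):
        return self._shape[0]

    def __getitem__(self, idx):
        # Reads only the requested row(s) from disk
        return np.asarray(self._data[idx])
```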

As a reference example, we have provided an ivis.data.sequence.ImageDataset class, which reads image files from disk into memory when indexed.

This is still quite a new feature and we may enhance it based on feedback, so if you end up trying it, any thoughts on your experience would be valued. In time, we also want to expand the classes we provide to cover some common use cases.

Szubie avatar Jan 12 '21 16:01 Szubie

As an update to this issue, support for data stored outside of memory has been improved with the new get_batch method, which will be called in preference to __getitem__ when available. Fetching a full batch of data at once can greatly improve performance when running ivis on an out-of-memory data store.

For an example of using ivis on data stored in a sqlite database we've added the following jupyter notebook: https://github.com/beringresearch/ivis/blob/master/notebooks/using_ivis_with_sqlite.ipynb

By using get_batch, the SqliteDB class is able to fetch all the data required for an ivis training step in a single SQL query. The same technique can be applied to adapt any out-of-memory dataset with minimal code; a rough sketch of the idea follows below.
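As a rough sketch (not the notebook's exact SqliteDB class; the table name, row layout and get_batch signature here are assumptions):

```python
# Sketch of batch retrieval from a SQLite table. The table name "data"
# and the rowid + feature-column layout are assumptions.
import sqlite3
import numpy as np
from ivis.data.sequence import IndexableDataset

class SqliteDataset(IndexableDataset):
    def __init__(self, db_path, n_rows, n_cols):
        self.conn = sqlite3.connect(db_path)
        self._shape = (n_rows, n_cols)

    @property
    def shape(self):
        return self._shape

    def __len__(self):
        return self._shape[0]

    def __getitem__(self, idx):
        # Single-row lookup; SQLite rowids are 1-based
        row = self.conn.execute(
            "SELECT * FROM data WHERE rowid = ?", (int(idx) + 1,)
        ).fetchone()
        return np.array(row, dtype="float32")

    def get_batch(self, idx_seq):
        # One query fetches every row needed for a training step
        placeholders = ",".join("?" * len(idx_seq))
        rows = self.conn.execute(
            f"SELECT rowid, * FROM data WHERE rowid IN ({placeholders})",
            [int(i) + 1 for i in idx_seq],
        ).fetchall()
        # Re-order results to match the requested index order
        by_rowid = {r[0]: r[1:] for r in rows}
        return np.array([by_rowid[int(i) + 1] for i in idx_seq], dtype="float32")
```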

Closing this issue as it has now been addressed.

Szubie avatar Jan 11 '23 11:01 Szubie