Feature request: IterableDataset.push_to_hub
Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with IterableDataset.
Suppose you'd like to filter LAION based on certain conditions, but since LAION doesn't fit on your disk, you'd like to leverage streaming:
from datasets import load_dataset
dataset = load_dataset("laion/laion400m", streaming=True, split="train")
Then you could filter the dataset based on certain conditions:
filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400)
In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push:
from datasets import Dataset
Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...)
It would be great if we could instead lazily push the data to the Hub (basically stream the data to the Hub), without being limited by our disk size:
filtered_dataset.push_to_hub("my-filtered-dataset")
Motivation
This feature would be very useful for people who want to filter huge datasets without having to load the entire dataset, or a filtered version thereof, on their local disk.
Your contribution
Happy to test out a PR :)
+1
+1
+1, should be possible now? :) https://huggingface.co/blog/xethub-joins-hf
Haha, we're working hard to integrate Xet into the HF back-end, it will enable cool use cases :)
Anyway, about IterableDataset.push_to_hub: I'd be happy to provide guidance and answer questions if anyone wants to start a first simple implementation of this.
+1
+1
+1
+1
Currently running into this when filtering Common Corpus for Dutch entries.
Extra points for somehow making it resumable on error. 11 TB is a lot of data to stream on a home connection without encountering any sort of errors along the way.
If it helps, IterableDataset already implements .state_dict() and .load_state_dict(), which you can use to resume a stream.
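A minimal sketch of that resumption pattern (the dataset name "allenai/c4" and the checkpoint interval are just for illustration; any streaming dataset works the same way):

from datasets import load_dataset

# stream the dataset
ds = load_dataset("allenai/c4", "en", split="train", streaming=True)

# iterate and checkpoint the stream position periodically
state = None
for i, example in enumerate(ds):
    if i == 1_000:
        state = ds.state_dict()  # a plain dict you can pickle or save to disk
        break

# later (e.g. after a crash), restore the position and keep going
resumed = load_dataset("allenai/c4", "en", split="train", streaming=True)
resumed.load_state_dict(state)
for example in resumed:
    # resumes from around where the checkpoint was taken
    # (restart is at the nearest shard/batch boundary, not exactly at example 1000)
    ...

Since the state is a plain dictionary, it can be written to disk alongside whatever partial output you have already produced, so a crashed 11 TB job doesn't have to restart from zero.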
+1
+1
+1
Just added a first implementation for IterableDataset.push_to_hub() :)
I'll do a new release soon; in the meantime, feel free to install datasets from source to try it out!
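A rough sketch of how the original example from this issue should look with the new implementation (assuming the API mirrors Dataset.push_to_hub; "my-filtered-dataset" is a placeholder repo id, and you need to be logged in to the Hub):

from datasets import load_dataset

# stream and filter, as in the original example
dataset = load_dataset("laion/laion400m", streaming=True, split="train")
filtered_dataset = dataset.filter(lambda example: example["HEIGHT"] > 400)

# the filtered stream is uploaded as it is iterated,
# without materializing the whole dataset on disk first
filtered_dataset.push_to_hub("my-filtered-dataset")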