
`load_dataset` fails to load dataset saved by `save_to_disk`

Open sliedes opened this issue 1 year ago • 6 comments

Describe the bug

This code fails to load the dataset it just saved:

from datasets import load_dataset
from transformers import AutoTokenizer

MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

dataset = load_dataset("yelp_review_full")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets.save_to_disk("dataset")

tokenized_datasets = load_dataset("dataset/")  # raises

It raises ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}.

I believe this bug is caused by the logic that tries to infer the dataset format: it picks the most common file extension. However, a small dataset can fit in a single .arrow file alongside two JSON metadata files, causing the format to be inferred as JSON:

$ ls -l dataset/test
-rw-r--r-- 1 sliedes sliedes 191498784 Jul  1 13:55 data-00000-of-00001.arrow
-rw-r--r-- 1 sliedes sliedes      1730 Jul  1 13:55 dataset_info.json
-rw-r--r-- 1 sliedes sliedes       249 Jul  1 13:55 state.json
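
For illustration, here is a minimal sketch of the kind of majority-vote inference that produces this result (the function name and structure are hypothetical, not the actual `datasets` internals):

from collections import Counter
from pathlib import Path

def infer_split_format(split_dir: str) -> str:
    # Hypothetical sketch: pick the most common file extension in a split
    # directory. With one .arrow data file and two .json metadata files,
    # "json" wins the vote even though the data itself is Arrow.
    extensions = [p.suffix.lstrip(".") for p in Path(split_dir).iterdir() if p.is_file()]
    return Counter(extensions).most_common(1)[0][0]

print(infer_split_format("dataset/test"))  # -> 'json' (two .json files vs. one .arrow)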

Steps to reproduce the bug

Execute the code above.

Expected behavior

The dataset is loaded successfully.

Environment info

  • datasets version: 2.20.0
  • Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39
  • Python version: 3.12.4
  • huggingface_hub version: 0.23.4
  • PyArrow version: 16.1.0
  • Pandas version: 2.2.2
  • fsspec version: 2024.5.0

sliedes avatar Jul 01 '24 12:07 sliedes

In my case the error was:

ValueError: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.

Did you try `load_from_disk`?
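
For reference, loading a `save_to_disk` directory looks like this:

from datasets import load_from_disk

# Reads the Arrow data plus the state.json/dataset_info.json metadata
# written by save_to_disk
tokenized_datasets = load_from_disk("dataset/")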

happyTonakai avatar Jul 23 '24 09:07 happyTonakai

More generally, is there any reason there is no API consistency between `save_to_disk` and `push_to_hub`?

It would be nice to be able to `save_to_disk`, then upload manually to the Hub and `load_dataset` (which works in some situations but not all)...
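
For comparison, the Hub round trip that does stay consistent looks like this (the repo name below is a placeholder):

# push_to_hub writes Parquet shards plus metadata that load_dataset understands
tokenized_datasets.push_to_hub("your-username/yelp-tokenized")  # placeholder repo name

from datasets import load_dataset
tokenized_datasets = load_dataset("your-username/yelp-tokenized")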

ManuelFay avatar Aug 05 '24 09:08 ManuelFay

I have the exact same problem!

kfarivar avatar Dec 03 '24 11:12 kfarivar

`load_from_disk` managed to load the dataset, but the bug with `load_dataset` needs to be fixed.

kfarivar avatar Dec 03 '24 11:12 kfarivar

Any update? I need some `load_dataset` features such as `num_proc` or `streaming` to reduce RAM usage, but I got this error.
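
For context, these are `load_dataset` options that `load_from_disk` does not offer; the calls below sketch what is wanted but currently hit the same error on a `save_to_disk` directory:

from datasets import load_dataset

# Load with several worker processes
ds = load_dataset("dataset/", num_proc=8)

# Stream examples lazily instead of materializing everything in RAM
ds = load_dataset("dataset/", streaming=True)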

anhalu avatar May 24 '25 05:05 anhalu

I found that out the hard way. When each split has more than one Arrow file, the splits are marked correctly; when there is only one, the JSON files win. So adding the `.arrow` format to the sorter does the trick.
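
Until a fix lands, one workaround is to bypass the inference entirely by naming the builder and the data files explicitly (the paths below match the layout shown above):

from datasets import load_dataset

# Forcing the "arrow" builder means the JSON metadata files are never counted
tokenized_datasets = load_dataset(
    "arrow",
    data_files={
        "train": "dataset/train/*.arrow",
        "test": "dataset/test/*.arrow",
    },
)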

gstrat88 avatar Nov 11 '25 05:11 gstrat88