
Would like to ask where I can download data to train / test pNLP-Mixer?

Open tiendung opened this issue 2 years ago • 7 comments

I couldn't find any clue about how to get the datasets, so I'm asking here. It's not an issue related to the implementation itself.

tiendung avatar Feb 27 '22 05:02 tiendung

Hello,

The three datasets I used to evaluate my implementation are the MTOP dataset, the multilingual ATIS dataset, and the IMDB dataset.

You can download the MTOP dataset here. The IMDB dataset can also be downloaded easily here. As for the multilingual ATIS dataset, getting access is a bit more involved: you need to create an LDC account, request the dataset, and wait for the request to be approved (this might be a manual process). The multilingual ATIS catalogue page is here.
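
For reference, here is a minimal download sketch for the IMDB data in Python. This is an illustration, not part of this repo; the URL is the usual Stanford mirror for the Large Movie Review Dataset, so verify it still resolves before relying on it.

# Sketch: fetch and extract the IMDB (aclImdb) archive.
import tarfile
import urllib.request
from pathlib import Path

url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
out_dir = Path("./data")
out_dir.mkdir(parents=True, exist_ok=True)

archive = out_dir / "aclImdb_v1.tar.gz"
if not archive.exists():
    urllib.request.urlretrieve(url, archive)

# Extracts to ./data/aclImdb; rename the folder or point your config at that path.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)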

If you have further questions, feel free to add a comment.

tonyswoo avatar Feb 27 '22 05:02 tonyswoo

I tried several times but still cannot figure out how to run the training script on the IMDB dataset. I get the following error:

t@medu pnlp-mixer % python3 run.py -c cfg/imdb_xs.yml -n imdb_xs -m train
  File "/Users/t/repos/pnlp-mixer/run.py", line 167, in <module>
    data_module = PnlpMixerDataModule(cfg.vocab, train_cfg, model_cfg.projection)
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 20, in __init__
    self.tokenizer = BertWordPieceTokenizer(**vocab_cfg.tokenizer)
  File "/usr/local/lib/python3.9/site-packages/tokenizers/implementations/bert_wordpiece.py", line 30, in __init__
    tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(unk_token)))
Exception: Error while initializing WordPiece: No such file or directory (os error 2)

I downloaded the IMDB dataset and put it at ./data/imdb:

t@medu pnlp-mixer % ll ./data/imdb
.rw-r--r-- t staff 826 KB Wed Apr 13 00:14:11 2011  imdb.vocab
.rw-r--r-- t staff 882 KB Sun Jun 12 05:54:43 2011  imdbEr.txt
.rw-r--r-- t staff 3.9 KB Sun Jun 26 07:18:03 2011  README
drwxr-xr-x t staff 224 B  Wed Apr 13 00:22:40 2011  test/
drwxr-xr-x t staff 320 B  Sun Jun 26 08:09:11 2011  train/

Can you give some hints?

tiendung avatar Feb 28 '22 13:02 tiendung

Hi,

Could you show me the configuration file (the .yml file) you are using?

I believe the issue is that the vocab file does not exist at the provided path, i.e. the path given in vocab.tokenizer.vocab of the configuration file.

If you wish to use the multilingual BERT vocabulary, the file is included in the repo at ./wordpiece/mbert_vocab.txt.
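
For reference, the relevant fragment of the config would then look roughly like this (a sketch; since dataset.py constructs BertWordPieceTokenizer(**vocab_cfg.tokenizer), the only key the error depends on is vocab.tokenizer.vocab, and any other BertWordPieceTokenizer keyword argument can sit alongside it):

vocab:
  tokenizer:
    vocab: ./wordpiece/mbert_vocab.txt  # must point to an existing vocab file
    lowercase: true                     # illustrative; any BertWordPieceTokenizer kwarg works here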

tonyswoo avatar Feb 28 '22 13:02 tonyswoo

You are right. I needed to change the config to point to mbert_vocab.txt.

tiendung avatar Feb 28 '22 21:02 tiendung

Sorry to bother you again. Now I'm stuck at AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'. I guess it's related to the tokenizer? My config file is https://github.com/telexyz/pnlp-mixer/blob/master/cfg/imdb_xs.yml

  File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 87, in __getitem__
    words = self.get_words(fields)
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 109, in get_words
    return [w[0] for w in self.tokenizer.pre_tokenizer.pre_tokenize_str(self.normalize(fields[0]))][:self.max_seq_len]
AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'

tiendung avatar Feb 28 '22 21:02 tiendung

Hi,

Which version of tokenizers are you using?

tonyswoo avatar Mar 02 '22 00:03 tonyswoo

I had the same problem; the command below fixed it.

pip install tokenizers==0.11.4
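
A quick way to confirm the pinned version exposes the attribute dataset.py needs (a sketch; the vocab path assumes the ./wordpiece/mbert_vocab.txt file mentioned earlier in this thread):

import tokenizers
from tokenizers import BertWordPieceTokenizer

print(tokenizers.__version__)  # expect 0.11.4 after the pin

# dataset.py accesses tokenizer.pre_tokenizer.pre_tokenize_str(...),
# which is exactly what raises AttributeError on incompatible versions.
tok = BertWordPieceTokenizer("./wordpiece/mbert_vocab.txt")
print(tok.pre_tokenizer.pre_tokenize_str("Hello world"))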

zzk0 avatar Nov 03 '22 07:11 zzk0