
Tips for training MTL on large dataset

Open negacy opened this issue 5 years ago • 7 comments

Are there any tips on how to train the MTL model on large datasets with millions of trainable parameters? I am trying to train it on a machine with 1 TB of memory, but I am still hitting the memory limit.

Thanks.

negacy avatar Feb 27 '19 20:02 negacy

How large are your train/dev/test datasets (in terms of file size)? The architecture loads the complete datasets into memory; if they are too large, your machine will crash. You would then need to change the code so that the data is streamed from disk instead of being read into memory (see the sketch below).

If your datasets are small (say, smaller than 10 GB), the issue is somewhere else.
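A minimal sketch of what such streaming could look like, assuming CoNLL-style files with one tab-separated token/label pair per line and blank lines between sentences (the function name and file path are hypothetical, not part of this repository):

```python
# Minimal sketch: stream sentences from disk instead of loading the whole
# dataset into memory. Assumes "token<TAB>label" per line, blank line between
# sentences; stream_conll_sentences and 'train.conll' are hypothetical names.

def stream_conll_sentences(path):
    """Yield one sentence at a time as a list of (token, label) tuples."""
    sentence = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:            # blank line marks a sentence boundary
                if sentence:
                    yield sentence
                    sentence = []
                continue
            token, label = line.split('\t')
            sentence.append((token, label))
    if sentence:                    # file may not end with a blank line
        yield sentence


if __name__ == '__main__':
    # Only one sentence is held in memory at any time.
    for sent in stream_conll_sentences('train.conll'):
        print(len(sent), 'tokens')
        break
```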

nreimers avatar Feb 27 '19 21:02 nreimers

The datasets are small, less than 3 MB per task. I have seen training fail due to the memory limit for any model with more than 1 million trainable parameters; training goes smoothly for models with fewer than 1 million trainable parameters.
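For reference, a quick way to check where a model lands relative to that threshold, using a hypothetical toy Keras model (not the network built by this repository):

```python
# Hypothetical toy BiLSTM model, just to show how to inspect the trainable
# parameter count that seems to trigger the crash.
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dense
import keras.backend as K

model = Sequential([
    Embedding(input_dim=10000, output_dim=100),       # 10,000 x 100 = 1,000,000 weights
    Bidirectional(LSTM(100, return_sequences=True)),
    Dense(10, activation='softmax'),
])

trainable = sum(K.count_params(w) for w in model.trainable_weights)
print('Trainable parameters:', trainable)
model.summary()   # per-layer breakdown
```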

negacy avatar Feb 27 '19 22:02 negacy

That is strange. How many tasks are you training?

It should be no problem to train with more than 1 million parameters, even with much less memory. I personally have about 16 GB of RAM, and training runs smoothly on larger networks and datasets.

Are you using Python 3.6 (or newer) and a recent Linux system?

nreimers avatar Feb 27 '19 22:02 nreimers

Yes, I am using Python 3.6 on CentOS 7. I am having this issue even with just two tasks.

negacy avatar Feb 27 '19 23:02 negacy

Sadly, I have no idea why this happens. It should work fine.

You could also test this implementation: https://github.com/UKPLab/elmo-bilstm-cnn-crf

It works similarly to this repository, but it also allows you to use ELMo representations. Maybe the issue does not occur there?

nreimers avatar Feb 28 '19 12:02 nreimers

Still the same issue, even with the ELMo implementation. Here is the error:

Training: 0 Batch [00:00, ? Batch/s]
/tmp/slurmd/job1924456/slurm_script: line 18: 21081 Segmentation fault (core dumped) python Train_multitask.py

negacy avatar Feb 28 '19 14:02 negacy

Is Python actually allocating that much memory? Maybe the OS imposes a limit on the memory / heap / stack size, so that the script crashes even if only, say, 4 GB of RAM are allocated.

Maybe this thread helps: https://stackoverflow.com/questions/10035541/what-causes-a-python-segmentation-fault
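One way to narrow this down, assuming a Linux machine: print the OS resource limits (the shell equivalent is `ulimit -a`) and enable Python's faulthandler so a segfault at least dumps a Python traceback. A minimal sketch, not specific to this repository:

```python
# Run this at the top of Train_multitask.py, or in the same environment the
# Slurm job uses, to inspect OS-imposed limits and get a traceback on segfaults.
import faulthandler
import resource

# Soft/hard limits the OS imposes; -1 means "unlimited". A low RLIMIT_AS or
# RLIMIT_STACK can kill the process long before physical RAM runs out.
for name in ('RLIMIT_AS', 'RLIMIT_DATA', 'RLIMIT_STACK'):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print(name, 'soft =', soft, 'hard =', hard)

# Dump a Python traceback on segmentation faults instead of dying silently.
faulthandler.enable()
```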

nreimers avatar Mar 01 '19 12:03 nreimers