
Memory usage of `debug-data` with a huge training set

sfragis opened this issue on Dec 03, 2019 · 5 comments

Hi, I'm using spaCy 2.2.2 to train new tagger and parser models for Italian. My training data set is quite big (about 2.3 GB for the train set and 580 MB for the dev set) and is saved in two JSONL files. I'm seeing unexpectedly high memory usage when running the `debug-data` command: memory usage starts low and then grows until it consumes my 32 GB of RAM as well as the entire swap (about the same size). Before upgrading my RAM to 128 GB (which I suspect might be useless), I'd like to hear your opinion on:

  • hints about data set structure: for instance, comments in issue #4700 suggested reducing the average sentence length, but I have no idea what values might be optimal; is there a rule of thumb for sizing the data set properly?
  • possible optimizations to the source code to reduce the memory footprint (for instance by improving lazy loading of the data set); I'm willing to contribute to spaCy if anyone would kindly point me to the problematic parts (if any, of course)

Info about spaCy

  • spaCy version: 2.2.2
  • Platform: Linux-4.4.0-112-generic-x86_64-with-debian-stretch-sid
  • Python version: 3.7.4

sfragis · Dec 03 '19

Thanks for the report!

> My training data set is quite big (about 2.3 GB for the train and 580 MB for the dev) and is saved in two JSONL files.

You probably want to split these into multiple files. spaCy can also read from directories instead of single JSON files, so there's really no need to have a 2.3 GB file. This could easily cause other problems down the line.
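
As an aside, splitting a large JSONL file doesn't require loading it all at once. Here's a minimal sketch in plain Python; the file names and chunk size are placeholders, pick whatever suits your corpus:

```python
import itertools
from pathlib import Path

def split_jsonl(src, out_dir, lines_per_file=10_000):
    # Stream the big file line by line; only one chunk is held in memory at a time.
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    with open(src, encoding="utf8") as f:
        for i in itertools.count():
            chunk = list(itertools.islice(f, lines_per_file))
            if not chunk:
                break
            out_path = out_dir / f"part-{i:04d}.jsonl"
            out_path.write_text("".join(chunk), encoding="utf8")

split_jsonl("train.jsonl", "train_parts/", lines_per_file=10_000)
```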

About debug-data: since the `debug-data` command is mostly a debugging utility, we didn't particularly focus on optimising it for efficiency. For instance, I'm pretty sure we're just loading the whole corpus into memory (e.g. by calling `list()` around it), and I think we also make at least one additional pass over the data to compute the stats. That's typically okay, because you usually only run the debugging manually a few times, and even if you have to wait a few minutes, that's not a big deal.

However, if it's not memory-efficient and you can't use it with large data files, that's obviously bad.

We could probably refactor the logic to process the data as a stream, make a single pass over each corpus, and compute all the stats that way. You can find the source here if you want to give it a try and see if it improves things for you: https://github.com/explosion/spaCy/blob/master/spacy/cli/debug_data.py
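
The single-pass refactor could look roughly like the sketch below. This is not the actual debug_data code, just an illustration of accumulating several statistics while consuming a generator of Docs exactly once:

```python
from collections import Counter

def stream_stats(docs):
    # `docs` is any generator of spaCy Doc objects; it is consumed lazily,
    # so only one Doc needs to be in memory at a time.
    n_docs = 0
    n_tokens = 0
    tag_counts = Counter()
    ent_label_counts = Counter()
    for doc in docs:
        n_docs += 1
        n_tokens += len(doc)
        tag_counts.update(token.tag_ for token in doc)
        ent_label_counts.update(ent.label_ for ent in doc.ents)
    return {
        "n_docs": n_docs,
        "n_tokens": n_tokens,
        "tags": tag_counts,
        "ent_labels": ent_label_counts,
    }
```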

ines · Dec 03 '19

Hi Ines, thank you for your quick reply. I managed to read the whole dataset from JSONL and save it into smaller MessagePack files. The problem may be related to the invocation of GoldCorpus.train_docs, where the returned generator is turned into a list, as you mentioned. I'll try to make the rest of the code more stream-oriented and open a pull request if I succeed.
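
For context, the pattern being discussed would look roughly like this (a sketch only, assuming the spaCy v2 GoldCorpus API mentioned above; the paths are placeholders):

```python
import spacy
from spacy.gold import GoldCorpus  # spaCy v2.x

nlp = spacy.blank("it")
corpus = GoldCorpus("train_parts/", "dev_parts/")

# Memory-hungry: materialise every (doc, gold) pair up front.
# train_docs = list(corpus.train_docs(nlp))

# Streaming alternative: consume the generator once, one (doc, gold) pair at a time.
for doc, gold in corpus.train_docs(nlp):
    pass  # compute or update stats here instead of storing the docs
```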

sfragis · Dec 03 '19

Sorry for the late follow-up, but I just wanted to bump this issue because I still think it's very relevant. Since the PR you created back then, the develop branch has been coming together nicely, but I think the same issues with debug data are still present. For instance, we're still calling `list(Corpus(train_path)(nlp))`.
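
For illustration, the streaming counterpart of that call might look like this (assuming the v3-style Corpus API referenced above; on the develop branch of that era the import path may differ, and the corpus path is a placeholder):

```python
import itertools
import spacy
from spacy.training import Corpus  # was spacy.gold.Corpus on older develop

nlp = spacy.blank("it")
corpus = Corpus("corpus/train.spacy")  # placeholder path

examples = corpus(nlp)                        # a generator of Example objects
head = list(itertools.islice(examples, 100))  # inspect only the first 100
for eg in examples:                           # then stream the rest for stats
    pass
```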

@sfragis, do you have time to rebase your old PR against the new develop branch? If not, I could try to pick the ideas from your old PR and reapply them in a new one...

svlandeg · Aug 20 '20

Hi Sofie, I'd be happy to contribute but honestly I've no time at all. Feel free to pick code and ideas from my PR and adapt them to the develop branch. Cheers

sfragis · Aug 24 '20

Will do, thanks for letting me know!

svlandeg · Aug 24 '20