BERT-pytorch
Making Book Corpus #32
Building the same corpus as the original paper. Please share your tips for preprocessing and downloading the files. It would be great to share the preprocessed data via Dropbox, Google Drive, etc.
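To make the ask concrete, here is a rough sketch of how I'd turn plain-text book files into the tab-separated sentence-pair format shown in this repo's README sample corpus (this assumes nltk with its punkt data for sentence splitting; the paths are placeholders):

```python
import glob
from nltk.tokenize import sent_tokenize  # assumes nltk + punkt data are installed

# Placeholder paths -- point these at wherever your downloaded books live.
BOOK_GLOB = "books/*.txt"
OUT_PATH = "corpus.txt"

with open(OUT_PATH, "w", encoding="utf-8") as out:
    for path in glob.glob(BOOK_GLOB):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        # Strip tabs inside sentences so they can't break the field separator.
        sentences = [s.strip().replace("\t", " ") for s in sent_tokenize(text)]
        # Write adjacent sentence pairs, one pair per line, tab-separated,
        # matching the sample corpus format in the README.
        for first, second in zip(sentences, sentences[1:]):
            out.write(f"{first}\t{second}\n")
```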
The original paper (BERT) uses "the concatenation of BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words)." What do you mean by "Movie Corpus"?
@mapingshuo Sorry, that was my fault, haha. I made up that title in 5 seconds :) Thank you!! 👍
That's okay, I am looking for a valid Book Corpus too.
Both GPT and BERT were trained on BooksCorpus. Presumably there's a private copy people are passing around. There are some web scrapers out there designed for recreating BooksCorpus, but this repetition of work seems unnecessary. If anyone finds a copy, do let me know!
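If anyone does manage to rebuild it, a rough word count against the paper's 800M figure is a cheap sanity check (whitespace splitting here is only an approximation; the paper doesn't say how it counted words):

```python
# Rough sanity check: compare a rebuilt corpus against the ~800M words
# the BERT paper reports for BooksCorpus.
def count_words(path: str) -> int:
    total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            total += len(line.split())
    return total

print(f"{count_words('corpus.txt'):,} words")  # expect something near 800,000,000
```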