
Historical word embeddings

piskvorky opened this issue • 12 comments

…by Stanford, https://nlp.stanford.edu/projects/histwords/

We released pre-trained historical word embeddings (spanning all decades from 1800 to 2000) for multiple languages (English, French, German, and Chinese). Embeddings constructed from many different corpora and using different embedding approaches are included.

Paper: Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change
Code: GitHub
License: Public Domain Dedication and License

piskvorky, Dec 16 '17

@piskvorky can you be more concrete about which embeddings need to be added? There are many.

menshikh-iv, Dec 18 '17

All, preferably (and the non-English ones are particularly interesting).

piskvorky, Dec 18 '17

@piskvorky got it!

menshikh-iv, Dec 19 '17

@piskvorky problem: each zip contains many models, with files named like 1800-w.npy + 1800-vocab.pkl, 1810-w.npy + 1810-vocab.pkl, and so on. This layout only makes sense if we give the user all of the embeddings at once, which is currently impossible.

It is probably worth closing this issue (because it does not fit our setup).

menshikh-iv, Dec 20 '17

I don't understand. What is the problem?

piskvorky, Dec 20 '17

@piskvorky for example, take "All English (1800s-1990s)" (from Google N-Grams eng-all): http://snap.stanford.edu/historical_embeddings/eng-all_sgns.zip

This archive contains many files (matrix + vocabulary pairs):

Archive:  eng-all_sgns.zip
sgns/1800-vocab.pkl
sgns/1800-w.npy
sgns/1810-vocab.pkl
sgns/1810-w.npy
sgns/1820-vocab.pkl
sgns/1820-w.npy
sgns/1830-vocab.pkl
sgns/1830-w.npy
sgns/1840-vocab.pkl
sgns/1840-w.npy
sgns/1850-vocab.pkl
sgns/1850-w.npy
sgns/1860-vocab.pkl
sgns/1860-w.npy
sgns/1870-vocab.pkl
sgns/1870-w.npy
sgns/1880-vocab.pkl
sgns/1880-w.npy
sgns/1890-vocab.pkl
sgns/1890-w.npy
sgns/1900-vocab.pkl
sgns/1900-w.npy
sgns/1910-vocab.pkl
sgns/1910-w.npy
sgns/1920-vocab.pkl
sgns/1920-w.npy
sgns/1930-vocab.pkl
sgns/1930-w.npy
sgns/1940-vocab.pkl
sgns/1940-w.npy
sgns/1950-vocab.pkl
sgns/1950-w.npy
sgns/1960-vocab.pkl
sgns/1960-w.npy
sgns/1970-vocab.pkl
sgns/1970-w.npy
sgns/1980-vocab.pkl
sgns/1980-w.npy
sgns/1990-vocab.pkl
sgns/1990-w.npy

i.e. this archive contains 20 distinct models (and the other links have the same structure). To use these models for their intended purpose (studying semantic change across decades), you need all of them at once; they do not make sense separately. In our case, adding 20 datasets that are useless in isolation is a very bad idea, and moreover it would be extremely inconvenient for the user to download and combine them all.

menshikh-iv, Dec 20 '17
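For concreteness, here is a minimal sketch of loading a single decade from the extracted archive into gensim by hand. It assumes gensim 4.x and that each *-vocab.pkl holds a Python list of words aligned row-by-row with the corresponding *-w.npy matrix (the HistWords pickles were written under Python 2, hence the latin1 encoding); load_decade is a hypothetical helper, not part of any existing API.

```python
import pickle

import numpy as np
from gensim.models import KeyedVectors

def load_decade(prefix):
    """Build a KeyedVectors model from e.g. 'sgns/1800-w.npy' + 'sgns/1800-vocab.pkl'."""
    vectors = np.load(prefix + "-w.npy")  # (vocab_size, dim) embedding matrix
    with open(prefix + "-vocab.pkl", "rb") as f:
        # assumption: a list of words, row-aligned with the matrix
        vocab = pickle.load(f, encoding="latin1")
    kv = KeyedVectors(vector_size=vectors.shape[1])
    kv.add_vectors(vocab, vectors)
    return kv

model_1800 = load_decade("sgns/1800")
```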

I see what you mean, but don't see it as a problem. Why couldn't the dataset loader just return a dictionary of models?

piskvorky, Dec 20 '17

Are you suggesting we join all of this into one large pickle (a dict of KeyedVectors) and return that to the user?

menshikh-iv, Dec 20 '17

No, I mean a dictionary where the key is a particular model name string (the year?) and the value is the relevant Python object (Word2Vec or whatever).

If, as you say, the models are worthless in isolation, then we should return them all in bulk.

piskvorky, Dec 20 '17
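Under that design, the loader's return value might look like the following sketch, building on the hypothetical load_decade helper above (the usage is illustrative, not an existing gensim-data API):

```python
# One KeyedVectors model per decade, keyed by year.
models = {year: load_decade("sgns/%d" % year) for year in range(1800, 2000, 10)}

# Typical diachronic usage: compare a word's nearest neighbours across decades.
print(models[1850].most_similar("broadcast", topn=5))
print(models[1990].most_similar("broadcast", topn=5))
```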

We can only store one .gz file per model right now; that is why I mentioned the large pickle earlier.

menshikh-iv, Dec 20 '17

Aha, I see. Yes, that is a possibility -- if the models are sufficiently small, we could pickle everything as a single dict (no separate .npy files etc).

piskvorky, Dec 20 '17
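Packaging could then be as simple as this sketch, reusing the models dict from the previous snippet (the output file name is made up; this is only feasible if the combined dict fits in memory and within the size limits):

```python
import gzip
import pickle

# Serialize the whole dict of per-decade models into a single gzipped pickle,
# matching the one-file-per-dataset constraint mentioned above.
with gzip.open("histwords-eng-all-sgns.pkl.gz", "wb") as f:
    pickle.dump(models, f)
```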

Sorry for exhuming an old issue, but I was wondering if adding these pre-trained historical word embeddings is still under consideration. These would be very valuable to research I am conducting. Thank you.

ResearchLabDev, Apr 10 '22