
Use a smaller model to speed up the prediction time

Open ndenStanford opened this issue 2 years ago • 3 comments

Hello BLINK team,

I have tested the code and it works wonderfully. However, I notice that the tutorial only shows the large models together with the massive entity encoding.

config = {
    "test_entities": None,
    "test_mentions": None,
    "interactive": False,
    "top_k": 10,
    "biencoder_model": models_path + "biencoder_wiki_large.bin",
    "biencoder_config": models_path + "biencoder_wiki_large.json",
    "entity_catalogue": models_path + "entity.jsonl",
    "entity_encoding": models_path + "all_entities_large.t7",
    "crossencoder_model": models_path + "crossencoder_wiki_large.bin",
    "crossencoder_config": models_path + "crossencoder_wiki_large.json",
    "fast": True,  # set this to be true if speed is a concern
    "output_path": "logs/",  # logging directory
}
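
For reference, here is roughly how I run predictions with this config, following the BLINK quickstart (the sample mention below is just a placeholder):

```python
import argparse
import blink.main_dense as main_dense

models_path = "models/"  # path where the BLINK models are stored

args = argparse.Namespace(**config)
models = main_dense.load_models(args, logger=None)

data_to_link = [{
    "id": 0,
    "label": "unknown",
    "label_id": -1,
    "context_left": "".lower(),
    "mention": "Shakespeare".lower(),
    "context_right": "'s account of Julius Caesar's murder is a meditation on duty.".lower(),
}]

# Returns (among other things) the top-k predicted entities and their scores.
_, _, _, _, _, predictions, scores = main_dense.run(args, None, *models, test_data=data_to_link)
```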

Is there a smaller pre-trained model and entity encoding that I can use to speed up prediction? I am okay with sacrificing some accuracy. If that's not available, is there anything at all I could do to speed this up?

Thank you

ndenStanford • Dec 29 '21

I have created a repository for data generation and training of bi-encoder models (so far, only for entity linking) based on the BLINK model. In it, you can choose which BERT base model to use to make evaluation faster :). As I recall, using bert-mini I got a recall@64 of 84% on the Zeshel dataset.

However, no cross-encoder was implemented, so you can only speed up the bi-encoder part.
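
To illustrate the difference (a toy sketch with random tensors, not code from that repo): the bi-encoder's entity embeddings can be precomputed once, so candidate retrieval at query time is a single matrix product plus a top-k, whereas a cross-encoder must run a full transformer forward pass for every (mention, candidate) pair:

```python
import torch

# Toy sizes: a bert-mini-sized embedding dimension and a modest entity set.
num_entities, dim = 100_000, 256
entity_embeddings = torch.randn(num_entities, dim)  # precomputed offline by the bi-encoder

mention_embedding = torch.randn(1, dim)             # encoded once at query time
scores = mention_embedding @ entity_embeddings.T    # one matmul scores every entity
top64 = torch.topk(scores, k=64, dim=1).indices     # candidate set, e.g. for recall@64

# A cross-encoder would instead need 64 full transformer forward passes here,
# one per (mention, candidate) pair, which is what dominates prediction time.
```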

Giovani-Merlin • Jan 02 '22

@Giovani-Merlin Is that repo still active? The link is now dead.

mitchelldehaven • Apr 15 '22

If you have your own training data, it's not hard to modify the code slightly to use newer, smaller HuggingFace BERT models (such as BERT-mini or google/bert_uncased_L-8_H-512_A-8) for training the biencoder.

You'll need to change how the base model is loaded (use HuggingFace's AutoModel and AutoTokenizer classes) and how the tokenized input is fed to the model (input_ids, token_type_ids and attention_mask).
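
Roughly, the change looks like the sketch below (illustrative only, not the exact BLINK code; the first-token pooling mirrors what the biencoder does with its base model's output):

```python
from transformers import AutoModel, AutoTokenizer

# Load a smaller HuggingFace BERT instead of the original large base model.
model_name = "google/bert_uncased_L-8_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

encoded = tokenizer(
    "Jordan played for the Chicago Bulls.",
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)

outputs = model(
    input_ids=encoded["input_ids"],
    token_type_ids=encoded["token_type_ids"],
    attention_mask=encoded["attention_mask"],
)

# Use the first-token ([CLS]) representation as the mention/entity embedding.
embedding = outputs.last_hidden_state[:, 0]
```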

I tried training the Zeshel model after making these changes, and training seemed to proceed fine.

abhinavkulkarni • Apr 19 '22