ChineseBert
How to run inference on a new question
This seems to return an embedding for the input question and answer. How can we pass an arbitrary question and generate an answer?
Since BERT mainly generates embeddings for the input sequence, you can just feed the question to the model and take its embedding.
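A minimal sketch of what "feed the question to the model and take its embedding" looks like with a BERT-style encoder. A real run would load pretrained weights (e.g. `BertModel.from_pretrained(...)` with the ChineseBert checkpoint, whose loading code may differ); here a small randomly initialised BERT is built from a config so the snippet is self-contained, and the token ids are dummy values standing in for the tokenizer output.

```python
# Sketch: pooling a BERT encoder's hidden states into one sentence vector.
# The config sizes and token ids below are illustrative, not ChineseBert's.
import torch
from transformers import BertConfig, BertModel

config = BertConfig(vocab_size=100, hidden_size=32,
                    num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=64)
model = BertModel(config)
model.eval()

# In practice these ids come from the tokenizer applied to the question.
input_ids = torch.tensor([[2, 15, 37, 42, 3]])  # dummy ids for illustration
with torch.no_grad():
    outputs = model(input_ids=input_ids)

# Mean-pool the last hidden states over the token axis.
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)  # one vector of size hidden_size
```

Mean pooling is only one choice; taking the `[CLS]` token's hidden state (`outputs.last_hidden_state[:, 0]`) is another common option.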
What do you do after you get the embedding? Is it just used as sentence/word vectors for downstream tasks?
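One common downstream use that connects back to the original question (generating an answer from a question) is retrieval: embed a pool of candidate answers, then pick the one whose vector is most similar to the question's. A hedged sketch, using random vectors as stand-ins for real model outputs:

```python
# Sketch: answer retrieval by cosine similarity between the question
# embedding and candidate-answer embeddings. The vectors here are
# random placeholders for what the encoder would actually produce.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
question_emb = torch.randn(768)      # embedding of the input question
answer_embs = torch.randn(3, 768)    # embeddings of 3 candidate answers

scores = F.cosine_similarity(question_emb.unsqueeze(0), answer_embs, dim=1)
best = scores.argmax().item()
print(f"best candidate index: {best}")
```

Note this retrieves an existing answer rather than generating new text; free-form generation would need a decoder on top, which an encoder-only BERT does not provide.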