hexiaoyupku

4 comments by hexiaoyupku

I tried using multiprocessing to solve this problem. Each process loads a different model.

This is how I implement it: 1. Initialize num_gpu models. The device_name is like '/device:GPU:0', '/device:GPU:1'.

```
graph = tf.Graph()
with graph.as_default():
    with tf.device(device_name):
        config = tf.ConfigProto(allow_soft_placement=False)
        self.sess = tf.Session(config=config)
        ...
```
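In case it helps, here is a minimal, self-contained sketch of that per-process, per-GPU setup, assuming TF 1.x. The `build_model` helper and the `num_gpu` value are hypothetical stand-ins for the real graph-construction code and GPU count:

```
import multiprocessing as mp

import tensorflow as tf  # assumes TF 1.x, matching the snippet above


def build_model():
    # Hypothetical stand-in for the real graph-construction code.
    x = tf.placeholder(tf.float32, [None, 4], name='x')
    y = tf.layers.dense(x, 2)
    return x, y


def worker(gpu_id):
    # One process per GPU; each process builds its own graph and session.
    device_name = '/device:GPU:%d' % gpu_id
    graph = tf.Graph()
    with graph.as_default():
        with tf.device(device_name):
            x, y = build_model()
        config = tf.ConfigProto(allow_soft_placement=False)
        config.gpu_options.allow_growth = True
        sess = tf.Session(config=config)
        sess.run(tf.global_variables_initializer())
        # ... restore a checkpoint and serve requests from this process


if __name__ == '__main__':
    num_gpu = 2  # hypothetical GPU count
    procs = [mp.Process(target=worker, args=(i,)) for i in range(num_gpu)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Spawning one process per device keeps each model's graph and session fully isolated, so one GPU's state can't leak into another's.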

I fixed my problem by modifying the tokenizer. The tokenizer UDA uses is not consistent with the BERT pretrained model for Chinese. Before:

```
def tokenize_to_wordpiece(self, tokens):
    split_tokens = []
    for ...
```
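For reference, here is a minimal sketch of the greedy longest-match-first WordPiece algorithm that BERT's tokenizer implements, with "##" prefixes on non-initial subwords. The toy `vocab` below is hypothetical; for the fix described above, the vocabulary must come from the vocab.txt shipped with the Chinese pretrained checkpoint so both sides tokenize identically:

```
def tokenize_to_wordpiece(tokens, vocab, unk_token='[UNK]'):
    split_tokens = []
    for token in tokens:
        chars = list(token)
        start = 0
        pieces = []
        bad = False
        while start < len(chars):
            # Find the longest substring starting at `start` that is in the vocab.
            end = len(chars)
            cur = None
            while start < end:
                piece = ''.join(chars[start:end])
                if start > 0:
                    piece = '##' + piece  # continuation-piece marker
                if piece in vocab:
                    cur = piece
                    break
                end -= 1
            if cur is None:
                bad = True  # no vocab entry matches; fall back to [UNK]
                break
            pieces.append(cur)
            start = end
        split_tokens.extend([unk_token] if bad else pieces)
    return split_tokens


vocab = {'un', '##aff', '##able'}  # toy vocab for illustration
print(tokenize_to_wordpiece(['unaffable'], vocab))  # ['un', '##aff', '##able']
```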

Hi rbhatia46, have you found a solution? I'm facing the same problem, and it would be a great help! Thanks.