2 comments of liuslab
> `FlagEmbedding\baai_general_embedding\finetune\data.py`, line 73:
>
> ```python
> def padding_score(self, teacher_score):
>     group_size = None
>     for scores in teacher_score:
>         if scores is not None:
>             group_size = len(scores)
>             break
>     if group_size is None:
>         return None
> ```
> …
My scenario: I have a large volume of text sentences that need embedding inference in order to build a vector database. I have seen suggestions such as converting the model to ONNX, but if I use the FlagEmbedding library directly, how can I best configure batch_size and similar settings to improve inference throughput? The script below appears to contain a practical example, but could you provide a sample snippet showing how to call FlagEmbedding directly for more efficient inference? https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/search_demo/pre_process.py
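A minimal sketch of batched inference via `FlagModel.encode` is below. Assumptions to note: the model name `BAAI/bge-large-zh-v1.5`, the `batch_size`/`max_length` values, and the `use_fp16` flag are illustrative choices, not settings recommended by the maintainers; check the `encode` signature of your installed FlagEmbedding version before relying on it. The `chunked` helper is a hypothetical utility added here only to show how you might stream a very large corpus in slices rather than holding everything in memory.

```python
def chunked(items, size):
    """Yield successive fixed-size slices of a list (hypothetical helper)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_corpus(sentences, encode_batch_size=256, slice_size=100_000):
    """Embed a large corpus with FlagEmbedding in a batched fashion (sketch)."""
    from FlagEmbedding import FlagModel  # import from the FlagEmbedding library

    # use_fp16=True trades a little precision for faster GPU inference;
    # model name is an assumption -- substitute the model you actually use.
    model = FlagModel("BAAI/bge-large-zh-v1.5", use_fp16=True)

    all_embeddings = []
    # Process the corpus in memory-friendly slices; encode() itself batches
    # internally according to batch_size, so larger values usually raise
    # GPU utilization and throughput up to the memory limit.
    for part in chunked(sentences, slice_size):
        emb = model.encode(part, batch_size=encode_batch_size, max_length=512)
        all_embeddings.append(emb)
    return all_embeddings
```

The slicing is only needed when the corpus is too large to pass to `encode` at once; for moderate inputs a single `model.encode(sentences, batch_size=...)` call is sufficient.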