JinFish
Oh, it turns out the site has to be accessed through a VPN; the problem has been solved. I had thought that this data was the features of the processed image, that is, the extracted objects...
ok, thanks.
Thank you for your reply, but I would like to ask why the results in Table 2 and Table 4 are inconsistent, and why the results in Table 4 and...
Thank you very much!
The purpose of this method is to split the text into individual tokens, but the text has already been tokenized during data preprocessing, so this step is really unnecessary. Also, this method cannot accept a list; it only accepts a str.
In the code, the token [SEP] is mapped to the O label, so of course no [SEP] label can be found:
    tokens += [sep_token]
    label_ids += [label_map['O']]
    segment_ids = [sequence_a_segment_id] * len(tokens)
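To make the point above concrete, here is a minimal sketch of that label alignment step. The label names and the contents of label_map are assumptions for illustration, not taken from the repository:

```python
# Hypothetical label map for a BERT-style NER setup; only the 'O' entry
# is implied by the snippet above, the rest are made-up examples.
sep_token = "[SEP]"
sequence_a_segment_id = 0
label_map = {"O": 0, "B-PER": 1, "I-PER": 2}

# Example sentence, already tokenized and labeled.
tokens = ["John", "Smith", "lives", "here"]
label_ids = [label_map["B-PER"], label_map["I-PER"],
             label_map["O"], label_map["O"]]

# Append [SEP] and give it the 'O' label, exactly as in the snippet
# above. No dedicated [SEP] label ever appears in label_ids, which is
# why searching the label set for a [SEP] label finds nothing.
tokens += [sep_token]
label_ids += [label_map["O"]]
segment_ids = [sequence_a_segment_id] * len(tokens)

print(tokens)       # ['John', 'Smith', 'lives', 'here', '[SEP]']
print(label_ids)    # [1, 2, 0, 0, 0]
print(segment_ids)  # [0, 0, 0, 0, 0]
```

So the [SEP] position is simply treated as an 'O' token rather than carrying its own label.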
So the dataset under the data directory in your GitHub repository is the real text dataset, and it is complete, right?
Thank you for your patience. If I have any other questions, I will contact you again.