Wayne Yuan

7 comments from Wayne Yuan

@echoht May I ask how this should be set up when using p-tuning? From the paper, does the p-tuning embedding also include a soft prompt?
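
For context on the question above, here is a minimal sketch of how p-tuning-style soft prompts are typically wired up: a handful of trainable "virtual token" embeddings is prepended to the real token embeddings while the backbone stays frozen. The backbone name `bert-base-chinese`, the value of `num_virtual_tokens`, and the pooling choice are illustrative assumptions, not the paper's or the repo's actual configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Hypothetical backbone and soft-prompt length -- both are assumptions.
model_name = "bert-base-chinese"
num_virtual_tokens = 16

tokenizer = AutoTokenizer.from_pretrained(model_name)
backbone = AutoModel.from_pretrained(model_name)
for p in backbone.parameters():          # freeze the backbone; only the prompt trains
    p.requires_grad = False

hidden_size = backbone.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size) * 0.02)

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    tok_emb = backbone.embeddings.word_embeddings(batch["input_ids"])
    bsz = tok_emb.size(0)
    # Prepend the trainable virtual tokens to every sequence in the batch.
    prompt = soft_prompt.unsqueeze(0).expand(bsz, -1, -1)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
    prompt_mask = torch.ones(bsz, num_virtual_tokens, dtype=batch["attention_mask"].dtype)
    attention_mask = torch.cat([prompt_mask, batch["attention_mask"]], dim=1)
    out = backbone(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
    return out.last_hidden_state[:, 0]    # read out the first (prompt) position
```

In an actual training loop only `soft_prompt` (plus any task head) would receive gradients; the frozen backbone is what distinguishes this from full fine-tuning.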

Hahaha, is this a knowledge blender?

Hello, thank you very much for your answer. By "continual training" I mean further pre-training on top of your released bge-large-zh. My question is: if there are not many labeled sample pairs downstream, roughly how much downstream data is needed before pre-training is worthwhile, and how large should the pre-training corpus be? Also, here is the image for question 2 (the previous upload didn't finish, sorry~): ![performance_metrics](https://github.com/FlagOpen/FlagEmbedding/assets/63044447/bff00264-9f1b-47e4-a55d-56d22fbee084)
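
For reference on the continual pre-training question above, here is a hedged sketch of what "pre-training on top of bge-large-zh with unlabeled domain text" could look like using plain masked-language-modeling with Hugging Face Transformers. FlagEmbedding's own pre-training recipe (RetroMAE) is different; the corpus file `domain_corpus.txt` and all hyperparameters below are assumptions, not recommended values.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Illustrative sketch only: MLM-style continued pre-training of bge-large-zh
# on unlabeled domain text. File name and hyperparameters are assumptions.
model_name = "BAAI/bge-large-zh"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bge-large-zh-continued",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-5,
)
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```

After such a step the model would still need contrastive fine-tuning (or at least the original embedding head) before being used for retrieval, which is part of why the "how much data makes it worthwhile" question matters.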

Thanks for the reply. My understanding is that because the model is trained with 300 tokens, if we change the input length, for example, to 500, the effect may be...
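
To make the length question concrete, here is a small sketch that embeds the same long passage with `max_length=300` and `max_length=500` and compares the two vectors. This only probes how much the representation shifts when more tokens are kept, not whether retrieval quality degrades; CLS pooling and the placeholder text are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Compare embeddings of one long passage truncated at 300 vs 500 tokens.
model_name = "BAAI/bge-large-zh"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

long_text = "an example sentence about the document topic " * 100  # placeholder

def embed(text, max_len):
    enc = tokenizer(text, truncation=True, max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    vec = out.last_hidden_state[:, 0]                # CLS pooling (assumed)
    return torch.nn.functional.normalize(vec, dim=-1)

e300, e500 = embed(long_text, 300), embed(long_text, 500)
print("cosine similarity:", (e300 * e500).sum().item())
```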
