VillardX
Hi, thanks for your great work! I came here from your subsequent work VPT and have looked through the issues about how to estimate the scale and shift for transforming the model's relative depth predictions...
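(For reference, the scale-and-shift alignment I am asking about is usually done as a closed-form least-squares fit between the predicted relative depth and the available metric depth. The sketch below is only a generic illustration with placeholder names such as `pred_rel` and `gt_metric`; some methods fit in inverse-depth space instead, and this may not match what VPT actually does.)

```python
import numpy as np

def align_scale_shift(pred_rel, gt_metric, mask=None):
    """Least-squares scale/shift alignment of a relative depth map to metric depth.

    Solves for (s, t) minimizing || s * pred_rel + t - gt_metric ||^2 over valid
    pixels. Generic sketch only; the actual method may operate in inverse-depth
    (disparity) space or use a robust variant.
    """
    if mask is None:
        mask = np.isfinite(gt_metric) & (gt_metric > 0)
    x = pred_rel[mask].reshape(-1)
    y = gt_metric[mask].reshape(-1)
    A = np.stack([x, np.ones_like(x)], axis=1)      # [N, 2] design matrix
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)  # closed-form least squares
    return s, t

# usage: metric_depth = s * pred_rel + t
```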
Hello, my own dataset consists of long texts: each sample is about 2000 characters and contains only flat entities. There are about 2000 samples with roughly 20,000 entity annotations in total, across 9 entity categories. Since the original bert-base-chinese only supports max_len=512, I truncated my texts to max_len=500 and trained with the settings in resume_zh.json, changing only batch_size=4 (otherwise I run out of GPU memory). The final F1 is only 0.75, which is even lower than the BERT+CRF baseline. Could you advise which parameter settings might be the problem and suggest a direction to investigate? Many thanks!
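(For concreteness, the truncation I applied is roughly the first function in the sketch below; the overlapping-window variant is only a generic alternative with made-up parameter names, not something taken from this repository.)

```python
def truncate(text, max_len=500):
    """Hard truncation: everything past max_len characters, including any
    entity annotations located there, is simply dropped from training."""
    return text[:max_len]

def sliding_windows(text, max_len=500, stride=400):
    """Overlapping character windows (offset, chunk) so the tail of a long
    document is still seen; predictions in the overlap must later be merged."""
    windows, start = [], 0
    while start < len(text):
        end = min(start + max_len, len(text))
        windows.append((start, text[start:end]))  # keep the offset for merging
        if end == len(text):
            break
        start += stride  # windows overlap by max_len - stride characters
    return windows
```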
Hi @Caoang327 , would you please release the FWD-U pretrained model? Thanks for your great work!
Hi, thanks for your great work. It works really well on my own dataset with your pretrained model. I still have a few questions, though. 1. The readme provides the preprocess.py...
Hello, I have been studying your code. From demo_imgs.py and train_sterep.py, my understanding is that the input is (left_img, right_img) and the output is left_disparity only.
- Is the model able to reasonably output the disparity of the right image? That is, if the input is (right_img, left_img), the output should be right_disparity, and by the definition of disparity the right_disparity values should all be negative. However, when I used the sceneflow.pth you provided and swapped the left/right order of your Motorcycle example images, the resulting right_disparity was clearly worse, and the output disparity tensor was all positive, which violates the definition of disparity. Do I need to modify some part of the code?
- If retraining is required, I do have data with GT_disp for both the left and right views, but I don't know how to modify the network structure so that it can be trained on the disparities of both views, i.e. so that with input (right_img, left_img) it can reasonably output right_disp.

Thanks!
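(For reference, a common workaround with stereo networks that predict only the left-view disparity is to horizontally flip both images and swap them, rather than swapping alone, and then flip the prediction back; this yields a positive right-view disparity. The sketch below assumes a `model(left, right)` forward signature, which may differ from this repository's actual interface.)

```python
import torch

@torch.no_grad()
def predict_right_disparity(model, left_img, right_img):
    """Estimate right-view disparity with a network that only predicts
    left-view disparity.

    Trick: flip both images along the width axis and feed them in swapped
    order, so the flipped right image plays the role of the "left" view of a
    mirrored stereo pair; flipping the prediction back gives a positive
    right-view disparity map.
    """
    left_flip = torch.flip(left_img, dims=[-1])    # flip along width
    right_flip = torch.flip(right_img, dims=[-1])
    disp_flip = model(right_flip, left_flip)       # flipped right image acts as the "left" view
    right_disp = torch.flip(disp_flip, dims=[-1])  # undo the flip
    return right_disp
```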
Thank you for your great work! In my situation, I want to train dust3r from scratch or finetune it on my own data. You provide the Co3d preprocessed data demo....