Xu Ding
Training results
Hi, I've learned a lot from following your code and have starred basically all of your repos. I'd like to ask: the COCO mAP I get from running this seems a bit lower than the results reported in the paper. What could be the reason?
Hi, I ran TinyBERT on the iflytek dataset and the results are very poor (see below). What might be the cause? Training configuration: no data augmentation, max_seq_length=128, train_batch_size=32, learning_rate=5e-5.

```
02/10 07:18:14 PM ***** Eval results *****
02/10 07:18:14 PM   acc = 0.0011547344110854503
02/10 07:18:14 PM   acc_and_f1 = 0.0005870596638409233
02/10 07:18:14 PM   att_loss = 0.0
02/10 07:18:14 PM   ...
```
Hi, I tested GPT2 with the Weibo news summarization data provided by https://github.com/YunwenTechnology/Unilm (randomly selecting 10,000 articles as the training set and 1,000 as the test set), and ROUGE-1 comes out below 20%, while UniLM reports 40.58%. What might be the reason? Is GPT2 simply worse here?
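For reference, ROUGE-1 is just unigram overlap between the generated and reference summaries, and for Chinese the score depends strongly on whether you count words or characters, so it is worth confirming both systems are scored the same way. Below is a minimal character-level ROUGE-1 F1 sketch for sanity-checking an evaluation script; the example strings are hypothetical.

```python
from collections import Counter

def rouge1_f1(candidate_tokens, reference_tokens):
    """ROUGE-1 F1: unigram overlap between candidate and reference."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example, tokenized at the character level for Chinese text.
candidate = list("今天北京暴雨")
reference = list("北京今日遭遇暴雨")
print(f"ROUGE-1 F1 = {rouge1_f1(candidate, reference):.3f}")  # ≈ 0.714
```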
Hi, thanks for sharing. I have a question about the code: if the loaded config and vocab correspond to vocab_size=13317, but the model is initialized from a pretrained checkpoint, wouldn't the vocab_size conflict? How should this be handled?

```python
model_config = GPT2Config.from_json_file(args.config_path)
if args.pretrained_model_path:
    model = GPT2LMHeadModel.from_pretrained(args.pretrained_model_path)
else:
    model = GPT2LMHeadModel(config=model_config)
tokenizer = BertTokenizer.from_pretrained(args.vocab_path, do_lower_case=True)
```
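One possible way to reconcile the two (a minimal sketch, assuming the Hugging Face `transformers` API and that the custom vocab keeps the pretrained token ids for shared tokens; the paths below are hypothetical) is to load the pretrained weights first and then resize the embedding matrix to match the tokenizer actually in use:

```python
from transformers import GPT2Config, GPT2LMHeadModel, BertTokenizer

# Hypothetical paths -- substitute your own config, vocab and checkpoint.
config_path = "config/model_config.json"           # config with vocab_size=13317
vocab_path = "vocab/vocab.txt"                     # custom 13317-token vocab
pretrained_model_path = "pretrained/gpt2_chinese"  # pretrained checkpoint dir

tokenizer = BertTokenizer.from_pretrained(vocab_path, do_lower_case=True)

if pretrained_model_path:
    # The checkpoint carries its own vocab_size, which overrides the JSON config...
    model = GPT2LMHeadModel.from_pretrained(pretrained_model_path)
    # ...so resize the token embeddings to the vocab actually being used.
    # Rows for newly added ids are randomly initialized.
    model.resize_token_embeddings(len(tokenizer))
else:
    model = GPT2LMHeadModel(config=GPT2Config.from_json_file(config_path))
```

If the custom vocab does not preserve the pretrained token ids, resizing alone will not help; the embedding rows would then need to be remapped or re-trained.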
Can you provide the pretrained model? Thanks.
Hi, could you provide a Baidu link for the pretrained model? I cannot open the Google link.
(FR1)

```
[root@localhost src]# python demo.py ctdet --demo ../images --load_model ../models/ctdet_coco_dla_2x.pth
Traceback (most recent call last):
  File "demo.py", line 11, in <module>
    from detectors.detector_factory import detector_factory
  File "/users_4/d00477216/4_CenterNet-master/src/lib/detectors/detector_factory.py", line 5, in <module>
    from...
```
### Please describe your question

When running OCR detection (code location: PaddleNLP/applications/document_intelligence/doc_vqa/OCR_process/ocr_process.py) I hit the following problem. What could be the cause?

```
[2022-10-26 16:30:36,060] [    INFO] - Already cached /root/.paddlenlp/models/layoutxlm-base-uncased/sentencepiece.bpe.model
[2022-10-26 16:30:36,584] [    INFO] - tokenizer config file saved in /root/.paddlenlp/models/layoutxlm-base-uncased/tokenizer_config.json
[2022-10-26 16:30:36,584] [    INFO] - Special tokens file...
```
```
running install
running bdist_egg
running egg_info
writing pointnet2.egg-info/PKG-INFO
writing dependency_links to pointnet2.egg-info/dependency_links.txt
writing top-level names to pointnet2.egg-info/top_level.txt
reading manifest file 'pointnet2.egg-info/SOURCES.txt'
writing manifest file 'pointnet2.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
...
```
Hello, thanks for your wonderful work. I have a question about the training stage. I use GPT2 (117M) to fine-tune on my own datasets; the GPT2 model size is about...