Yang An
As titled, a few questions about the tnews 1.1 test set:

1. Are there test-set scores for the corresponding baselines BERT-base, BERT-wwm-ext, ERNIE-base, RoBERTa-large, XLNet-mid, ALBERT-base, ALBERT-large, ALBERT-xlarge, ALBERT-xxlarge, ALBERT-tiny, RoBERTa-wwm-ext, and RoBERTa-wwm-large?
2. Why did the test set need to be updated from 1.0 to 1.1? I noticed that scores on the 1.0 test set are generally higher than on 1.1; what is the main reason for this?
3. The samples in the 1.1 test set no longer contain the keyword field; what was the reasoning behind this?

@brightmart We would appreciate it if the organizers could answer these. Many thanks!
Hi, thank you for releasing this great work! I am working on the VQA task. May I ask where I can find the annotation files `test2015_qla_mrcnn.json` and `test-dev2015_qla_mrcnn.json` to make...
Add unconstrained training for VQA, which does not need a pre-defined candidate answer set for either finetuning or inference. In this case, the evaluation inference mode must be `beamsearch` rather than...
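To make the distinction concrete, here is a toy sketch, not this repository's actual model or API: `ToyDecoder`, the tiny vocabulary, and both inference functions are hypothetical stand-ins. It contrasts constrained inference, which ranks a pre-defined candidate answer set, with unconstrained beam-search decoding, which needs no such set:

```python
# Toy sketch only: candidate-set scoring vs. free beam-search decoding.
# ToyDecoder is a hypothetical stand-in for a real VQA answer decoder.
import torch
import torch.nn.functional as F

vocab = ["<bos>", "<eos>", "yes", "no", "two", "red"]
tok = {w: i for i, w in enumerate(vocab)}

class ToyDecoder(torch.nn.Module):
    """Minimal autoregressive decoder standing in for the real answer head."""
    def __init__(self, vocab_size, hidden=16):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, hidden)
        self.rnn = torch.nn.GRU(hidden, hidden, batch_first=True)
        self.out = torch.nn.Linear(hidden, vocab_size)

    def forward(self, ids):                      # ids: (batch, seq)
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)                       # logits: (batch, seq, vocab)

model = ToyDecoder(len(vocab)).eval()

@torch.no_grad()
def score_candidates(candidates):
    """Constrained mode: rank a pre-defined answer set by sequence log-prob."""
    scores = {}
    for ans in candidates:
        ids = torch.tensor([[tok["<bos>"]] + [tok[w] for w in ans.split()]])
        logp = F.log_softmax(model(ids[:, :-1]), dim=-1)
        scores[ans] = logp.gather(-1, ids[:, 1:, None]).sum().item()
    return max(scores, key=scores.get)

@torch.no_grad()
def beam_search(beam=3, max_len=3):
    """Unconstrained mode: decode freely; no candidate set is required."""
    beams = [([tok["<bos>"]], 0.0)]
    for _ in range(max_len):
        grown = []
        for seq, score in beams:
            if seq[-1] == tok["<eos>"]:          # finished hypotheses carry over
                grown.append((seq, score))
                continue
            logp = F.log_softmax(model(torch.tensor([seq]))[0, -1], dim=-1)
            vals, idxs = torch.topk(logp, beam)
            grown += [(seq + [i.item()], score + v.item())
                      for v, i in zip(vals, idxs)]
        beams = sorted(grown, key=lambda b: -b[1])[:beam]
    best = beams[0][0]
    return " ".join(vocab[i] for i in best[1:] if i != tok["<eos>"])

print(score_candidates(["yes", "no", "two"]))  # needs an answer set
print(beam_search())                           # does not
```

The practical consequence is the one the PR states: once the answer is generated rather than selected, evaluation has to run a decoding procedure such as beam search instead of ranking a fixed candidate list.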
## Recommended Project

- Project URL: https://github.com/OFA-Sys/Chinese-CLIP
- Category: Python
- Project title: A Chinese pretrained version of the OpenAI CLIP model: Chinese image-text feature extraction & image-text retrieval in a few lines of code
- Project description: Hi everyone, we are the OFA-Sys team from DAMO Academy. We welcome you to try our Chinese-CLIP image-text pretraining project on GitHub (https://github.com/OFA-Sys/Chinese-CLIP), a Chinese version of the OpenAI CLIP model. We pretrained it on large-scale web image-text data (roughly 200 million native Chinese image-text pairs) and provide pretrained models at multiple scales along with a technical report, so that beginners interested in machine learning can complete Chinese image-text feature extraction and image-text retrieval in a few lines of code. In several recent image-text retrieval competitions (the "Xingzhi Cup" National AI Innovation and Application Competition and the Tianchi e-commerce multimodal image-text retrieval challenge), models based on Chinese-CLIP took first place on the leaderboards! **Please give it a try & give us a star!**
- Highlights: Our Chinese version of CLIP achieves excellent results on multiple public datasets, generally surpassing comparable publicly available baseline models for image-text representation and retrieval. The barrier to entry is very low: a few lines of code complete Chinese image-text feature extraction and retrieval, which is very handy for both competitions and projects, and the project is continuously updated and maintained!
- Example code:

```python
import torch
from PIL import Image
import cn_clip.clip as clip
...
```
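The example code above is cut off in the original post. Below is a minimal sketch of how such a quickstart continues, following the usage documented in the Chinese-CLIP README; the model name `ViT-B-16`, the image path, and the candidate labels are illustrative choices, not requirements:

```python
import torch
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models

print("Available models:", available_models())

device = "cuda" if torch.cuda.is_available() else "cpu"
# Download (if needed) and load a pretrained checkpoint plus its image preprocessor.
model, preprocess = load_from_name("ViT-B-16", device=device, download_root="./")
model.eval()

image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
# Chinese candidate labels: Squirtle, Bulbasaur, Charmander, Pikachu.
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    # Extract image/text features for retrieval, L2-normalized.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Or compute image-text similarity logits directly.
    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)
```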
Hello. I am very interested in question generation, and I think this paper tackles an important issue in this task. Your release of the generated QA dataset is helpful for inspecting...
Hi, I have read your new paper "12-in-1: Multi-Task Vision and Language Representation Learning" on arXiv, which utilizes multi-task fine-tuning to boost the performance of ViLBERT. May I ask whether...