CogQA
Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
Bumps [ujson](https://github.com/ultrajson/ultrajson) from 1.35 to 5.4.0.
Release notes (sourced from ujson's releases), 5.4.0:
- Added: support for arbitrary size integers (#548) @JustAnotherArchivist
- Fixed: CVE-2022-31116: Replace wchar_t string decoding implementation with...
Model name 'bert-base-uncased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz' was a path or url but couldn't find any file...
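This error usually means the S3 download failed (no network access or a blocked proxy), since 'bert-base-uncased' is itself a valid name for pytorch-pretrained-bert. A hedged workaround sketch, assuming you download the weights and vocab manually (the local directory below is hypothetical):

```python
# Workaround sketch: point from_pretrained at local files instead of the model
# name, so no network access is needed. Download these two files first:
#   https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz
#   https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt
from pytorch_pretrained_bert import BertModel, BertTokenizer

LOCAL_DIR = "./bert-base-uncased"  # hypothetical directory holding both files

tokenizer = BertTokenizer.from_pretrained(
    f"{LOCAL_DIR}/bert-base-uncased-vocab.txt", do_lower_case=True)
model = BertModel.from_pretrained(f"{LOCAL_DIR}/bert-base-uncased.tar.gz")
```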
Hello! Sorry to bother you again. I have a GPU with 24 GB of VRAM but only 32 GB of RAM, which I could upgrade to 64 GB. Training sys1 and sys2 currently takes about 5-6 hours; would adding RAM speed this up?
Hello! Here is a bug that appeared when I first started reproducing the results; it then disappeared on its own during training, but now that I have switched models it is back. Traceback (most recent call last): File "/home/shaoai/CogQA/train.py", line 337, in fire.Fire(main) File "/home/shaoai/anaconda3/envs/mypytorch/lib/python3.6/site-packages/fire/core.py", line 127, in Fire component_trace = _Fire(component, args, context, name) File "/home/shaoai/anaconda3/envs/mypytorch/lib/python3.6/site-packages/fire/core.py", line 366, in _Fire...
Hello! While modifying the model I tried to replace BERT with ALBERT. I changed

BERT_MODEL = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(BERT_MODEL, do_lower_case=True)

to

tokenizer = BertTokenizer.from_pretrained("./albert_base")
BERT_MODEL = BertModel.from_pretrained("./albert_base")

and then got the error: File "train.py", line 158, in main bundles.append(convert_question_to_samples_bundle(tokenizer, data)) File "/home/shao/CogQA/data.py", line 187, in...
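Two things stand out in that snippet, so a hedged sketch may help: BertTokenizer cannot read an ALBERT checkpoint (ALBERT ships a SentencePiece vocabulary), and BERT_MODEL is rebound from a name string to a model object, which breaks any later code that expects a string. Assuming the modern `transformers` library (the repo itself targets pytorch-pretrained-bert, so a real swap needs more porting than this):

```python
# Hedged sketch: use the ALBERT-specific classes for both tokenizer and model;
# loading an ALBERT vocab with BertTokenizer is a likely cause of the crash
# inside data.py.
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("./albert_base")
model = AlbertModel.from_pretrained("./albert_base")

# Keep BERT_MODEL a name/path string if other code treats it as one, and
# check model.config.hidden_size against any layers sized for BERT.
```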
Hi, I see that sem[x, Q, clues] shows up in both System 1 and System 2 in your paper, and I would like to understand what sem means. Specifically: 1. System 1: for an answer node x, Para[x] may be missing, so no spans are extracted and sem[x, Q, clues] is instead computed from the "sentence A" part; 2. System 2: to fully understand the relationship between entity x and question Q, analyzing sem[x, Q, clues] alone is far from enough. Could you explain what these two sentences mean? (I previously thought the pipeline was: BERT extracts the next-hop entities, the next-hop entities are traversed and reasoned over in the GNN, and the result is passed back to BERT, but then this extra sem appeared.) I hope I have expressed this clearly.
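For context, one hedged reading (mine, not the authors'): sem[x, Q, clues] appears to be the semantic vector System 1 produces for node x from BERT's position-0 output, where sentence A is "Question [SEP] clues" and sentence B is Para[x]; it initializes x's hidden vector, which System 2's GNN then updates. A minimal sketch under that assumption (hypothetical helper, not the repo's code):

```python
# Sketch of one plausible reading of sem[x, Q, clues]: BERT's position-0
# output vector over the System 1 input, later used to initialize the node's
# hidden representation in the cognitive graph.
import torch
from pytorch_pretrained_bert import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def semantic_vector(question: str, clues: str, para: str) -> torch.Tensor:
    # Sentence A = question + clues, sentence B = Para[x].
    tokens_a = tokenizer.tokenize(question) + ["[SEP]"] + tokenizer.tokenize(clues)
    tokens_b = tokenizer.tokenize(para)
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
    with torch.no_grad():
        layers, _ = model(input_ids, torch.tensor([segment_ids]))
    return layers[-1][0, 0]  # sem[x, Q, clues]: position-0 vector, last layer
```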
dump.rdb
Hi, why can't I find the file "dump.rdb"? Thanks very much.
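For anyone else hitting this: dump.rdb is the on-disk snapshot format of Redis, and the project apparently expects a local Redis instance preloaded with the cached Wikipedia data, so the file has to be obtained separately and placed in the directory where redis-server starts. A quick sanity check, assuming the default host and port:

```python
# Hedged sanity check (assumes the data lives in a local Redis instance):
# start `redis-server` in the directory containing dump.rdb, then verify the
# snapshot actually loaded by counting keys.
import redis

r = redis.Redis(host="localhost", port=6379)
print(r.dbsize())  # should be non-zero once dump.rdb has been loaded
```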
During training I encountered the following error; where is the problem? "RuntimeError: CUDA error: an illegal memory access was encountered"
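Not specific to this repo, but a standard first step: CUDA reports errors asynchronously, so the line in the traceback is often not the faulty op. A hedged debugging sketch (the helper name is hypothetical):

```python
# Force synchronous kernel launches so the traceback points at the failing op;
# the variable must be set before CUDA is initialized.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

def check_indices(indices: torch.Tensor, num_embeddings: int) -> None:
    # Out-of-range indices into nn.Embedding are a common cause of
    # "illegal memory access"; validating them on CPU fails cleanly instead.
    lo, hi = indices.min().item(), indices.max().item()
    assert 0 <= lo and hi < num_embeddings, \
        f"index out of range: [{lo}, {hi}] vs num_embeddings={num_embeddings}"
```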
Just curious, is it common practice to add an extra embedding to BERT while keeping the original encoder parameters?
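It is a common fine-tuning pattern: the new embedding is typically zero-initialized so that, at the start of training, the model behaves exactly like the pretrained encoder. A hedged sketch with the `transformers` library (hypothetical class, not the repo's code):

```python
# Sketch: a new learned embedding is summed into BERT's input embeddings while
# the pretrained encoder weights are loaded and reused unchanged.
import torch.nn as nn
from transformers import BertModel

class BertWithExtraEmbedding(nn.Module):
    def __init__(self, num_extra_types: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.extra = nn.Embedding(num_extra_types, hidden)
        # Zero init: at the start of fine-tuning the model is equivalent to
        # the original pretrained BERT.
        nn.init.zeros_(self.extra.weight)

    def forward(self, input_ids, extra_type_ids, attention_mask=None):
        embeds = self.bert.embeddings.word_embeddings(input_ids)
        embeds = embeds + self.extra(extra_type_ids)
        # Position and token-type embeddings are still added inside BertModel.
        return self.bert(inputs_embeds=embeds, attention_mask=attention_mask)
```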