
Code for the ACL 2019 paper: Dynamically Fused Graph Network for Multi-hop Reasoning

9 DFGN-pytorch issues

predictions.json gets stored in submission.

# Patching CVE-2007-4559 Hi, we are security researchers from the Advanced Research Center at [Trellix](https://www.trellix.com). We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a...
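For context, CVE-2007-4559 is the path-traversal weakness in Python's `tarfile.extractall`: archive members with names like `../evil.txt` can escape the extraction directory. Below is a minimal sketch of the kind of membership check such patches add; the function names (`is_within_directory`, `safe_extract`, `make_tar`) are illustrative, not Trellix's actual patch.

```python
import io
import os
import tarfile

def is_within_directory(directory, target):
    """True if `target` resolves to a path inside `directory`."""
    directory = os.path.realpath(directory)
    target = os.path.realpath(target)
    return os.path.commonpath([directory, target]) == directory

def safe_extract(tar, path="."):
    """Check every member stays under `path` before extracting anything."""
    for member in tar.getmembers():
        dest = os.path.join(path, member.name)
        if not is_within_directory(path, dest):
            raise ValueError("unsafe path in archive: " + member.name)
    tar.extractall(path)

def make_tar(names):
    """Build an in-memory tar with empty members of the given names (demo only)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in names:
            info = tarfile.TarInfo(name)
            info.size = 0
            tar.addfile(info, io.BytesIO(b""))
    buf.seek(0)
    return tarfile.open(fileobj=buf, mode="r")
```

Newer Python versions (3.12+) also ship a built-in `filter="data"` argument to `extractall` that addresses the same issue.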

Hello, and first of all thank you for your work! In the paper, after the soft mask m^(t) is obtained, it is multiplied with the entity states to get Ẽ^(t-1), and this Ẽ^(t-1) is then used for the GNN propagation. But in the code, the GNN propagation (layers.py, lines 127–145) uses entity states that have not yet been multiplied by the soft mask (adj_mask in the code, computed at layers.py line 125). Only after the GNN propagation finishes are the entity states updated with the soft mask (layers.py, lines 148–149). Is this order of computation perhaps a problem? Thank you!
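The two orderings the issue contrasts can be seen in a toy sketch: scalar entity states, a mean-aggregation step standing in for the repo's GNN, and a soft mask per entity. `mask_then_propagate` corresponds to the paper's description, `propagate_then_mask` to the code as the issue reads it; all names here are illustrative, not from layers.py. The point is simply that the two orders generally give different results.

```python
def propagate(states, adj):
    """One mean-aggregation message-passing step (stand-in for the GNN)."""
    n = len(states)
    out = []
    for i in range(n):
        total = sum(states[j] * adj[i][j] for j in range(n))
        deg = sum(adj[i]) or 1  # avoid division by zero for isolated nodes
        out.append(total / deg)
    return out

def mask_then_propagate(states, mask, adj):
    """Paper's order: soft-mask the entity states, then propagate."""
    masked = [s * m for s, m in zip(states, mask)]
    return propagate(masked, adj)

def propagate_then_mask(states, mask, adj):
    """Code's order (as the issue reads it): propagate, then soft-mask."""
    return [s * m for s, m in zip(propagate(states, adj), mask)]
```

With a fully masked-out entity (mask 0.0), the first order removes its contribution to its neighbors' messages, while the second still lets it send messages and only zeroes its own updated state.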

![image](https://user-images.githubusercontent.com/43124010/110723733-76acde80-824f-11eb-8d6a-79fa31c08934.png)
ep11 0.5195 0.6570 0.6836 0.6691 0.4739 0.8070 0.8251 0.8201 0.2839 0.5560 0.5893 0.5750
ep12 0.5097 0.6451 0.6779 0.6520 0.4749 0.8023 0.8413 0.7968 0.2783 0.5428 0.5950 0.5458
ep13 0.5228 0.6593...

CUDA_VISIBLE_DEVICES=0,1 python train.py --name=YOUR_EXPNAME --q_update --q_attn --basicblock_trans --bfs_clf
Error:
...
loading data/dev_graph.pkl.gz
Traceback (most recent call last):
  File "train.py", line 215, in
    model.cuda(model_gpu)
  File "/root/miniconda3/envs/myconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 258, in cuda
    return...

Thanks for your excellent work. Some 'nan' values occur in the pkl file downloaded from your Google Drive, but the final result is as good as you've declared. Actually, I ran some...

max_seq_length is currently 512, but looking at the raw data, the concatenation of all paras for a single question is longer than 512. If the input is truncated at 512, couldn't the answer position be cut off? As far as I can tell, the code only truncates sequences longer than 512 — is that right? Is there a good way to handle texts longer than 512?
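One common workaround for this (used by BERT-style QA readers via a "doc stride") is to split the long input into overlapping windows so that the answer span survives intact in at least one window. A minimal sketch, not code from this repo:

```python
def sliding_windows(tokens, max_len, stride):
    """Split a long token sequence into overlapping windows of length
    <= max_len, advancing by `stride` tokens each time. With overlap
    max_len - stride, any contiguous span of length <= max_len - stride
    appears whole in at least one window."""
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # last window reaches the end of the sequence
        start += stride
    return windows
```

Each window is then scored independently and the best-scoring answer span across windows is kept, which is how BERT-style readers reconcile the overlapping predictions.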

I set a smaller gradient_accumulate_step=5 but still hit the OOM problem, as shown below. Is my GPU just too small? And if I use 4 GPUs, how should I assign them? I set two GPUs for each part in the config, but it still errors out.
GPU: NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 ...
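One thing worth noting about this report: gradient accumulation by itself does not reduce peak memory. It changes how many forward/backward passes happen per optimizer update, while memory is set by the size of each individual (micro) pass, so OOM persists unless the per-step batch actually shrinks. A generic sketch of the split, not this repo's batching code:

```python
def micro_batches(batch, accumulate_steps):
    """Split one logical batch into `accumulate_steps` micro-batches.
    Peak GPU memory scales with the micro-batch size, not with the
    number of accumulation steps, so OOM only improves if this split
    makes each forward pass smaller."""
    size = -(-len(batch) // accumulate_steps)  # ceiling division
    return [batch[i:i + size] for i in range(0, len(batch), size)]
```

The effective batch per optimizer update is then micro-batch size × number of GPUs × accumulation steps, which is why accumulation is normally used to *enlarge* the effective batch at fixed memory rather than to shrink memory at fixed batch.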