Yichi Zhang

Results: 13 comments by Yichi Zhang

> > Hello. I think mmkb is an outstanding work and I want to follow it. I found that the image URLs of FB15K have been released. Could you please...

This note was originally written in Markdown; the LaTeX version is indeed rather rough, since it was generated directly by Typora. For now, it is better to use the original version of the note.

Nice work! But I have a question. According to the paper and the code, there are three representations for each graph; which one is actually used for label prediction?...

If the intervention representation is used, how do you keep the prediction stable given the randomness introduced by the addition?

> Can you provide the DB15K image data? Thank you!

Hello, the original data scale is quite large. You can download the original images from https://github.com/quqxui/MMRNS. The download link...

It is implemented by directly concatenating the relation embedding with itself to double its size; for example, a 256-dimensional relation embedding is concatenated with itself into a 512-dimensional vector.
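For illustration, here is a minimal PyTorch sketch of this self-concatenation (the variable names are placeholders, not the repository's actual identifiers):

```python
import torch

# A minimal sketch of the self-concatenation described above;
# tensor names are illustrative, not taken from the repository.
rel_emb = torch.randn(256)                       # a 256-dim relation embedding
doubled = torch.cat([rel_emb, rel_emb], dim=-1)  # concatenate with itself
print(doubled.shape)                             # torch.Size([512])
```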

> I would also like to ask another question. In the forward method of the KoPAWithAdapter class in the kopa file, there are two lines: `input_ids: torch.LongTensor = None` and `attention_mask: Optional[torch.Tensor] = None`. When I debug to this point, both tensors have length 120. I understand them as token IDs, but the first few entries are always 0 (not necessarily exactly two, but there are always a few leading zeros); for example, input_ids is [0, 0, 338, 385, ...] and attention_mask is [0, 0, 1, 1, 1, ...]. Is this padding, or some other operation? The code is so well encapsulated that I cannot find where these two are constructed. Could you please explain?

This should just be LLaMA's built-in padding: keeping the number of tokens consistent within a batch makes it convenient to feed the batch into the transformer for computation.
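For reference, those leading zeros are consistent with left padding in Hugging Face tokenizers. Below is a minimal sketch (the checkpoint name and pad settings are assumptions for illustration, not taken from the KoPA code) showing how left padding produces leading pad IDs and a matching attention mask:

```python
from transformers import AutoTokenizer

# A minimal sketch of left padding; the checkpoint name is just an example
# and this is not KoPA's actual data pipeline.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.pad_token_id = 0       # LLaMA defines no pad token; 0 is a common choice
tokenizer.padding_side = "left"  # pads go in front, matching [0, 0, 338, ...]

batch = tokenizer(
    ["short prompt", "a somewhat longer prompt here"],
    padding=True, return_tensors="pt",
)
print(batch["input_ids"])       # the shorter sequence starts with pad ids (0)
print(batch["attention_mask"])  # 0 marks padded positions, 1 marks real tokens
```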

Can you describe your question in more detail? I cannot understand what you mean by "comparison model".

You mean other baselines? If you train baselines like IKRL/TBKGC on other datasets, I think the margin is an important hyper-parameter. You can try tuning this parameter to achieve better baseline...

The margin parameter can be 1/2/4/6/8/10/12/24/36 or even larger on different datasets. It is a tricky parameter in KGC. Hope this helps.
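For context, here is a minimal sketch of where the margin enters a TransE-style margin ranking loss, the pattern IKRL/TBKGC follow (the scores and batch size are placeholders, not values from either paper):

```python
import torch
import torch.nn.functional as F

# A minimal sketch of a TransE-style margin ranking loss; all tensors
# here are placeholders used purely for illustration.
def margin_ranking_loss(pos_score, neg_score, margin):
    # pos_score / neg_score: distances for true vs. corrupted triples; the
    # loss pushes positive distances at least `margin` below negative ones.
    return F.relu(margin + pos_score - neg_score).mean()

pos = torch.rand(128)  # placeholder distances for a batch of true triples
neg = torch.rand(128)  # placeholder distances for corrupted triples
print(margin_ranking_loss(pos, neg, margin=4.0))  # try 1/2/4/.../36 per dataset
```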