
Some questions about the implementation of GCN in model.py

Open · guokan987 opened this issue 4 years ago · 4 comments

Hi, I have a question: we usually formulate a GCN as AXW, but in your model.py I find it becomes WXA. Why?

guokan987 · May 31 '20

@guokan987 Hi! I'm confused about this code. Does it mean AX?

import torch
import torch.nn as nn

class nconv(nn.Module):
    def __init__(self):
        super(nconv, self).__init__()

    def forward(self, x, A):
        # x: (batch, channels, nodes, time); A: (nodes, nodes)
        x = torch.einsum('ncvl,vw->ncwl', (x, A))
        return x.contiguous()

CYBruce · Jul 01 '20

@CYBruce (quoting your comment above about whether the nconv einsum means AX)

I think X's shape here is n x c x v x l, whereas in the standard AX formulation the input would be n x v x c x l; in this code X is effectively X.transpose, which is why the positions of A and X in the matrix multiply are exchanged. The confusing part is that (AX).transpose = X.transpose A.transpose, while the code computes X.transpose A, i.e., it effectively applies A.transpose. In the traffic graph, A is the in-degree direction and A.transpose is the out-degree direction. Since the paper proposes a diffusion GCN that includes both A and A.transpose, the diffusion GCN as a whole still looks correct. However, the normalization of A should then be conducted over columns, not rows (the dim in asym_adj() in util.py should be 0, not -1). So I see two ways to resolve this confusion (both checked in the sketch below):

1. normalize A over columns, as above; or
2. revise x = torch.einsum('ncvl,vw->ncwl', (x, A)) to A = A.transpose(-1, -2) followed by x = torch.einsum('ncvl,vw->ncwl', (x, A)).
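
To make the transpose point concrete, here is a minimal sketch (toy shapes and random data, not code from the repository) checking that the einsum in nconv applies A.transpose along the node dimension, and illustrating both fixes:

import torch

n, c, v, l = 2, 3, 4, 5                           # batch, channels, nodes, time (toy sizes)
x = torch.randn(n, c, v, l)
A = torch.randn(v, v)                             # adjacency over nodes

# nconv's einsum: out[n,c,w,l] = sum_v x[n,c,v,l] * A[v,w]
out = torch.einsum('ncvl,vw->ncwl', x, A)

# On the node dimension this is A^T X, not AX:
ref = torch.einsum('wv,ncvl->ncwl', A.t(), x)     # apply A^T along nodes
assert torch.allclose(out, ref, atol=1e-5)

# Option 2 above: transposing A first recovers the usual AX propagation.
out2 = torch.einsum('ncvl,vw->ncwl', x, A.t())
ref2 = torch.einsum('wv,ncvl->ncwl', A, x)        # apply A along nodes
assert torch.allclose(out2, ref2, atol=1e-5)

# Option 1 above: normalizing A over columns (dim=0) makes the implicit A^T row-stochastic.
A_pos = torch.rand(v, v)
A_col = A_pos / A_pos.sum(dim=0, keepdim=True)    # column-normalized transition matrix
assert torch.allclose(A_col.t().sum(dim=1), torch.ones(v))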

guokan987 · Jul 04 '20

(quoting the nconv discussion between @CYBruce and @guokan987 above)

It looks like the author uses the same weight, via a single MLP, after the diffusion GCN. I don't think this accords with formula (6) or (7), which have K hops with a unique weight per hop.

def forward(self, x, support):
    out = [x]
    for a in support:                        # each support matrix (e.g., A and A^T)
        x1 = self.nconv(x, a)                # first-order propagation: AX
        out.append(x1)
        for k in range(2, self.order + 1):   # higher orders, k in {2, 3}
            x2 = self.nconv(x1, a)           # A^k X
            out.append(x2)
            x1 = x2

    h = torch.cat(out, dim=1)                # concatenate all hops on the channel dimension
    h = self.mlp(h)                          # shared linear map: AXW
    return h

wanzhixiao · Jan 11 '21

@wanzhixiao (quoting your comment above about the shared MLP weight)

This should be fine: the features of the K hops are concatenated into one tensor along the feature dimension, and the MLP is applied to that tensor, which realizes the formula.
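
As a quick check on that equivalence, here is a minimal sketch (toy sizes; a plain nn.Linear standing in for the model's mlp, which I believe is a 1x1 convolution applying the same linear map at every node and time step). One layer on the concatenated hop features acts as a unique weight matrix per hop, summed, matching formula (6)/(7):

import torch
import torch.nn as nn

nodes, c_in, c_out, hops = 4, 3, 8, 3                   # toy sizes, not from the repo
xs = [torch.randn(nodes, c_in) for _ in range(hops)]    # stand-ins for X, AX, A^2 X

# One shared layer over the concatenated hop features,
# analogous to self.mlp applied after torch.cat(out, dim=1):
W = nn.Linear(hops * c_in, c_out, bias=False)
h_concat = W(torch.cat(xs, dim=1))

# Splitting W's weight column-wise gives one block W_k per hop; the output
# equals the sum of per-hop projections, i.e., a unique weight per hop:
blocks = W.weight.split(c_in, dim=1)                    # [W_0, W_1, W_2], each (c_out, c_in)
h_sum = sum(x_k @ W_k.t() for x_k, W_k in zip(xs, blocks))
assert torch.allclose(h_concat, h_sum, atol=1e-5)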

guokan987 · Jan 15 '21