
question about hidden dim

Open · downeykking opened this issue 1 year ago · 0 comments

In your implementation settings, e.g., for Cora, the hidden dim is 128, but in your code the hidden layers are widened to 2 * out_channels, so the actual hidden dimension is 256. Is this reasonable?

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class Encoder(torch.nn.Module):
    def __init__(self, in_channels: int, out_channels: int, activation,
                 base_model=GCNConv, k: int = 2):
        super(Encoder, self).__init__()
        self.base_model = base_model

        assert k >= 2
        self.k = k
        # The first layer maps to 2 * out_channels, doubling the hidden
        # width relative to out_channels (e.g., 256 when out_channels = 128).
        self.conv = [base_model(in_channels, 2 * out_channels)]
        for _ in range(1, k - 1):
            self.conv.append(base_model(2 * out_channels, 2 * out_channels))
        # Only the final layer projects back down to out_channels.
        self.conv.append(base_model(2 * out_channels, out_channels))
        self.conv = nn.ModuleList(self.conv)

        self.activation = activation

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor):
        for i in range(self.k):
            x = self.activation(self.conv[i](x, edge_index))
        return x
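
To make the question concrete, here is a minimal usage sketch with hypothetical random data shaped like Cora (2708 nodes, 1433 features) and out_channels = 128 as in the reported setting. With k = 2 the layers are 1433 -> 256 and 256 -> 128, so the intermediate representation inside the encoder is indeed 256-dimensional:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

# Hypothetical Cora-sized inputs, for illustration only.
x = torch.randn(2708, 1433)
edge_index = torch.randint(0, 2708, (2, 10556))

encoder = Encoder(in_channels=1433, out_channels=128,
                  activation=F.relu, base_model=GCNConv, k=2)

z = encoder(x, edge_index)
print(z.shape)  # torch.Size([2708, 128]); the hidden layer in between is 256-wide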

downeykking · Sep 27 '22 04:09