Pengyi Li

Results: 17 comments of Pengyi Li

I use my own data, and use `adj_train = adj[train_index, :][:, train_index]`, `normADJ_test = nontuple_preprocess_adj(adj[train_test_idnex, :][:, train_test_idnex])`, and `testSupport = sparse_to_tuple(normADJ_test[len(train_index):, :])`. I don't change the important code in FastGCN. I...
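The slicing in that comment is dense, so here is a minimal, self-contained sketch of how I read it: build the training-only adjacency, symmetrically normalize the train+test subgraph, and take the test nodes' rows as the test support. `normalize_adj` and `sparse_to_tuple` below are my own stand-ins for what I assume FastGCN's `nontuple_preprocess_adj` and `sparse_to_tuple` do (normalize A + I as D^-1/2 (A + I) D^-1/2 and convert to a (coords, values, shape) tuple); the toy graph and index arrays are made up.

```python
import numpy as np
import scipy.sparse as sp

def normalize_adj(adj):
    """Assumed behaviour of nontuple_preprocess_adj: D^-1/2 (A + I) D^-1/2."""
    adj = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    return (d_inv_sqrt @ adj @ d_inv_sqrt).tocsr()

def sparse_to_tuple(mat):
    """Convert a scipy sparse matrix to a (coords, values, shape) tuple."""
    mat = mat.tocoo()
    coords = np.vstack((mat.row, mat.col)).transpose()
    return coords, mat.data, mat.shape

# toy symmetric adjacency over 6 nodes: first 4 train nodes, last 2 test nodes
adj = sp.random(6, 6, density=0.5, format="csr")
adj = adj + adj.T
train_index = np.array([0, 1, 2, 3])
test_index = np.array([4, 5])
train_test_index = np.concatenate([train_index, test_index])

adj_train = adj[train_index, :][:, train_index]                    # train-only graph
normADJ_test = normalize_adj(adj[train_test_index, :][:, train_test_index])
testSupport = sparse_to_tuple(normADJ_test[len(train_index):, :])  # rows of the test nodes
```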

I want to deal with question (2), so I add vocab_size to reindex the correct test_index, but I fail to get a good accuracy; it's as if it randomly chooses the...
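One way to read "reindex the correct test_index": after slicing the adjacency with the combined index, nodes are addressed by their position inside that slice rather than by their global ID, so the test nodes end up as the last rows. A minimal sketch of that mapping (the index values and the `global_to_local` dict are hypothetical, purely for illustration):

```python
import numpy as np

# hypothetical global node IDs, not taken from any real dataset
train_index = np.array([10, 42, 57])
test_index = np.array([3, 99])
train_test_index = np.concatenate([train_index, test_index])

# after adj[train_test_index, :][:, train_test_index], rows/cols are positions
# within train_test_index, so map each global ID to its local position
global_to_local = {g: i for i, g in enumerate(train_test_index)}
local_test_index = np.array([global_to_local[g] for g in test_index])
print(local_test_index)  # [3 4] -> the last len(test_index) rows of the slice
```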

Hi, I am very sorry for my late reply. The experiment I did used the following code: I may have a problem understanding the nontuple_preprocess_adj function; I didn't change the nontuple_preprocess_adj function. Is...

Hi, my experimental results were also satisfactory, but I have a question. I have read a lot of papers on GCN. If I want to embed GCN into an end-to-end...

Hi, I have read the paper you provided, but I don't think that's the core of my concern. It is very similar to FastGCN. I hope we can discuss the related knowledge again...

Hi, all of a sudden there were some other things that needed to be dealt with, so I didn't reply for a long time. I'm really sorry for that. But...

I found a mistake: the batch_size was too big, so I changed it to 20, and the result is better, as follows: train process: mean_loss: 1.47825873...
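For context on the batch_size change, here is a minimal sketch of splitting the training nodes into mini-batches of 20; `iterate_minibatches` is my own helper, not the FastGCN training loop itself, and the node count is just an example.

```python
import numpy as np

def iterate_minibatches(index, batch_size, shuffle=True):
    """Yield batches of at most `batch_size` training-node indices."""
    index = np.array(index)
    if shuffle:
        index = np.random.permutation(index)
    for start in range(0, len(index), batch_size):
        yield index[start:start + batch_size]

train_index = np.arange(140)  # e.g. 140 labelled training nodes, as in Cora
for batch in iterate_minibatches(train_index, batch_size=20):
    # each step would slice the normalized adjacency to this batch's rows
    # and feed it as the top-layer support, in the FastGCN style
    pass
```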