Hyper-SAGNN
Problem training both models, classifier_model and Randomwalk_Word2vec
Hello, I really admire your work. But I noticed that the last line of main.py can train both models, classifier_model and Randomwalk_Word2vec, while setting loss=(loss, 1.0), (loss2, 0.0) trains only classifier_model, right?
Then here is my question: when setting loss=(loss, 1.0), (loss2, 1.0), the code raises an error:
And I want to know whether classifier_model alone performs better, or both models together?
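For what it's worth, the (loss, weight) pairs in the question amount to a weighted sum of the two objectives, so a weight of 0.0 simply drops that term. A minimal sketch with hypothetical loss values (not the repo's actual training loop):

```python
import torch

# Hypothetical loss values standing in for the classifier and
# random-walk objectives.
loss_classifier = torch.tensor(0.8)
loss_randomwalk = torch.tensor(1.5)

# (loss, weight) pairs as written in the question; a weight of 0.0
# removes that model's contribution from the total objective.
pairs = [(loss_classifier, 1.0), (loss_randomwalk, 0.0)]
total = sum(w * l for l, w in pairs)
print(float(total))  # 0.8
```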
Hi, thank you for your interest. Yes, with newer versions of TensorFlow I stopped maintaining the code for the random walk part. The answer is: if you use the model in adj mode, then the classifier_model alone works well. If it's random walk based, then that part of the loss is also preferred. A quick fix might be to change line 132 of main.py from
example_emb = model.forward_u(examples)
to
example_emb, _ = model.forward_u(examples)
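The change works because forward_u returns a tuple rather than a single tensor, so only the first element should be kept. A minimal, self-contained illustration with a toy model (not the actual Hyper-SAGNN code):

```python
import torch
import torch.nn as nn

# Toy stand-in: like the repo's model, forward_u returns a
# (embedding, auxiliary-output) tuple rather than a single tensor.
class ToyModel(nn.Module):
    def __init__(self, n_nodes=10, dim=4):
        super().__init__()
        self.emb = nn.Embedding(n_nodes, dim)

    def forward_u(self, examples):
        e = self.emb(examples)
        return e, e.norm(dim=-1)  # second value mimics the extra output

model = ToyModel()
examples = torch.tensor([1, 2, 3])

# Unpacking only the first element yields a tensor usable downstream;
# assigning the whole return value would give a tuple and break any
# code that expects a tensor.
example_emb, _ = model.forward_u(examples)
print(example_emb.shape)  # torch.Size([3, 4])
```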
In addition, main_torch.py gets rid of the TensorFlow 1.0 dependency and is slightly more up to date (while losing support for the random walk part of the model).
It still can't run; I got this error. Have you tried it?
I am also interested in your new version that removes the random walk: is it because the adj mode gives the best results, or because training both models (classifier_model and Randomwalk_Word2vec) improves things only slightly?
The latter. Training both models did offer some advantages when we did the benchmarking in the paper, but in later applications of the model to other datasets, I found that training one model is good enough on its own.