
Recreating the adjacency matrix using the VGAE concept

akpas001 opened this issue 2 years ago · 4 comments

Hi, I have been trying to recreate the adjacency matrix of my sparse graph using the same VGAE concept, but I am not able to reconstruct it. Do you think any preprocessing is necessary for such sparse graphs? Please let me know. I am attaching the data and code for your reference, along with the results I am able to reproduce with this code. Please feel free to go through the code and suggest any necessary changes. Thank you!

P.S.: The graphs are unidirectional and do not have self-loops either.

[states.zip](https://github.com/DaehanKim/vgae_pytorch/files/8883726/states.zip)
adjacency_pred_vgae.txt

(attached images: reconstruction results)
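As a rough answer to the preprocessing question: the original GAE/VGAE setup normalizes the adjacency matrix before feeding it to the GCN encoder, even when the target graph itself has no self-loops. Below is a minimal sketch; the function and variable names are illustrative, and symmetrizing the directed graph is an assumption (the standard inner-product decoder is symmetric, so it cannot represent one-way edges anyway).

```python
import numpy as np
import scipy.sparse as sp
import torch

def preprocess_adj(adj):
    """VGAE-style preprocessing for a binary adjacency matrix (scipy sparse).

    Symmetrizes the directed graph, adds self-loops for message passing only,
    and applies the symmetric normalization D^{-1/2} (A + I) D^{-1/2} that
    GCN encoders expect.
    """
    adj = sp.csr_matrix(adj)
    adj = ((adj + adj.T) > 0).astype(np.float32)   # symmetrize the directed graph
    adj_ = adj + sp.eye(adj.shape[0])              # self-loops for propagation only
    deg = np.asarray(adj_.sum(axis=1)).flatten()
    d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0         # guard isolated nodes
    d_mat = sp.diags(d_inv_sqrt)
    adj_norm = sp.coo_matrix(d_mat @ adj_ @ d_mat)
    indices = torch.tensor(np.vstack((adj_norm.row, adj_norm.col)), dtype=torch.long)
    values = torch.tensor(adj_norm.data, dtype=torch.float32)
    return torch.sparse_coo_tensor(indices, values, adj_norm.shape)
```

The loop-free adjacency can still be kept as the reconstruction target if that better matches the downstream use; the normalization above only affects what the encoder propagates over.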

akpas001 · Jun 11 '22

What is the purpose of reconstructing your adjacency matrix? Since your graph is sparse, there is not much training signal for some nodes, which results in inaccurate edge reconstruction. You may need some auxiliary approach to model your dataset.
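One concrete handle on the sparsity issue, used in most GAE/VGAE implementations, is to re-weight the rare positive entries in the reconstruction loss so the decoder does not collapse to predicting "no edge" everywhere. A minimal sketch, assuming the decoder produces raw logits over all node pairs (names are illustrative):

```python
import torch.nn.functional as F

def weighted_recon_loss(logits, adj_label):
    """Reconstruction loss that up-weights the rare positive (edge) entries.

    `logits` are raw scores from the inner-product decoder, `adj_label` is the
    dense 0/1 target adjacency. In a very sparse graph the positive class is
    tiny, so pos_weight rebalances the two classes.
    """
    n = adj_label.numel()
    n_pos = adj_label.sum()                  # assumes at least one edge
    pos_weight = (n - n_pos) / n_pos         # ratio of non-edges to edges
    norm = n / (2.0 * (n - n_pos))           # rescales the mean loss
    return norm * F.binary_cross_entropy_with_logits(
        logits, adj_label, pos_weight=pos_weight
    )
```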

DaehanKim · Jun 15 '22

I am trying to create a custom policy for my reinforcement learning agent to train with. I am generating this data from my reinforcement learning environment. What kind of auxiliary approaches should I be using? Can you shed some light on them?

akpas001 · Jun 18 '22

Why don't you use the true adjacency matrix as a reward signal, instead of reconstructing it? I don't have much to say about auxiliary approaches since I have no clue about your task. Can you elaborate more on that?
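Purely as an illustration of this suggestion (the function names and the choice of metric are assumptions), the reward could be any similarity between the predicted and the true adjacency returned by the environment, for example an edge-level F1 score:

```python
import torch

def adjacency_reward(adj_pred_prob, adj_true, threshold=0.5):
    """Toy reward: edge-level F1 between predicted and true adjacency.

    `adj_pred_prob` holds edge probabilities in [0, 1]; `adj_true` is the 0/1
    ground-truth matrix. Negative BCE or Jaccard similarity would work too.
    """
    pred = (adj_pred_prob > threshold).float()
    tp = (pred * adj_true).sum()
    precision = tp / pred.sum().clamp(min=1.0)
    recall = tp / adj_true.sum().clamp(min=1.0)
    return (2 * precision * recall / (precision + recall).clamp(min=1e-8)).item()
```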

DaehanKim · Jun 22 '22

So, what exactly I am doing is training the encoder to be a feature extractor using the VGAE concept, and then using the trained encoder as the feature extractor for my custom policy network, which is trained by a reinforcement learning agent. To achieve that, my variational autoencoder needs to work, which is not happening in my case.

The environment returns the reward based on the predicted next_state, so I cannot feed the next_state itself to it as a reward signal. Earlier I was feeding the state and action to the network to predict both the next_state and the reward, but I faced the same issue there as well: I could not reproduce a next_state similar to the one returned when a certain action is fed to the environment's step function. So, as an alternative, I am trying this approach.
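For what it's worth, a minimal sketch of the intended setup, with the trained VGAE encoder frozen and reused as a feature extractor for a policy head. The encoder interface `(features, adj_norm) -> (mu, logvar)` and all names here are assumptions, not the repo's actual API:

```python
import torch.nn as nn

class GraphPolicy(nn.Module):
    """Policy head on top of a pre-trained (and optionally frozen) VGAE encoder."""

    def __init__(self, encoder, latent_dim, n_actions, freeze_encoder=True):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:
            for p in self.encoder.parameters():
                p.requires_grad = False            # keep the extractor fixed during RL
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, features, adj_norm):
        mu, _ = self.encoder(features, adj_norm)   # node-level latent means
        graph_repr = mu.mean(dim=0)                # simple mean pooling over nodes
        return self.head(graph_repr)               # action logits for the agent
```

Whether to freeze the encoder or fine-tune it with the policy gradient is a design choice; freezing is the safer default if the VGAE reconstruction is already unreliable.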

akpas001 · Jun 22 '22