
Using add_graph multiple times?

Open · aymenmir94 opened this issue 5 years ago • 7 comments

If I use add_graph multiple times, the writer overwrites all the previous networks graphs and displays only the last graph. Is this the expected behaviour?

writer = SummaryWriter()
writer.add_scalars(...)
...
writer.add_graph(net1, data1)
writer.add_graph(net2, data2)

Is there a way to use the same writer to display multiple network graphs

aymenmir94 avatar Jan 08 '19 20:01 aymenmir94

Yes, that is expected, based on the assumption that there is no need to save the graph multiple times. Do you have a dynamic graph structure?

lanpa avatar Jan 09 '19 11:01 lanpa

No. I am trying to save both discriminator and generator networks in the same run.

aymenmir94 avatar Jan 09 '19 11:01 aymenmir94

Any solution so far? I'm trying to do the same.

segalinc avatar Feb 14 '19 01:02 segalinc

Hi, all. I made some attempts to implement this feature. The best outcome was one run with multiple selectable steps, but steps other than the last recorded one are marked as outdated because of TensorBoard's implementation. Although this behaviour could be changed in TensorBoard's source code, I would rather keep tensorflow/tensorboard intact. A simple solution would be to wrap the D and G models in a single sequential module.
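
A minimal sketch of that idea (hypothetical class and variable names; it assumes the discriminator accepts the generator's output directly):

import torch.nn as nn

class GANWrapper(nn.Module):
    def __init__(self, generator, discriminator):
        super().__init__()
        self.generator = generator
        self.discriminator = discriminator

    def forward(self, z):
        # A single forward pass traces both subgraphs: G(z) feeds into D.
        fake = self.generator(z)
        return self.discriminator(fake)

# writer.add_graph(GANWrapper(G, D), z_sample)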

lanpa avatar Feb 22 '19 01:02 lanpa

Hi, I'm also trying to do this. @lanpa how do I implement the first solution you suggested? Or is there an option to utilize the 'runs' or 'session runs' functionality in Tensorboard? My goal is to output many graphs in one "view" of Tensorboard (running many NN archs in one experiment). Thanks
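
One possible way to use the runs functionality (a sketch with hypothetical model and input names): write each graph through its own SummaryWriter pointing at a subdirectory of the same log root, so each architecture appears as a separate selectable run in the Graphs tab.

from tensorboardX import SummaryWriter

# net1/dummy1 etc. are hypothetical models with matching example inputs.
models = {"arch1": (net1, dummy1), "arch2": (net2, dummy2)}

for name, (net, dummy_input) in models.items():
    # One log subdirectory per architecture => one selectable run in TensorBoard.
    writer = SummaryWriter("runs/experiment-1/" + name)
    writer.add_graph(net, dummy_input)
    writer.close()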

erap129 avatar May 19 '19 12:05 erap129

Maybe not the most elegant solution, but I currently use the following workaround to display both the actor and critic of the SAC architecture in one graph plot:

Workaround

  1. Make sure that the networks you want to display (in my case the actor and critic) are class-based, i.e. they inherit from the torch.nn.Module class.
  2. Create a wrapper class that also inherits from torch.nn.Module.
  3. Add both network 1 and network 2 as attributes of the wrapper class.
  4. Define a forward function on the wrapper class that calls the forward functions of both network 1 and network 2 and returns their results.
  5. Use the wrapper class in your add_graph call.

Example Code

from tensorboardX import SummaryWriter
import torch
import torch.nn as nn

import gym


def mlp(sizes, activation, output_activation=nn.Identity):
    """Create a multi-layered perceptron using pytorch."""
    layers = []
    for j in range(len(sizes) - 1):
        act = activation if j < len(sizes) - 2 else output_activation
        layers += [nn.Linear(sizes[j], sizes[j + 1]), act()]
    return nn.Sequential(*layers)


class Network1(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden_sizes, activation):
        super().__init__()
        self.q = mlp([obs_dim + act_dim] + list(hidden_sizes) + [1], activation)

    def forward(self, obs, act):
        q = self.q(torch.cat([obs, act], dim=-1))
        return torch.squeeze(q, -1)  # Critical to ensure q has right shape.


class Network2(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden_sizes, activation):
        super().__init__()
        self.q = mlp([obs_dim + act_dim] + list(hidden_sizes) + [1], activation)

    def forward(self, obs, act):
        q = self.q(torch.cat([obs, act], dim=-1))
        return torch.squeeze(q, -1)  # Critical to ensure q has right shape.


class Wrapper(nn.Module):
    def __init__(
        self,
        observation_space,
        action_space,
        hidden_sizes=(256, 256),
        activation=nn.ReLU,
    ):
        super().__init__()

        obs_dim = observation_space.shape[0]
        act_dim = action_space.shape[0]

        # Build the two critic networks.
        self.net1 = Network1(obs_dim, act_dim, hidden_sizes, activation)
        self.net2 = Network2(obs_dim, act_dim, hidden_sizes, activation)

    def forward(self, obs, act):

        # Perform a forward pass through all the networks and return the result
        q1 = self.net1(obs, act)
        q2 = self.net2(obs, act)
        return q1, q2


if __name__ == "__main__":

    # Create tensorboard writer
    writer = SummaryWriter("runs/exp-1")

    # Create environment
    env = gym.make("Pendulum-v0")

    # Create a wrapper network
    wrapper = Wrapper(
        observation_space=env.observation_space,
        action_space=env.action_space,
        hidden_sizes=(256, 256),
        activation=nn.ReLU,
    )

    # Add combined graph to tensorboard
    writer.add_graph(
        wrapper,
        (
            torch.Tensor(env.observation_space.sample()),
            torch.Tensor(env.action_space.sample()),
        ),
    )

    # Close tensorboard writer
    writer.close()
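
After running this script and starting TensorBoard with tensorboard --logdir runs, both subnetworks should appear as child nodes of the Wrapper module in the Graphs tab.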

rickstaa avatar Aug 22 '20 16:08 rickstaa

Great, I tested it and it works!

cht619 avatar Sep 16 '20 01:09 cht619