tensorboardX
writer.add_graph(mdl, args) runs without error, but no graph appears in TensorBoard
I am able to use tensorboardX to view scalar values without any problem. Then I decided to use the add_graph function of the same SummaryWriter to visualize the computation graph.
After a few rounds of trial and error I was able to execute the add_graph function without errors. However, my TensorBoard was not updated with a view of the computation graph.
I am using PyTorch v1.0.0.
I wonder what I missed, or, if I missed nothing, what the correct way is to access the saved graph in TensorBoard.
How did you use the function?
This is a simple workflow that trains a sequence-to-sequence model. And this is how I'm using the function.
```python
writer = SummaryWriter(log_dir=get_log_dir('tensorboard', self.args))
for epoch_id in range(self.start_epoch, self.num_epochs):
    print('Epoch {}'.format(epoch_id))
    # Update learning rate scheduler
    lr_scheduler.step()
    writer.add_scalar('learning_rate/{}'.format(self.dataset),
                      lr_scheduler.get_lr()[0], epoch_id)
    # Update model parameters
    self.train()
    self.batch_size = self.train_batch_size
    batch_losses = []
    for _ in tqdm(range(num_batches)):
        self.optim.zero_grad()
        loss = self.loss(mini_batch)
        loss['loss_value'].backward()
        if self.grad_norm > 0:
            clip_grad_norm_(self.parameters(), self.grad_norm)
        self.optim.step()
        batch_losses.append(loss['printable_loss'])
        writer.add_graph(self.mdl(formatted_batch[0], formatted_batch[1][0]),
                         formatted_batch)
    # Check training statistics
    stdout_msg = 'Epoch {}: average training loss = {}'.format(
        epoch_id, np.mean(batch_losses))
    writer.add_scalar('cross_entropy_loss/{}'.format(self.dataset),
                      np.mean(batch_losses), epoch_id)
    print(stdout_msg)
    self.save_checkpoint(checkpoint_id=epoch_id, epoch_id=epoch_id)
```
add_graph should only be called once per SummaryWriter. Is there any warning message?
No, there wasn't. Should I simply call add_graph once, after the writer is created?
Yes, please check https://github.com/lanpa/tensorboardX/blob/master/examples/demo_graph.py
Hello, I am getting a similar issue. I am using pytorch == 0.3.0 and tensorboardx == 1.1.
I have followed demo_graph.py; add_graph() works without any error, and I have called it only once. I am able to view other scalar values in TensorBoard, but I am unable to see any graph.
Echoing the message above: the issue remains unsolved for me.
@todpole3 I just released tensorboardX v1.8. Would you give it a try?