pytorch-CycleGAN-and-pix2pix
How to plot losses after training is complete?
My system was hanging, which is why I set the display id to -1. Now I want to display the losses via the visdom server; how do I do it? How do I plot the losses from the loss_log.txt file?
Could you try to start the visdom server before the training?
python -m visdom.server
Yeah, at first I trained the model with the visdom server turned on, but my training kept getting stuck, so I turned it off. Now that training is finished, I want to visualize the loss plots. I'm looking for a way to display the loss plots after training is completed.
It is currently not supported by the repo. Feel free to write your own script, such as a matplotlib script that parses the log file.
Hi,
you can use this code to create a dataframe with the losses, where path_txt is the path to the generated log file. It skips the part in parentheses (the epoch/iters prefix), but that should be relatively easy to add if you are interested in it as well.
import pandas as pd

path_txt = '/path/to/txt'

# read the log file
with open(path_txt, 'r') as file1:
    lines = file1.readlines()

dicts = list()
for i, line in enumerate(lines):
    # skip the header line(s) at the top of the file
    if i < 2:
        continue
    # drop the "(epoch: ..., iters: ...)" prefix and split the loss part into tokens
    parts = line.split(') ')[1].split(' ')
    parts.pop(-1)  # drop the trailing newline token
    dict_tmp = dict()
    dict_tmp['D_A'] = float(parts[1])
    dict_tmp['G_A'] = float(parts[3])
    dict_tmp['cycle_A'] = float(parts[5])
    dict_tmp['idt_A'] = float(parts[7])
    dict_tmp['D_B'] = float(parts[9])
    dict_tmp['G_B'] = float(parts[11])
    dict_tmp['cycle_B'] = float(parts[13])
    dict_tmp['idt_B'] = float(parts[15])
    dicts.append(dict_tmp)
df = pd.DataFrame(dicts)
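If you go this route, here is a minimal follow-up sketch (my addition, assuming you keep the df built above and have matplotlib installed) to actually plot the parsed losses:

import matplotlib.pyplot as plt

# each column of df is one loss term; the index is simply the order in which lines were logged
ax = df.plot(figsize=(10, 5), title='Losses parsed from loss_log.txt')
ax.set_xlabel('logged line index')
ax.set_ylabel('loss value')
plt.show()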
I also wrote some code to plot losses after training using the log file. The main benefit (in contrast to @phiwei's code) is that it works for every model, not only CycleGAN. It's probably not perfect, but it can be used as a baseline. NB: "experiment_name" is the path to the folder that contains the log file, and do not forget to change nb_data.
import os
import matplotlib.pyplot as plt
import re


def generate_stats_from_log(experiment_name, line_interval=10, nb_data=10800, enforce_last_line=True):
    """
    Generate a chart with all losses from the log file generated by the CycleGAN/Pix2pix/CUT framework.
    """
    # read every line of the log file
    with open(os.path.join(experiment_name, "loss_log.txt"), 'r') as f:
        lines = f.readlines()
    # choose the lines to use for plotting
    lines_for_plot = []
    for i in range(1, len(lines)):
        if (i - 1) % line_interval == 0:
            lines_for_plot.append(lines[i])
    if enforce_last_line:
        lines_for_plot.append(lines[-1])
    # initialize dict with loss names
    dicts = dict()
    dicts["epoch"] = []
    parts = (lines_for_plot[0]).split(') ')[1].split(' ')
    for i in range(0, len(parts) // 2):
        dicts[parts[2 * i][:-1]] = []
    # extract all data
    pattern = "epoch: ([0-9]+), iters: ([0-9]+)"
    for l in lines_for_plot:
        search = re.search(pattern, l)
        epoch = int(search.group(1))
        epoch_floatpart = int(search.group(2)) / nb_data
        dicts["epoch"].append(epoch + epoch_floatpart)  # allow several points for the same epoch
        parts = l.split(') ')[1].split(' ')
        for i in range(0, len(parts) // 2):
            dicts[parts[2 * i][:-1]].append(float(parts[2 * i + 1]))
    # plot everything
    plt.figure()
    for key in dicts.keys():
        if key != "epoch":
            plt.plot(dicts["epoch"], dicts[key], label=key)
    plt.legend(loc="best")
    plt.show()
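As a usage sketch (the folder name below is just a hypothetical example; set nb_data to your own number of training images):

# plot every 5th logged line of ./checkpoints/my_experiment/loss_log.txt,
# assuming the training set contained 1000 images
generate_stats_from_log('./checkpoints/my_experiment', line_interval=5, nb_data=1000)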
In the log file, "iters" is the index of the currently processed image within the epoch. In my code, I use "nb_data" to place the losses along the horizontal axis. An example: if you have 1000 training images (nb_data=1000), then:
"epoch: 1, iters: 100" corresponds to calculated_epoch = 1 + 100/nb_data = 1.1
"epoch: 1, iters: 200" corresponds to calculated_epoch = 1 + 200/nb_data = 1.2
...
"epoch: 2, iters: 500" corresponds to calculated_epoch = 2 + 500/nb_data = 2.5
I use these "calculated_epoch" values for the horizontal axis in the plot.
I hope that is clear. So, to answer your question, just set nb_data to the number of training images.
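In other words, the x-axis value is computed like this (a one-line restatement of the formula above, with a hypothetical helper name):

def calculated_epoch(epoch, iters, nb_data):
    # fractional epoch, e.g. epoch=1, iters=100, nb_data=1000 -> 1.1
    return epoch + iters / nb_data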
On 25/03/2022 at 08:07, michaelku1 wrote:
It'd be helpful if you could tell us what nb_data is; it doesn't quite make sense to me.
Yeah, I managed to figure it out later, after I had deleted my comment. Thanks very much.
I got this error when testing the script provided by @GuigzoS:
AttributeError: 'NoneType' object has no attribute 'group'
I checked the regex pattern that is used, and with it I was able to retrieve the data.
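That AttributeError means re.search returned None for at least one of the selected lines, typically a header/separator line such as "================ Training Loss ... ================" (an extra one is appended each time training is resumed). A small workaround, not part of @GuigzoS's original script, is to filter out such lines before parsing, for example with a hypothetical helper like keep_parsable_lines applied to lines_for_plot inside the function:

import re

def keep_parsable_lines(lines):
    # keep only lines that contain the "(epoch: ..., iters: ...)" prefix,
    # so re.search never returns None in the parsing loop
    pattern = "epoch: ([0-9]+), iters: ([0-9]+)"
    return [l for l in lines if re.search(pattern, l) is not None]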