How to test mnist and cifar10 datasets?
I don't see a test.py file in the folder. Do I need to write a test.py file to test the MNIST and CIFAR-10 datasets, or are there parameters I can modify to run testing? Can anyone help me? Thank you!
python train.py --dataset mnist --abnormal_class 3 --nc 1
python train.py --dataset cifar10 --abnormal_class car
@zhouwei342622 you can follow these three steps (see the sketch after the list):
- Change 'model.train()' to 'model.test()' in train.py
- Set self.opt.save_test_images = 'store_true' in model.py
- python train.py --dataset cifar10 --abnormal_class car
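For reference, here is a minimal sketch of what the entry point could look like after steps 1 and 2. The import paths and class names are assumptions based on my checkout of the repo, so adjust them to match your copy; setting save_test_images from the script is just an alternative to editing model.py by hand.

# sketch of train.py after the edit -- import paths and class names are assumptions
from options import Options          # repo's argument parser (assumed)
from lib.data import load_data       # repo's dataloader factory (assumed)
from lib.model import Ganomaly       # repo's model class (assumed)

def main():
    opt = Options().parse()
    opt.save_test_images = True      # same effect as step 2, without editing model.py
    dataloader = load_data(opt)
    model = Ganomaly(opt, dataloader)
    model.test()                     # step 1: was model.train()

if __name__ == '__main__':
    main()

Then run it exactly as in step 3.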
Here is my test code:
def test(self):
    """ Test GANomaly model.

    Args:
        dataloader ([type]): Dataloader for the test set

    Raises:
        IOError: Model weights not found.
    """
    with torch.no_grad():
        self.opt.save_test_images = 'store_true'
        self.opt.load_weights = 'store_true'

        # Load the weights of netg and netd.
        if self.opt.load_weights:
            path = '/home/lb/soft/ganomaly/output/ganomaly/cifar10/train/weights/netG.pth'
            # path = "./output/{}/{}/train/weights/netG.pth".format(self.name.lower(), self.opt.dataset)
            pretrained_dict = torch.load(path)['state_dict']

            try:
                self.netg.load_state_dict(pretrained_dict)
            except IOError:
                raise IOError("netG weights not found")
            print('   Loaded weights.')

        self.opt.phase = 'test'

        # Create big error tensor for the test set.
        self.an_scores = torch.zeros(size=(len(self.dataloader['test'].dataset),), dtype=torch.float32, device=self.device)
        self.gt_labels = torch.zeros(size=(len(self.dataloader['test'].dataset),), dtype=torch.long, device=self.device)
        self.latent_i = torch.zeros(size=(len(self.dataloader['test'].dataset), self.opt.nz), dtype=torch.float32, device=self.device)
        self.latent_o = torch.zeros(size=(len(self.dataloader['test'].dataset), self.opt.nz), dtype=torch.float32, device=self.device)

        print("   Testing model %s." % self.name)
        self.times = []
        self.total_steps = 0
        epoch_iter = 0

        for i, data in enumerate(self.dataloader['test'], 0):
            self.total_steps += self.opt.batchsize
            epoch_iter += self.opt.batchsize
            time_i = time.time()

            self.set_input(data)
            self.fake, latent_i, latent_o = self.netg(self.input)

            error = torch.mean(torch.pow((latent_i - latent_o), 2), dim=1)
            time_o = time.time()

            self.an_scores[i*self.opt.batchsize : i*self.opt.batchsize + error.size(0)] = error.reshape(error.size(0))
            self.gt_labels[i*self.opt.batchsize : i*self.opt.batchsize + error.size(0)] = self.gt.reshape(error.size(0))
            self.latent_i[i*self.opt.batchsize : i*self.opt.batchsize + error.size(0), :] = latent_i.reshape(error.size(0), self.opt.nz)
            self.latent_o[i*self.opt.batchsize : i*self.opt.batchsize + error.size(0), :] = latent_o.reshape(error.size(0), self.opt.nz)

            self.times.append(time_o - time_i)

            # Save test images.
            if self.opt.save_test_images:
                dst = os.path.join(self.opt.outf, self.opt.name, 'test', 'images')
                if not os.path.isdir(dst):
                    os.makedirs(dst)
                real, fake, _ = self.get_current_images()
                vutils.save_image(real, '%s/real_%03d.eps' % (dst, i+1), normalize=True)
                vutils.save_image(fake, '%s/fake_%03d.eps' % (dst, i+1), normalize=True)

        # Measure inference time.
        self.times = np.array(self.times)
        self.times = np.mean(self.times[:100] * 1000)

        # Scale error vector between [0, 1]
        self.an_scores = (self.an_scores - torch.min(self.an_scores)) / (torch.max(self.an_scores) - torch.min(self.an_scores))

        # auc, eer = roc(self.gt_labels, self.an_scores)
        auc = evaluate(self.gt_labels, self.an_scores, metric=self.opt.metric)
        performance = OrderedDict([('Avg Run Time (ms/batch)', self.times), ('AUC', auc)])
        print(auc, performance)

        if self.opt.display_id > 0 and self.opt.phase == 'test':
            counter_ratio = float(epoch_iter) / len(self.dataloader['test'].dataset)
            self.visualizer.plot_performance(self.epoch, counter_ratio, performance)
        return performance
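One note on the snippet above: the netG.pth path is hardcoded to my machine. Following the commented-out expression in the code, a more portable form (assuming self.opt.outf points at the same ./output directory that training wrote to) would be something like:

path = os.path.join(self.opt.outf, self.name.lower(), self.opt.dataset, 'train', 'weights', 'netG.pth')
# e.g. ./output/ganomaly/cifar10/train/weights/netG.pth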
I cannot get it to work with this method. Why?
Did you solve this problem?
def roc(labels, scores, saveto=None):
    """Compute ROC curve and ROC area for each class"""
    fpr = dict()
    tpr = dict()
    roc_auc = dict()

    labels = labels.cpu()
    scores = scores.cpu()

    # True/False Positive Rates.
    fpr, tpr, _ = roc_curve(labels, scores)
    roc_auc = auc(fpr, tpr)

    # Equal Error Rate
    eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.)

    if saveto:
        plt.figure()
        lw = 2
        plt.plot(fpr, tpr, color='darkorange', lw=lw, label='(AUC = %0.2f, EER = %0.2f)' % (roc_auc, eer))
        plt.plot([eer], [1-eer], marker='o', markersize=5, color="navy")
        plt.plot([0, 1], [1, 0], color='navy', lw=1, linestyle=':')
        plt.xlim([0.0, 1.0])
        plt.ylim([0.0, 1.05])
        plt.xlabel('False Positive Rate')
        plt.ylabel('True Positive Rate')
        plt.title('Receiver operating characteristic')
        plt.legend(loc="lower right")
        plt.savefig(os.path.join(saveto, "ROC.pdf"))
        plt.close()

    return roc_auc
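As a side note, if you want the curve written to disk rather than plotted live, one option (a sketch, not part of the original code) is to call roc() with saveto inside test(), after the anomaly scores have been scaled to [0, 1]:

# inside test(), after self.an_scores has been scaled to [0, 1]
dst = os.path.join(self.opt.outf, self.opt.name, 'test', 'plots')   # hypothetical output folder
if not os.path.isdir(dst):
    os.makedirs(dst)
auc_value = roc(self.gt_labels, self.an_scores, saveto=dst)          # writes ROC.pdf into dst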
How can I display this?
Hello, I ran the test following the method you gave, but the output images look like mosaics; could you explain why? Another problem: the number of normal samples in the test set differs from the number of abnormal samples, yet after testing the same number of each is saved, and there is no way to tell which test-set image corresponds to which output image. Could you please advise?
To see the ROC plot: first, start the visdom server with 'python -m visdom.server', then open a web browser at http://localhost:8097 and the plot will appear there. Note that the plot is only sent to visdom when self.opt.display_id > 0 (see the check at the end of the test() code above).
Thank you!!
Hi there, did you figure out how to run testing on the dataset?
After it finishes running, I only get the real and fake images. How do I view the ROC curve? Nothing shows up when I use visdom.
I have the same problem with the mosaic-like output images. Have you solved it? Thanks!