photorealistic_style_transfer
About loss...
Thanks for your great work! When I train, I get an error: ValueError: When passing a list as loss, it should have one entry per model outputs. The model has 1 outputs, but you passed loss=['mse', <bound method WCT2.gram_loss of <model.WCT2 object at 0x7fa85e2974d0>>]
After modifying self.wct.compile(optimizer=Adam(self.lr), loss=["mse", self.gram_loss]) to self.wct.compile(optimizer=Adam(self.lr), loss=[self.gram_loss]), it works, but the loss is very large. What's the problem? Thanks!
Epoch 1/10
1/1250 [..............................] - ETA: 1:54:33 - loss: 3585171783680.0000
2/1250 [..............................] - ETA: 1:01:06 - loss: 2059623989248.0000
3/1250 [..............................] - ETA: 43:16 - loss: 1529374932992.0000
4/1250 [..............................] - ETA: 34:19 - loss: 1247480791040.0000
5/1250 [..............................] - ETA: 28:56 - loss: 1053487792128.0000
6/1250 [..............................] - ETA: 25:21 - loss: 901079168341.3334
7/1250 [..............................] - ETA: 22:47 - loss: 797250371584.0000
8/1250 [..............................] - ETA: 20:52 - loss: 704853544960.0000
9/1250 [..............................] - ETA: 19:22 - loss: 639602908273.7778
10/1250 [..............................] - ETA: 18:11 - loss: 579075877683.2000
11/1250 [..............................] - ETA: 17:12 - loss: 532678425506.9091
12/1250 [..............................] - ETA: 16:23 - loss: 492782063616.0000
13/1250 [..............................] - ETA: 15:42 - loss: 458466281944.6154
Could you tell me your TF version?
It's OK. Maybe I will add some visualization after a period of epochs to check whether the result is good.
1.15.0
I trained this model on TF2.
I recommend you upgrade to TF2 or apply a fix like this:
https://stackoverflow.com/questions/51705464/keras-tensorflow-combined-loss-function-for-single-output
You need two loss functions, mean_squared_error and gram_loss. You may want to combine them into a single loss.
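Since the model has a single output, the two terms can be folded into one loss callable instead of passing a list. Here is a minimal sketch assuming TF2 / tf.keras; the `gram_matrix` helper and the `style_weight` factor are illustrative assumptions, not the repository's exact code:

```python
import tensorflow as tf

def gram_matrix(x):
    # x: (batch, H, W, C) feature maps -> (batch, C, C) Gram matrices,
    # normalized by the number of spatial positions.
    b, h, w, c = tf.unstack(tf.shape(x))
    feats = tf.reshape(x, (b, h * w, c))
    gram = tf.matmul(feats, feats, transpose_a=True)
    return gram / tf.cast(h * w, tf.float32)

def combined_loss(style_weight=1e-2):
    # Returns a single loss callable: pixel MSE plus a weighted
    # Gram-matrix (style) term, so compile() gets one loss for one output.
    def loss_fn(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        style = tf.reduce_mean(
            tf.square(gram_matrix(y_true) - gram_matrix(y_pred)))
        return mse + style_weight * style
    return loss_fn

# Usage (hypothetical):
# self.wct.compile(optimizer=Adam(self.lr), loss=combined_loss())
```

The relative scale of the two terms differs a lot, so `style_weight` typically needs tuning.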
How many epochs have you trained?
I trained 10 epochs, and the visualized results are very bad.
Your modification self.wct.compile(optimizer=Adam(self.lr), loss=[self.gram_loss]) is missing the mean_squared_error loss function.
python3 train.py --train-tfrec /content/tfrecords/train.tfrec\
--val-tfrec /content/tfrecords/val.tfrec\
--epochs 100\
--resume\
--batch-size 8\
--lr 2e-4
I trained with this script
I cannot find 'mean_squared_error' in your code.
I fixed the bug. After updating to TF 2.2, the initial loss is [loss: 5860.0898], which is still large.
Does the result look better?
Still training
Are you Vietnamese?
Yes
Nice to meet you. Is your research on GANs?
Yes, a little bit, during my thesis.
What social software do you use? We can communicate
Do you use Skype?
No. Do you use WeChat?