prototypical-networks-tensorflow

I have a problem.

Open mostoo45 opened this issue 6 years ago • 19 comments

I am currently using your ProtoNet-Omniglot.ipynb code.

I have not changed the code, but the accuracy and loss values do not change.

I use TensorFlow 1.3.

mostoo45 avatar Feb 28 '18 07:02 mostoo45

Hi @mostoo45

How do I reproduce the issue?

abdulfatir avatar Feb 28 '18 07:02 abdulfatir

Hi @abdulfatir, thank you. I am studying your code and got the following error in In [7]:

ValueError: Variable encoder/conv_1/conv2d/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

File "", line 3, in conv_block conv = tf.layers.conv2d(inputs, out_channels, kernel_size=3, padding='SAME') File "", line 3, in encoder net = conv_block(x, h_dim, name='conv_1') File "", line 16, in emb_x = encoder(tf.reshape(x, [num_classes * num_support, im_height, im_width, channels]), h_dim, z_dim)

So I added tf.reset_default_graph() before the placeholder definitions:

tf.reset_default_graph()  # added line
x = tf.placeholder(tf.float32, [None, None, im_height, im_width, channels])
q = tf.placeholder(tf.float32, [None, None, im_height, im_width, channels])
x_shape = tf.shape(x)
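For what it's worth, the error message also hints at the other common workaround for this TF 1.x error: building the encoder inside a variable scope that allows reuse. A minimal, hedged sketch follows (illustrative names, not the repository's code; tf.AUTO_REUSE needs TF >= 1.4, so on 1.3 you would pass reuse=True explicitly on later calls):

import tensorflow as tf

# Hedged sketch, not the repository's code: names here are illustrative.
def conv_encoder(x, out_channels=64, name='encoder'):
    with tf.variable_scope(name, reuse=tf.AUTO_REUSE):
        return tf.layers.conv2d(x, out_channels, kernel_size=3, padding='SAME')

tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
emb_support = conv_encoder(x)  # first call creates encoder/conv2d/kernel
emb_query = conv_encoder(x)    # second call reuses it instead of raising ValueError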

mostoo45 avatar Feb 28 '18 08:02 mostoo45

@mostoo45, I just ran this on my server and it worked flawlessly. FYI, I use TF 1.6.

mijung-kim avatar Mar 22 '18 04:03 mijung-kim

I have problems reproducing the results too. I tried to reproduce them, but the loss value doesn't change. I thought it could be an initializer issue and tried a few different initializers, but no dice. I also tried TF 1.3 and TF 1.6; neither of them converges.

bdutta19 avatar Mar 26 '18 19:03 bdutta19

Instead of the .ipynb, I use a .py file. By running the code as a .py script instead of the notebook, I got loss and accuracy values similar to those of the original code.

mostoo45 avatar Mar 27 '18 00:03 mostoo45

Hi - are you on CPU or GPU? I just tried converting the code to a .py file; below are the losses. As you can see, they are not changing at all. Also, here is a gist of my .py file: proto-net-omniglot

I don't see anything wrong with the code, so I will keep looking.

(tf16) ➜ Experiments python proto-nets-omnoglot.py
(4112, 20, 28, 28)
2018-03-27 10:00:22.417629: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[epoch 1/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 1/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 2/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 2/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 3/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 3/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 4/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 4/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 5/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 5/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 6/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 6/20, episode 100/100] => loss: 2.30259, acc: 0.10000
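Side note: a frozen loss of 2.30259 at accuracy 0.10000 is exactly what a uniform prediction over a 10-way episode gives (cross-entropy ln N, chance accuracy 1/N), which suggests the network output never changes at all. A quick check:

import math

# Cross-entropy of a uniform softmax over N classes is ln(N); chance accuracy is 1/N.
n_way = 10
print(round(math.log(n_way), 5), 1.0 / n_way)  # prints: 2.30259 0.1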

bdutta19 avatar Mar 27 '18 15:03 bdutta19

I use both of them: TF 1.3 and Python 3.

mostoo45 avatar Mar 27 '18 23:03 mostoo45

@mostoo45 Restarting the kernel and clearing the output worked for me.

PytaichukBohdan avatar Apr 16 '18 11:04 PytaichukBohdan

Same problem for me. I ran Proto-MiniImagenet and got the following:

[epoch 93/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 94/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 94/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 95/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 95/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 96/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 96/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 97/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 97/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 98/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 98/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 99/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 99/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 100/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 100/100, episode 100/100] => loss: 2.99573, acc: 0.05000

Same accuracy for every episode. The weird thing is that this wasn't happening for Proto-Omniglot. My TensorFlow is the GPU build, version 1.3.

themis0888 avatar Apr 20 '18 12:04 themis0888

@themis0888, I think the problem is that you put your data in the wrong place, so the data is not actually fed into the model.

ylfzr avatar Jun 05 '18 03:06 ylfzr

Many people are facing this issue. Can someone look into it?

abdulfatir avatar Sep 26 '18 15:09 abdulfatir

@guohan950106 Hello~ this is the student who faced this issue in #1. Actually, I didn't face the problem anymore after that day. I did nothing, but it just suddenly went away, so I could not figure out what the problem was. Maybe you can reimplement this on your own to solve it.

themis0888 avatar Sep 27 '18 03:09 themis0888

@abdulfatir I am facing the same issue. This is the output after running your code:

[epoch 1/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 1/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 2/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 2/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 3/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 3/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 4/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 4/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 5/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 5/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 6/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 6/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 7/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 7/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 8/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 8/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 9/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 9/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 10/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 10/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 11/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 11/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 12/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 12/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 13/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 13/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 14/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 14/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 15/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 15/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 16/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 16/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 17/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 17/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 18/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 18/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 19/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 19/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 20/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 20/20, episode 100/100] => loss: 4.09434, acc: 0.01667

I found that the problem is that the gradient is not flowing backward; it is zero at each step.
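One way to check that (a hedged TF 1.x sketch; the loss below is a toy stand-in, not the repository's episode loss) is to evaluate the gradient norms explicitly:

import numpy as np
import tensorflow as tf

tf.reset_default_graph()

# Toy stand-in for the model and loss, just to show the gradient check.
x = tf.placeholder(tf.float32, [None, 4])
w = tf.get_variable('w', shape=[4, 1])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

grads_and_vars = tf.train.AdamOptimizer(1e-3).compute_gradients(loss)
grad_norms = [tf.norm(g) for g, _ in grads_and_vars if g is not None]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # All-zero norms would confirm that no gradient reaches the weights.
    print(sess.run(grad_norms, feed_dict={x: np.random.randn(8, 4).astype(np.float32)}))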

Did you find any solution? Any suggestion?

ankishb avatar Jan 19 '19 19:01 ankishb

awesome job!

NanYoMy avatar Mar 13 '19 01:03 NanYoMy

(Quoting @ankishb's comment above: the same frozen log and the observation that the gradient is zero at each step.)

I found that what @ylfzr mentioned is the issue. I was getting the same numbers. It turns out that managing the folders in Colab can be a little messy, and if you don't pay attention you can miss the right data location (that was my case).

sebastianpinedaar avatar Aug 30 '19 14:08 sebastianpinedaar

I also faced the problem of the acc and loss not changing.

wdayang avatar Jul 04 '20 04:07 wdayang

(Quoting @ylfzr's comment above about the data being in the wrong place.)

Yes, after I fixed the location of the data, the acc and loss change.

wdayang avatar Jul 04 '20 04:07 wdayang


Hi @wdayang, how did you manage the location of the data so that the acc and loss change?

ali7amdi avatar Sep 22 '20 14:09 ali7amdi

If your acc and loss do not change at all after multiple episodes, it is most likely because your dataset is misplaced. The correct location should be: prototypical-networks-tensorflow-master\data\omniglot\data\Alphabet_of_the_Magi, and so on for the other alphabet folders.

[epoch 1/20, episode 5/100] => loss: 3.60291, acc: 0.43667
[epoch 1/20, episode 10/100] => loss: 3.25432, acc: 0.55667
[epoch 1/20, episode 15/100] => loss: 3.09199, acc: 0.57333
[epoch 1/20, episode 20/100] => loss: 2.91092, acc: 0.60333
[epoch 1/20, episode 25/100] => loss: 2.78092, acc: 0.59000
[epoch 1/20, episode 30/100] => loss: 2.63616, acc: 0.62667
[epoch 1/20, episode 35/100] => loss: 2.50083, acc: 0.61333
[epoch 1/20, episode 40/100] => loss: 2.40846, acc: 0.69000
[epoch 1/20, episode 45/100] => loss: 2.27202, acc: 0.72667
[epoch 1/20, episode 50/100] => loss: 2.05044, acc: 0.79000
[epoch 1/20, episode 55/100] => loss: 2.03263, acc: 0.78667
[epoch 1/20, episode 60/100] => loss: 1.90013, acc: 0.79667
[epoch 1/20, episode 65/100] => loss: 1.90940, acc: 0.74000
[epoch 1/20, episode 70/100] => loss: 1.69886, acc: 0.80333
[epoch 1/20, episode 75/100] => loss: 1.66013, acc: 0.81000
[epoch 1/20, episode 80/100] => loss: 1.66992, acc: 0.83333
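As a quick sanity check before training, something like the following (a hedged sketch; the paths mirror the layout described above and may need adjusting for your setup) catches the misplaced-data case early:

import os

# Expected Omniglot layout relative to the repository root, per the comment above.
data_root = os.path.join('data', 'omniglot', 'data')
example_alphabet = os.path.join(data_root, 'Alphabet_of_the_Magi')

if not os.path.isdir(example_alphabet):
    raise FileNotFoundError(
        'Omniglot data not found at %s; if the data is not where the loader '
        'expects it, the loss will likely stay frozen at its initial value.'
        % example_alphabet)

print('Found %d alphabet folders under %s' % (len(os.listdir(data_root)), data_root))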

HonFii avatar Jul 05 '22 02:07 HonFii