chainer-fast-neuralstyle
volatile argument is not supported anymore. Use chainer.using_config
I am running into this error when trying to train, and I have no idea why:

```
python train.py -s Beanslitter/DoubleRainbow.png -d dataset/train2014 -g 0
num traning images: 82783
82783 iterations, 2 epochs
Traceback (most recent call last):
  File "train.py", line 100, in <module>
    feature_s = vgg(Variable(style_b, volatile=True))
  File "/home/poop/anaconda2/lib/python2.7/site-packages/chainer/variable.py", line 307, in __init__
    kwargs, volatile='volatile argument is not supported anymore. '
  File "/home/poop/anaconda2/lib/python2.7/site-packages/chainer/utils/argument.py", line 4, in check_unexpected_kwargs
    raise ValueError(message)
ValueError: volatile argument is not supported anymore. Use chainer.using_config
```
Temporarily fixed it by removing all "volatile" arguments in train.py. Was also getting an error about "test=" in generate.py; solved that by removing all "test" entries there. Not sure what their function was, but it works perfectly without them.
Same issue. @ChoclateRain, how did you remove the volatile & test entries? Did you remove the whole lines? It doesn't work for me, thanks a lot.
Also had this issue; fixed it by downgrading the Chainer version:

```
pip install chainer==1.17.0
```
This problem was caused by a Chainer version update. The official explanation is:
> In Chainer v2, the concept of training mode is added. It is represented by a thread-local flag `chainer.config.train`, which is a part of the unified configuration. When `chainer.config.train` is `True`, functions of Chainer run in the training mode, and otherwise they run in the test mode. For example, `BatchNormalization` and `dropout()` behave differently in each mode.
>
> In Chainer v1, such a behavior was configured by the `train` or `test` argument of each function. This `train`/`test` argument has been removed in Chainer v2. If your code is using the `train` or `test` argument, you have to update it. In most cases, what you have to do is just removing the `train`/`test` argument from any function calls.
You can also find examples here: https://docs.chainer.org/en/stable/upgrade.html#global-configurations
@sebastianandreasson what are your versions of CUDA and cuDNN?
Removing "volatile" from the .py file (I removed volatile as a parameter of a Chainer function) did not help me resolve the issue. Has anyone else fixed it?

```python
chainer.Variable(np.asarray(x_test[perm[j:j + batchsize]]))
```
@dianaow did you find any solution?
The solution is written here, I guess: https://docs.chainer.org/en/stable/reference/generated/chainer.Variable.html https://docs.chainer.org/en/stable/reference/generated/chainer.no_backprop_mode.html#chainer.no_backprop_mode

> volatile argument is not supported anymore since v2. Instead, use chainer.no_backprop_mode().
```python
x = chainer.Variable(np.array([1,], np.float32))
with chainer.no_backprop_mode():
    y = x + 1
y.backward()
x.grad is None  # True
```
Hence, the computations where your variables were created with `chainer.Variable(..., volatile=True)` should move into the `with` statement. As said in the doc, this operation "has the benefit of reducing memory consumption".
Could you please send an example? My code looks like this:

```python
style_mats = [get_matrix(y) for y in nn.forward(Variable(img_style, volatile=True))]
```

I don't understand where to put `img_style` so that I can replace `Variable(img_style, volatile=True)` with `x`. My understanding was:

```python
style_mats = chainer.Variable(np.array([1,], np.float32))
with chainer.no_backprop_mode():
    y = style_mats + 1
y.backward()
style_mats.grad is None
```
I found a solution: I just deleted that part of the code. Now I am using:

```python
style_mats = [get_matrix(y) for y in nn.forward(Variable(img_style))]
```