
TensorFlow version is wrong, errors in the code

Open ghost opened this issue 5 years ago • 2 comments

Hi, I am trying to run the code with the specified versions, but I got the error below. Thanks for your help.

WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

  • https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  • https://github.com/tensorflow/addons
  • https://github.com/tensorflow/io (for I/O related ops)

If you depend on functionality not listed there, please file an issue.

Traceback (most recent call last):
  File "imdb_demo.py", line 162, in <module>
    global_step = train_imdb(D_tr, inv_rnn, opts, global_step, args)
  File "/remote/svm/user.active/julia/dev/invariant_rationalization/train.py", line 30, in train_imdb
    inputs, masks, envs)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 679, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/remote/svm/user.active/julia/dev/invariant_rationalization/model.py", line 130, in call
    gen_outputs = self.generator(gen_embeddings)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 679, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/remote/.svm/user.active/julia/dev/invariant_rationalization/model.py", line 59, in call
    h = self.rnn(x)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/wrappers.py", line 533, in __call__
    return super(Bidirectional, self).__call__(inputs, **kwargs)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 679, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "//user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/wrappers.py", line 633, in call
    y = self.forward_layer.call(inputs, **kwargs)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/cudnn_recurrent.py", line 110, in call
    output, states = self._process_batch(inputs, initial_state)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/cudnn_recurrent.py", line 302, in _process_batch
    rnn_mode='gru')
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py", line 109, in cudnn_rnn
    ctx=_ctx)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py", line 197, in cudnn_rnn_eager_fallback
    attrs=_attrs, ctx=_ctx, name=name)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Could not find valid device for node.
Node: {{node CudnnRNN}}
All kernels registered for op CudnnRNN :
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]
 [Op:CudnnRNN]

ghost avatar Jul 13 '20 08:07 ghost

Hi Julia,

It seems the error comes from not having a GPU: the CudnnRNN op only has GPU kernels registered, so it cannot run on a CPU-only machine.

Regarding the error you mentioned in the email, please check your TensorFlow version and the system requirements in the README.
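As a quick diagnostic sketch (assuming the `irm` conda environment from the traceback paths is activated, and the TF 1.x API), you can print the installed version like this:

```shell
# Print the TensorFlow version installed in the active environment
python -c "import tensorflow as tf; print(tf.__version__)"
```

If the version does not match the one pinned in the README, that mismatch is the first thing to fix.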

code-terminator avatar Jul 13 '20 13:07 code-terminator

I have met the same issue. The cause is that matching versions of CUDA and cuDNN are not installed in the environment. To fix it, run:

conda install cudatoolkit=10.0
conda install cudnn
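After installing, one way to confirm that TensorFlow can now see the GPU (a sketch assuming the TF 1.x API, where `tf.test.is_gpu_available` exists; it requires GPU hardware, so the output depends on your machine):

```shell
# Check that a CUDA-capable GPU is visible to TensorFlow (TF 1.x)
python -c "import tensorflow as tf; print(tf.test.is_gpu_available(cuda_only=True))"
```

If this prints False, the CudnnRNN error above will recur, since its kernels are GPU-only.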

srhthu avatar Jan 10 '22 14:01 srhthu