
module 'tensorflow_core._api.v2.train' has no attribute 'Optimizer'

Open ccoay opened this issue 5 years ago • 32 comments

Traceback (most recent call last):
  File "run_classifier.py", line 25, in <module>
    import optimization
  File "D:\bertpro\bert\optimization.py", line 87, in <module>
    class AdamWeightDecayOptimizer(tf.train.Optimizer):
AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'Optimizer'

ccoay avatar Nov 24 '19 06:11 ccoay

I met the same problem with tensorflow 1.15.0.

kahy-shen avatar Nov 29 '19 09:11 kahy-shen

same issue here!

andaziemele avatar Dec 02 '19 10:12 andaziemele

Same issue. My tensorflow-gpu version is 2.0.0

Shawnsyx avatar Dec 12 '19 09:12 Shawnsyx

Same issue. My TensorFlow version is 1.11.0.

Reraaan avatar Dec 13 '19 06:12 Reraaan

It became a warning instead with the 1.14 version.

andaziemele avatar Dec 13 '19 10:12 andaziemele

Same issue with TensorFlow CPU 2.0.0 on Mac!

fabiosoto avatar Dec 17 '19 12:12 fabiosoto

Same issue. My TensorFlow CPU version is 2.0.0.

only-yao avatar Dec 19 '19 03:12 only-yao

Change the TensorFlow version to 1.14.

BestyWang avatar Dec 19 '19 07:12 BestyWang

change class AdamWeightDecayOptimizer(tf.train.Optimizer) to class AdamWeightDecayOptimizer(tf.compat.v1.train.Optimizer)

AndreasWieg avatar Dec 24 '19 07:12 AndreasWieg

This is going to become more prevalent as more users switch to TensorFlow 2. I would suggest that the developers start a new branch for TensorFlow 2 development. As more people switch, that branch can eventually become master. Might want to tag a final release based on TensorFlow 1 or something before cutting over.

GodloveD avatar Jan 03 '20 15:01 GodloveD

I am experiencing this as I am trying to execute a model in a new environment using TensorFlow 2.0 and a GPU. I probably need to figure out another migration/upgrade approach.

ibrahimishag avatar Jan 07 '20 07:01 ibrahimishag

Francois Chollet: Keras v2.3.0 is the first release of Keras that brings keras in sync with tf.keras

It will be the last major release to support backends other than TensorFlow (i.e., Theano, CNTK, etc.)

And most importantly, deep learning practitioners should start moving to TensorFlow 2.0 and the tf.keras package

People who are starting new projects are going to use TensorFlow 2. Is there a place or way to get this working with TF 2, perhaps a beta version that works with TF 2 without errors?

binodmainali avatar Jan 28 '20 05:01 binodmainali

Does anyone have a solution?

I'm trying to follow this tutorial:

https://cloud.google.com/tpu/docs/tutorials/bert?authuser=1

girottoma avatar Jan 30 '20 04:01 girottoma

There is now a BERT 2.x tutorial (runs TF 2.1) at https://cloud.google.com/tpu/docs/tutorials.

gmadrone avatar Jan 31 '20 01:01 gmadrone

That's good I guess, but unless I'm missing something (which I totally could be), it really doesn't solve the issue that the code in this repo won't work with TF 2, right?

GodloveD avatar Jan 31 '20 04:01 GodloveD

Right, this code needs TF 1.15. I was responding to girottoma's comment above. In https://github.com/google-research/bert/blob/master/README.md, there is a section "Fine-tuning with Cloud TPUs" that references the Cloud TPU BERT tutorial (there are now 2 of them) and a BERT colab at https://cloud.google.com/tpu/docs/tutorials/. The BERT 2.x tutorial uses TF 2.1 and the BERT 1.x tutorial uses TF 1.15 (this tutorial was broken, but will be fixed and republished next week).

gmadrone avatar Feb 01 '20 00:02 gmadrone

Haha! My fault. I knew I was missing something. 🤤

Thanks for clarification.

GodloveD avatar Feb 01 '20 03:02 GodloveD

There was a problem with the TF 1.15 BERT tutorial (https://cloud.google.com/tpu/docs/tutorials/bert) that was causing it to fail. It has been fixed and the BERT 1.x tutorial should run correctly now.

gmadrone avatar Feb 10 '20 18:02 gmadrone

Hey guys, I think I've got the same issue. I'm using tensorflow-gpu 1.15.0 and tensorflow 1.15.0, but when I train on the SQuAD dataset I get INFO:tensorflow:impossible example.

Anyone have a clue?

chouaib-benali avatar Feb 10 '20 19:02 chouaib-benali

Same issue using Google Colab. Any solution?

mirabdolbaghi2 avatar Mar 28 '20 03:03 mirabdolbaghi2

I fixed the issue in Google Colab by installing TensorFlow 1.15 instead of 2. I only get a warning.

!pip install tensorflow-gpu==1.15.0
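If you pin TensorFlow this way, it can help to fail fast when a notebook silently comes up with the wrong version. A small sketch of such a check (the function name is made up for illustration):

```python
def is_tf1_15(version: str) -> bool:
    """True if `version` is a 1.15.x release, which this repo expects."""
    return version.split(".")[:2] == ["1", "15"]

# Typical use right after `import tensorflow as tf`:
#   assert is_tf1_15(tf.__version__), "need TF 1.15.x, got " + tf.__version__
```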

shimafoolad avatar May 12 '20 10:05 shimafoolad

I am also facing the same issue --

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      1 import bert
----> 2 from bert import run_classifier
      3 from bert import optimization
      4 from bert import tokenization
      5 from tqdm import tqdm

~\bert\run_classifier.py in <module>
     23 import os
     24 from bert import modeling
---> 25 from bert import optimization
     26 from bert import tokenization
     27 import tensorflow as tf

~\bert\optimization.py in <module>
     85
     86
---> 87 class AdamWeightDecayOptimizer(tf.train.Optimizer):
     88   """A basic Adam optimizer that includes "correct" L2 weight decay."""
     89

AttributeError: module 'tensorflow._api.v2.train' has no attribute 'Optimizer'

SaurabhBhatia0211 avatar May 18 '20 14:05 SaurabhBhatia0211

Same issue using Google Colab. Any solution?

Hi, did you get a reply on how to solve this issue?

Tarun3679 avatar May 21 '20 12:05 Tarun3679

I am also facing the same issue --

AttributeError: module 'tensorflow._api.v2.train' has no attribute 'Optimizer' (same traceback as above)

This can be solved by changing tf.train.Optimizer to tf.compat.v1.train.Optimizer.

Tarun3679 avatar May 22 '20 16:05 Tarun3679

Just in case this helps someone: one thing that helped me solve this issue was to ensure python==2.x (note: not 3.x, which is the default these days) and tensorflow==1.15.0.

mithunpaul08 avatar May 28 '20 07:05 mithunpaul08

Following the suggestion above to change tf.train.Optimizer to its tf.compat.v1 equivalent, I got the error below.

Traceback (most recent call last):
  File "run_classifier.py", line 29, in <module>
    flags = tf.flags
AttributeError: module 'tensorflow' has no attribute 'flags'

That triggered a bunch of other function errors after changing flags = tf.flags to tf.compat.v1.flags, so I went ahead and made the following global change, which is probably equivalent to pinning the TF version to 1.15.0:

tf = tensorflow.compat.v1
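A sketch of that global alias as it would sit at the top of a script (guarded here so the snippet also loads where TensorFlow is absent; on TF 2.x the alias exposes the whole TF 1.x API under the usual tf name):

```python
try:
    # On TF 2.x this imports the TF 1.x compatibility API as `tf`.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()  # restore the graph-mode/session semantics this repo assumes
except ImportError:
    tf = None  # TensorFlow not installed; nothing to alias
```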

Then the following error occurred. (The same error happens when setting tf version to 1.15.)

AttributeError: module 'bert.tokenization' has no attribute 'validate_case_matches_checkpoint'

bert.tokenization does seem to have this function defined, though. Any thoughts on how to fix this?

sshen2020 avatar Jul 10 '20 18:07 sshen2020

change class AdamWeightDecayOptimizer(tf.train.Optimizer) to class AdamWeightDecayOptimizer(tf.compat.v1.train.Optimizer)

this could work!!

Sullivan6fu avatar Mar 03 '21 16:03 Sullivan6fu

I have the same problem.

kuvaibhav avatar Mar 07 '21 11:03 kuvaibhav

class AdamWeightDecayOptimizer(tf.compat.v1.train.Optimizer):

iamshuvra avatar Apr 22 '22 08:04 iamshuvra

Just change the code from optimizer=tf.train.AdamOptimizer() to optimizer='adam'.
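Note this only applies if you are building your own tf.keras model; it does not fix this repo's custom AdamWeightDecayOptimizer. A sketch under that assumption (guarded so it also loads where TensorFlow is not installed):

```python
try:
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    # The string 'adam' asks Keras to construct its own Adam optimizer,
    # sidestepping the removed tf.train.AdamOptimizer entirely.
    model.compile(optimizer="adam", loss="mse")
    compiled = True
except ImportError:
    compiled = False  # TensorFlow absent; nothing to compile
```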

jungengzhou avatar May 27 '22 09:05 jungengzhou