tensorflow-wavenet

No checkpoint found, despite running previously

Open drdeaton opened this issue 7 years ago • 4 comments

Whenever I run train.py, I get the following message:

Trying to restore saved checkpoints from ./logdir/train/2018-07-24T21-45-30 ... No checkpoint found.
files length: 444

It then continues, starting anew from step 0. How can I get it to continue where it left off?
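For context, the restore step in a TF1-style training script presumably looks something like the sketch below (this is my own minimal sketch, not necessarily the repo's exact code; the load helper and variable names are assumptions). The important part is that a checkpoint is only searched for inside the directory that gets passed in:

import tensorflow as tf

def load(saver, sess, logdir):
    # Looks for the 'checkpoint' index file inside logdir.
    ckpt = tf.train.get_checkpoint_state(logdir)
    if ckpt and ckpt.model_checkpoint_path:
        # Recover the global step from the checkpoint filename, e.g. model.ckpt-3500.
        step = int(ckpt.model_checkpoint_path.split('-')[-1])
        saver.restore(sess, ckpt.model_checkpoint_path)
        return step  # resume from the saved step
    print('No checkpoint found.')
    return None  # caller starts training from step 0

Since the default log directory name embeds the start time (here 2018-07-24T21-45-30), every new run searches a brand-new, empty directory, so "No checkpoint found." is expected unless the script is pointed at the previous run's directory (train.py seems to accept a --logdir / --restore_from style option for this, but check python train.py --help to confirm).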

Also, it's probably worth mentioning that I get this whenever it saves a checkpoint:

Storing checkpoint to ./logdir/train/2018-07-24T21-45-30 ...WARNING:tensorflow:Issue encountered when serializing trainable_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'filter_bias' has type str, but expected one of: int, long, bool
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'filter_bias' has type str, but expected one of: int, long, bool
 Done.

I don't think that warning is the issue, because when I open the checkpoint file in gedit it looks like a valid cfg-style file, but it could be to blame.
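A quick way to check what TensorFlow actually sees in that directory (path taken from the log above) is:

import tensorflow as tf
# Prints the newest checkpoint prefix found in the directory, or None if there is none.
print(tf.train.latest_checkpoint('./logdir/train/2018-07-24T21-45-30'))

Note that the 'checkpoint' file itself is only a small text index (which is why it looks like a cfg file in gedit); the actual weights live in the accompanying .data-* and .index files (plus a .meta graph file), so those need to be present as well.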

drdeaton avatar Jul 25 '18 02:07 drdeaton

I did the same and don't have this problem; I have now trained it for 3500 steps.

Yaque avatar Aug 04 '18 01:08 Yaque

@Dysproh did you solve the problem?

solmn1 avatar Sep 28 '18 07:09 solmn1

@Dysproh Has a solution been found for this problem?

Crystaldias avatar Feb 02 '19 13:02 Crystaldias

I am also running into this problem, and I'm surprised it hasn't been fixed yet.

neko-is-kitty avatar Dec 23 '21 20:12 neko-is-kitty