
RuntimeError: hit nan for variance_normalized

Open · gcp opened this issue 3 years ago · 7 comments

Calling Ranger21 with mostly default parameters:

    optimizer = ranger21.Ranger21(
        net.parameters(), lr=0.001, num_epochs=50, weight_decay=1e-5,
        num_batches_per_epoch=len(train_loader)
    )

Training seems fine for half a day with decent progress on all loss metrics, but then halts:

File "./train_pt.py", line 727, in <module>
    main(sys.argv[1:])
  File "./train_pt.py", line 612, in main
    optimizer.step()
  File "/home/morbo/git/sjeng/train/venv19/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/morbo/git/sjeng/train/venv19/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/morbo/git/Ranger21/ranger21/ranger21.py", line 714, in step
    raise RuntimeError("hit nan for variance_normalized")
RuntimeError: hit nan for variance_normalized

gcp commented on Aug 31, 2021

I'm also seeing this.

swarmt commented on Sep 12, 2021

To be fair, I'm also seeing this on Facebook's MADGRAD now, so I wonder if Adam/MADGRAD are just more likely to trigger this kind of divergence, or if a bug slipped into the training data.

Basically one of the loss values goes NaN, and this causes the optimizer to fail instantly (I guess SGD just recovers if that happens).
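
For reference, a minimal guard in the training loop (just a sketch; `criterion` and the loop structure are placeholders from a generic setup, not anything Ranger21-specific) keeps a bad batch from ever reaching the optimizer:

    import torch

    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(net(inputs), targets)

        # If the loss has already diverged, skip this update; otherwise the
        # NaN ends up in the optimizer state and trips the
        # "hit nan for variance_normalized" check on a later step.
        if not torch.isfinite(loss):
            print("non-finite loss, skipping batch")
            continue

        loss.backward()
        optimizer.step()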

gcp commented on Sep 13, 2021

Reducing my learning rate solved it.

swarmt commented on Sep 13, 2021

I've had the same issue. Reducing the learning rate did help, but I'm at 1e-5 with default parameters, and 1e-6 with MADGRAD still gave NaN loss values. Curious if there's something else I can do.

TomStarshak commented on Sep 20, 2021

I've just hit it too :(

dnhkng commented on Sep 24, 2021

I found my error. I had some training data with values way outside my expected range of 0-1, which I found by adding an assert in my dataloader.
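
For anyone else hitting this, a wrapper along these lines (just a sketch; `RangeCheckedDataset` is an illustrative name, and it assumes the inputs are tensors expected to lie in [0, 1]) does the same check without modifying the original dataset:

    import torch
    from torch.utils.data import Dataset

    class RangeCheckedDataset(Dataset):
        """Wraps an existing dataset and validates each sample it yields."""

        def __init__(self, base):
            self.base = base

        def __len__(self):
            return len(self.base)

        def __getitem__(self, idx):
            x, y = self.base[idx]
            # Inputs are expected to lie in [0, 1]; a sample outside that range
            # otherwise only surfaces much later as a NaN loss.
            assert torch.all((x >= 0.0) & (x <= 1.0)), f"input out of range at index {idx}"
            return x, y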

swarmt commented on Sep 29, 2021

I integrated Ranger21 into https://github.com/glinscott/nnue-pytorch and am exploring different parameters. I'm always hitting this issue after the first step of training.

This is what I'm using:

    optimizer = ranger21.Ranger21(
        train_params,
        lr=8.75e-4, betas=(0.9, 0.999), eps=1.0e-7,
        using_gc=False, using_normgc=False,
        weight_decay=0,
        num_batches_per_epoch=int(self.epoch_size / self.batch_size),
        num_epochs=self.max_epochs,
        warmdown_active=False, use_warmup=False,
        use_adaptive_gradient_clipping=False,
        softplus=False,
        use_madgrad=True,
        pnm_momentum_factor=0.0,
    )

Changing lr, eps, weight_decay, use_adaptive_gradient_clipping, and use_warmup appears to have no effect. The NaN comes from the forward pass in the second step, so some weights become NaN. The Adam and AdaBelief cores work fine.
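
A quick way to confirm which tensors go bad (a sketch; `model` is a placeholder for the network being trained) is to scan the parameters right after the step:

    import torch

    optimizer.step()

    # Report any parameter that picked up NaN/Inf during the update, before
    # the next forward pass spreads it through the whole network.
    for name, param in model.named_parameters():
        if not torch.isfinite(param).all():
            print(f"non-finite values in {name} after step")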

Sopel97 commented on Mar 17, 2022