
Getting NaN loss after some time with DeepAR and NegativeBinomial

Open fernandocamargoai opened this issue 4 years ago • 47 comments

Description

I have a dataset with 129k time series of demand for products within some branches of a retailer. I've had trouble with NaN loss ever since I started using GluonTS. At first, I discovered some NaN and Infinity values within my FEAT_DYNAMIC_REAL. But even after solving those issues, I'm still getting NaN loss after some time. With the new release (which included some improvements to the NegativeBinomial), I'm getting a much lower loss value and it takes longer for the problem to occur, but it still occurs.

I decided to put a conditional breakpoint in log_prob() to stop whenever a NaN value is generated, and used hybridize=False. It's super strange, but the error hasn't happened yet (it's far past the point where it usually happens). I'll wait some more time, but could it be the case that it only happens with hybridize=True? If so, that would be very odd and problematic, since training runs much slower with hybridize=False.
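
For reference, the hybridize=False switch is just a Trainer argument, and the NaN check is a small wrapper around NegativeBinomial.log_prob. This is an illustrative sketch of that setup, not my exact breakpoint code:

import pdb

from gluonts.distribution.neg_binomial import NegativeBinomial
from gluonts.trainer import Trainer

# Train in imperative (non-hybridized) mode so ordinary breakpoints work
# inside the distribution code.
trainer = Trainer(learning_rate=1e-4, clip_gradient=1.0, hybridize=False)

# Illustrative NaN trap: wrap log_prob and stop as soon as a NaN shows up.
_orig_log_prob = NegativeBinomial.log_prob

def log_prob_with_nan_check(self, x):
    ll = _orig_log_prob(self, x)
    if (ll != ll).sum().asscalar() > 0:  # NaN is the only value != itself
        pdb.set_trace()  # inspect self.mu, self.alpha and x here
    return ll

NegativeBinomial.log_prob = log_prob_with_nan_check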

To Reproduce

Unfortunately, I can't provide the dataset to reproduce the problem, but here is some information (a sketch of the corresponding estimator configuration follows the list):

  • Number of Time Series: 128,976
  • TARGET: Count values, with a lot of zeros
  • FEAT_STATIC_CAT: I'm using categorical features that identify the product (its id, its category, subcategory, and so on) and the branch (its id, its city, state, and so on).
  • FEAT_DYNAMIC_REAL: I'm using holidays in one-hot encoding, a binary identifying if there's a special price and the relative discount (calculated using the discount and the price).
  • batch_size: 32
  • num_batches_per_epoch: 10000
  • learning_rate: 1e-4
  • clip_gradient: 1.0
  • patience: 10
  • minimum_learning_rate: 5e-5
  • weight_decay: 1e-8
  • num_layers: 2
  • num_cells: 100
  • embedding_dimension: [2, 7, 20, 50, 10, 30, 60, 80, 200]
  • context_length: 90
  • scaling: True
  • freq: 1D
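
For reference, these settings map roughly onto the following estimator configuration (a sketch, not my exact code; prediction_length and the cardinality list are placeholders, since they're not listed above):

from gluonts.distribution import NegativeBinomialOutput
from gluonts.model.deepar import DeepAREstimator
from gluonts.trainer import Trainer

estimator = DeepAREstimator(
    freq="1D",
    prediction_length=30,      # placeholder: not listed above
    context_length=90,
    num_layers=2,
    num_cells=100,
    use_feat_static_cat=True,
    use_feat_dynamic_real=True,
    cardinality=[10] * 9,      # placeholder: one entry per FEAT_STATIC_CAT column
    embedding_dimension=[2, 7, 20, 50, 10, 30, 60, 80, 200],
    distr_output=NegativeBinomialOutput(),
    scaling=True,
    trainer=Trainer(
        batch_size=32,
        num_batches_per_epoch=10000,
        learning_rate=1e-4,
        clip_gradient=1.0,
        patience=10,
        minimum_learning_rate=5e-5,
        weight_decay=1e-8,
    ),
)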

Error message or code output

Here is one example where it happened in the first epoch. But sometimes it happened later.

Traceback (most recent call last):
  File "/opt/anaconda3/envs/gazin/lib/python3.6/site-packages/luigi/worker.py", line 199, in run
    new_deps = self._run_get_new_deps()
  File "/opt/anaconda3/envs/gazin/lib/python3.6/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
    task_gen = self.task.run()
  File "/root/work/log_inteligente_ai/gazin/task/model/base.py", line 164, in run
    dict(fit=self.fit, cross_validation=self.cross_validate)[self.training_type]()
  File "/root/work/log_inteligente_ai/gazin/task/model/gluonts.py", line 425, in fit
    predictor = self.train_model()
  File "/root/work/log_inteligente_ai/gazin/task/model/gluonts.py", line 416, in train_model
    train_iter=training_data_loader,
  File "/opt/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/trainer/_base.py", line 328, in __call__
    "Got NaN in first epoch. Try reducing initial learning rate."
gluonts.core.exception.GluonTSUserError: Got NaN in first epoch. Try reducing initial learning rate.

Environment

  • Operating system: Linux
  • Python version: 3.6
  • GluonTS version: 0.5.0

fernandocamargoai avatar May 18 '20 14:05 fernandocamargoai

Maybe lower num_batches_per_epoch? I guess learning_rate is already low, but num_batches_per_epoch is high, so during the first epoch of training the distribution arguments may have run into numerical problems when calculating the log prob.

samosun avatar May 19 '20 03:05 samosun

@samosun, my logic in increasing num_batches_per_epoch is that each time series should have a more or less equal chance of having a slice of it included in each epoch. I mean, with a batch size of 32, 128976 / 32 = 4030.5. I'm actually using num_batches_per_epoch=4000 right now (that number came from when I had 300k+ time series; now I'm filtering some of them out). Doesn't that make sense? Because I imagine that with a small num_batches_per_epoch, a given time series could appear once in the first epoch and then appear again only several epochs later, which doesn't make sense to me.

And I don't understand how it would create numerical problems. Isn't the loss computed per batch anyway? In my understanding, gradient descent doesn't even "know" about the concept of epochs. Only we know, because it's when we take the avg_loss and it's a possible point for stopping the training.

fernandocamargoai avatar May 19 '20 10:05 fernandocamargoai

In my understanding, gradient descent doesn't even "know" about the concept of epochs.

Yes, I think you are right. num_batches_per_epoch is not the root cause. I thought that if the arguments mu, alpha of NegativeBinomial were projected onto "boundary values" like 0, calculating the log prob would fail, but I noticed that they had already solved this problem by adding an epsilon, see https://github.com/awslabs/gluon-ts/blob/master/src/gluonts/distribution/neg_binomial.py#L114

so there may be other mysteries to be found if it's reproducible

import mxnet as mx
from gluonts.distribution import NegativeBinomialOutput

distr_output = NegativeBinomialOutput()
distr = distr_output.distribution(
    (mx.nd.array([0]), # if mu=0
     mx.nd.array([1]))
)
x = mx.nd.array([0])
ll = distr.log_prob(x)
print(ll) # output nan

samosun avatar May 19 '20 13:05 samosun

@samosun, I think I can confirm that it only happens with hybridize=True. I left it training with hybridize=False and it's in the 22nd epoch now without NaN. Also, the model converges much faster with hybridize=False (0.147 at the first epoch vs 0.28 at the 4th epoch, after which the NaN happened). There shouldn't be any difference besides speed with hybridize, but it seems that something else is happening.

fernandocamargoai avatar May 19 '20 15:05 fernandocamargoai

@fernandocamargoti are you using mxnet 1.6.0? And the DeepAREstimator?

It’s very weird indeed that this shows up with hybridize=True only. One place where things could go wrong is this: https://github.com/awslabs/gluon-ts/blob/0d963f7dc55ef866d86e33633a28d57dfab33adb/src/gluonts/distribution/neg_binomial.py#L114 but still, this should happen with and without hybridization, unless there is some subtle bug in mxnet.

lostella avatar May 20 '20 06:05 lostella

TARGET: Count values, with a lot of zeros

What’s the fraction of zero values, roughly? It would be nice if we could reproduce the issue.

lostella avatar May 20 '20 06:05 lostella

@lostella, yes, I'm using DeepAREstimator with mxnet 1.6.0. I'll get the fraction and get back to you today.

Look at how much faster the training with hybridize=False converges:

100%|###################################################################################################################################################################| 4000/4000 [58:16<00:00,  1.14it/s, epoch=1/4, avg_epoch_loss=0.219]
2020-05-19 21:53:41,841 : INFO : Epoch[0] Elapsed time 3496.287 seconds
2020-05-19 21:53:41,842 : INFO : Epoch[0] Evaluation metric 'epoch_loss'=0.218906
2020-05-19 21:53:41,853 : INFO : Epoch[1] Learning rate is 0.0001
100%|###################################################################################################################################################################| 4000/4000 [59:31<00:00,  1.12it/s, epoch=2/4, avg_epoch_loss=0.164]
2020-05-19 22:53:13,262 : INFO : Epoch[1] Elapsed time 3571.409 seconds
2020-05-19 22:53:13,262 : INFO : Epoch[1] Evaluation metric 'epoch_loss'=0.164086
2020-05-19 22:53:13,273 : INFO : Epoch[2] Learning rate is 0.0001
100%|###################################################################################################################################################################| 4000/4000 [59:35<00:00,  1.12it/s, epoch=3/4, avg_epoch_loss=0.155]
2020-05-19 23:52:48,752 : INFO : Epoch[2] Elapsed time 3575.479 seconds
2020-05-19 23:52:48,752 : INFO : Epoch[2] Evaluation metric 'epoch_loss'=0.154580
2020-05-19 23:52:48,761 : INFO : Epoch[3] Learning rate is 0.0001
100%|###################################################################################################################################################################| 4000/4000 [59:29<00:00,  1.12it/s, epoch=4/4, avg_epoch_loss=0.151]
2020-05-20 00:52:18,165 : INFO : Epoch[3] Elapsed time 3569.404 seconds
2020-05-20 00:52:18,166 : INFO : Epoch[3] Evaluation metric 'epoch_loss'=0.150852
2020-05-20 00:52:18,180 : INFO : Loading parameters from best epoch (3)
2020-05-20 00:52:18,189 : INFO : Final loss: 0.15085175694245845 (occurred at epoch 3)

Compared to the same training (every single hyperparameter is equal and I have seeded random, numpy.random and mxnet) with hybridize=True:

100%|██████████| 4000/4000 [22:33<00:00,  2.95it/s, epoch=1/100, avg_epoch_loss=0.348]
2020-05-19 18:29:27,221 : INFO : Epoch[0] Elapsed time 1353.823 seconds
2020-05-19 18:29:27,223 : INFO : Epoch[0] Evaluation metric 'epoch_loss'=0.347671
2020-05-19 18:29:27,245 : INFO : Epoch[1] Learning rate is 0.0001
100%|██████████| 4000/4000 [34:43<00:00,  1.92it/s, epoch=2/100, avg_epoch_loss=0.289]
2020-05-19 19:04:10,699 : INFO : Epoch[1] Elapsed time 2083.454 seconds
2020-05-19 19:04:10,699 : INFO : Epoch[1] Evaluation metric 'epoch_loss'=0.289498
2020-05-19 19:04:10,706 : INFO : Epoch[2] Learning rate is 0.0001
100%|██████████| 4000/4000 [34:49<00:00,  1.91it/s, epoch=3/100, avg_epoch_loss=0.281]
2020-05-19 19:39:00,636 : INFO : Epoch[2] Elapsed time 2089.929 seconds
2020-05-19 19:39:00,636 : INFO : Epoch[2] Evaluation metric 'epoch_loss'=0.280888
2020-05-19 19:39:00,643 : INFO : Epoch[3] Learning rate is 0.0001
100%|██████████| 4000/4000 [44:04<00:00,  1.51it/s, epoch=4/100, avg_epoch_loss=0.275]
2020-05-19 20:23:05,000 : INFO : Epoch[3] Elapsed time 2644.357 seconds
2020-05-19 20:23:05,000 : INFO : Epoch[3] Evaluation metric 'epoch_loss'=0.274745
2020-05-19 20:23:05,007 : INFO : Epoch[4] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:46<00:00,  1.46it/s, epoch=5/100, avg_epoch_loss=0.273]
2020-05-19 21:08:51,684 : INFO : Epoch[4] Elapsed time 2746.677 seconds
2020-05-19 21:08:51,684 : INFO : Epoch[4] Evaluation metric 'epoch_loss'=0.273488
2020-05-19 21:08:51,690 : INFO : Epoch[5] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:50<00:00,  1.45it/s, epoch=6/100, avg_epoch_loss=0.267]
2020-05-19 21:54:41,773 : INFO : Epoch[5] Elapsed time 2750.083 seconds
2020-05-19 21:54:41,773 : INFO : Epoch[5] Evaluation metric 'epoch_loss'=0.266905
2020-05-19 21:54:41,779 : INFO : Epoch[6] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:00<00:00,  1.45it/s, epoch=7/100, avg_epoch_loss=0.269]
2020-05-19 22:40:42,483 : INFO : Epoch[6] Elapsed time 2760.703 seconds
2020-05-19 22:40:42,483 : INFO : Epoch[6] Evaluation metric 'epoch_loss'=0.268658
2020-05-19 22:40:42,483 : INFO : Epoch[7] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:41<00:00,  1.46it/s, epoch=8/100, avg_epoch_loss=0.263]
2020-05-19 23:26:24,179 : INFO : Epoch[7] Elapsed time 2741.696 seconds
2020-05-19 23:26:24,179 : INFO : Epoch[7] Evaluation metric 'epoch_loss'=0.263290
2020-05-19 23:26:24,185 : INFO : Epoch[8] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:52<00:00,  1.45it/s, epoch=9/100, avg_epoch_loss=0.266]
2020-05-20 00:12:16,630 : INFO : Epoch[8] Elapsed time 2752.445 seconds
2020-05-20 00:12:16,630 : INFO : Epoch[8] Evaluation metric 'epoch_loss'=0.265814
2020-05-20 00:12:16,630 : INFO : Epoch[9] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:12<00:00,  1.44it/s, epoch=10/100, avg_epoch_loss=0.262]
2020-05-20 00:58:29,082 : INFO : Epoch[9] Elapsed time 2772.452 seconds
2020-05-20 00:58:29,082 : INFO : Epoch[9] Evaluation metric 'epoch_loss'=0.262476
2020-05-20 00:58:29,088 : INFO : Epoch[10] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:49<00:00,  1.45it/s, epoch=11/100, avg_epoch_loss=0.263]
2020-05-20 01:44:18,254 : INFO : Epoch[10] Elapsed time 2749.166 seconds
2020-05-20 01:44:18,254 : INFO : Epoch[10] Evaluation metric 'epoch_loss'=0.262678
2020-05-20 01:44:18,254 : INFO : Epoch[11] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:42<00:00,  1.46it/s, epoch=12/100, avg_epoch_loss=0.264]
2020-05-20 02:30:00,600 : INFO : Epoch[11] Elapsed time 2742.346 seconds
2020-05-20 02:30:00,600 : INFO : Epoch[11] Evaluation metric 'epoch_loss'=0.264322
2020-05-20 02:30:00,600 : INFO : Epoch[12] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:06<00:00,  1.45it/s, epoch=13/100, avg_epoch_loss=0.261]
2020-05-20 03:16:06,984 : INFO : Epoch[12] Elapsed time 2766.384 seconds
2020-05-20 03:16:06,984 : INFO : Epoch[12] Evaluation metric 'epoch_loss'=0.261147
2020-05-20 03:16:06,990 : INFO : Epoch[13] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:48<00:00,  1.46it/s, epoch=14/100, avg_epoch_loss=0.267]
2020-05-20 04:01:55,199 : INFO : Epoch[13] Elapsed time 2748.209 seconds
2020-05-20 04:01:55,200 : INFO : Epoch[13] Evaluation metric 'epoch_loss'=0.266566
2020-05-20 04:01:55,200 : INFO : Epoch[14] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:56<00:00,  1.45it/s, epoch=15/100, avg_epoch_loss=0.262]
2020-05-20 04:47:51,212 : INFO : Epoch[14] Elapsed time 2756.012 seconds
2020-05-20 04:47:51,212 : INFO : Epoch[14] Evaluation metric 'epoch_loss'=0.261856
2020-05-20 04:47:51,212 : INFO : Epoch[15] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:09<00:00,  1.44it/s, epoch=16/100, avg_epoch_loss=0.26]
2020-05-20 05:34:00,381 : INFO : Epoch[15] Elapsed time 2769.169 seconds
2020-05-20 05:34:00,381 : INFO : Epoch[15] Evaluation metric 'epoch_loss'=0.260374
2020-05-20 05:34:00,387 : INFO : Epoch[16] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:02<00:00,  1.45it/s, epoch=17/100, avg_epoch_loss=0.262]
2020-05-20 06:20:02,540 : INFO : Epoch[16] Elapsed time 2762.153 seconds
2020-05-20 06:20:02,540 : INFO : Epoch[16] Evaluation metric 'epoch_loss'=0.262312
2020-05-20 06:20:02,540 : INFO : Epoch[17] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:05<00:00,  1.45it/s, epoch=18/100, avg_epoch_loss=0.26]
2020-05-20 07:06:08,361 : INFO : Epoch[17] Elapsed time 2765.821 seconds
2020-05-20 07:06:08,361 : INFO : Epoch[17] Evaluation metric 'epoch_loss'=0.260257
2020-05-20 07:06:08,369 : INFO : Epoch[18] Learning rate is 0.0001
  0%|          | 13/4000 [00:10<54:16,  1.22it/s, epoch=19/100, avg_epoch_loss=0.269]2020-05-20 07:06:24,829 : WARNING : Epoch[18] gave nan loss
  0%|          | 13/4000 [00:16<1:24:07,  1.27s/it, epoch=19/100, avg_epoch_loss=0.271]
2020-05-20 07:06:24,829 : INFO : Loading parameters from best epoch (17)
2020-05-20 07:06:24,841 : INFO : Epoch[19] Learning rate is 5e-05
  0%|          | 0/4000 [00:00<?, ?it/s]2020-05-20 07:06:26,563 : WARNING : Epoch[19] gave nan loss
  0%|          | 0/4000 [00:01<?, ?it/s, epoch=20/100, avg_epoch_loss=0.232]
2020-05-20 07:06:26,564 : INFO : Stopping training
2020-05-20 07:06:26,564 : INFO : Loading parameters from best epoch (17)
2020-05-20 07:06:26,570 : INFO : Final loss: 0.2602571310577914 (occurred at epoch 17)
2020-05-20 07:06:26,576 : INFO : End model training

fernandocamargoai avatar May 20 '20 10:05 fernandocamargoai

Hello, @lostella. Sorry for taking this long.

I have 132,131,585 zeros out of 143,037,878 values (92.37%).

To give you more information, this dataset is from a retailer with many branches and many products. I have all the sales registered for each item in each branch. Some of these items might be discontinued (like smartphones) after some time. So, each time series may start on a different date, but I've normalized all of them to finish on the same date. So, despite the fact that it's really normal for these time series to have most of their values equal to zero, there are also those that will never sell anything again, which contribute to the zeros.
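
For reference, that kind of fraction can be computed with a simple pass over the data (a sketch; `dataset` here is assumed to be any iterable of GluonTS data entries):

import numpy as np

total_values = 0
zero_values = 0
for entry in dataset:  # assumption: iterable of GluonTS data entries
    target = np.asarray(entry["target"])
    total_values += target.size
    zero_values += int((target == 0).sum())

print("%d out of %d (%.2f%%) values are zero"
      % (zero_values, total_values, 100.0 * zero_values / total_values))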

fernandocamargoai avatar May 22 '20 17:05 fernandocamargoai

Did you try switching back to mxnet-cu101mkl==1.4.1? I think with that mxnet version you can leave out the hybridize=False option.

I just stumbled on this on my quest to get DeepAR converging somewhere in the M5 Kaggle challenge ... maybe it helps you @fernandocamargoti?

In case this works for you too, something between mxnet 1.4.1 and 1.6 broke ...

matthiasanderer avatar May 23 '20 12:05 matthiasanderer

So, each time series may start on a different date, but I've normalized all of them to finish on the same date.

@fernandocamargoti as a side note: that's not strictly needed, and may bias your predictions towards zero. Are you somehow indicating which values you have manually padded, e.g. by adding a binary feature for it?
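
Something along these lines would do it (a hypothetical sketch: mark the zero-padded tail of each series with an extra dynamic feature):

import numpy as np

def add_padding_indicator(entry: dict, num_padded: int) -> dict:
    """Hypothetical helper: append a binary feature that is 0 on the
    manually padded tail of the series and 1 on real observations."""
    target = np.asarray(entry["target"])
    is_real = np.ones(len(target), dtype=np.float32)
    if num_padded > 0:
        is_real[-num_padded:] = 0.0
    feats = entry.get("feat_dynamic_real")
    entry["feat_dynamic_real"] = (
        is_real[None, :] if feats is None else np.vstack([np.asarray(feats), is_real])
    )
    return entry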

lostella avatar May 23 '20 12:05 lostella

@fernandocamargoti also a minor note: the logs you pasted for hybridize=True suggest that you're using num_batches_per_epoch=4000, but you mentioned 10_000 (still, it seems to converge too slowly even assuming that you used 10_000 for hybridize=False).

lostella avatar May 23 '20 12:05 lostella

@matthiasanderer, I'm running on IBM Power 9. If I'm not wrong, MKL is for Intel Processors.

fernandocamargoai avatar May 28 '20 18:05 fernandocamargoai

@lostella, thanks for your side note. But I also want my predictions to indicate whether a time series will "die" (the product will lose all its demand). I haven't set up any binary feature to indicate it. If I had, how would I fill this feature in production? I mean, a product's time series starts dying, and at some point, no more sales are made. It would be hard to decide when to set this flag to True in production.

fernandocamargoai avatar May 28 '20 18:05 fernandocamargoai

@fernandocamargoti also a minor note: the logs you pasted for hybridize=True suggest that you're using num_batches_per_epoch=4000, but you mentioned 10_000 (still, it seems to converge too slowly even assuming that you used 10_000 for hybridize=False).

Yeah, it's another version where I filter time series by a minimum number of days with sales. Previously, I filtered with 10 and I increased it to 30 to train faster. Then, I updated num_batches_per_epoch accordingly. Notice, though, that both logs show 4000.
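
(The filter itself is essentially this, sketched over a list of GluonTS entries; `dataset` is an assumption here:)

import numpy as np

min_days_with_sales = 30  # previously 10

filtered_dataset = [
    entry for entry in dataset  # assumption: iterable of GluonTS data entries
    if int((np.asarray(entry["target"]) > 0).sum()) >= min_days_with_sales
]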

fernandocamargoai avatar May 28 '20 18:05 fernandocamargoai

Oh I see, I hadn’t scrolled all the way to the right in the first log, sorry. I also agree on the zero padding you’re doing, since that’s a behavior you want to model 👍🏻

lostella avatar May 28 '20 18:05 lostella

@matthiasanderer, I'm running on IBM Power 9. If I'm not wrong, MKL is for Intel Processors.

Sorry, that was just my copy-paste; of course, only version 1.4.1 is the relevant part ... I just used that cuda10.1/mkl build ...

matthiasanderer avatar May 29 '20 07:05 matthiasanderer

@lostella, I think I've got a way for you guys to reproduce the problem. The same thing happens on m5 dataset (https://www.kaggle.com/allunia/m5-uncertainty).

To make it easier, I've already preprocessed the dataset, putting in the GluonTS format. It's available here: https://github.com/fernandocamargoti/m5_competition_gluonts_dataset

The cardinality is [5, 12, 5, 9, 3051]. It's always the number of categories plus 2, because I leave the first index for unknown values and the second for None (there are some aggregated time series where some categorical variables aren't defined).

Also, I've been using my own implementation of Dataset instead of FileDataset from GluonTS. The reason is that FileDataset was reading all the files to calculate len() (which is called at the start of training), and that was taking a very long time. So, I wanted to start training and have each file loaded only when necessary. To calculate len(), I simply count the number of files without reading them. Here is my implementation: https://gist.github.com/fernandocamargoti/0f9c0b390ec44e0239a835cff91ae85c
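
The core idea is roughly the following (a simplified sketch of what's in the gist, not the full implementation):

import glob
import json
from pathlib import Path

class LazyJsonDataset:
    """Counts files for len(), but only reads and parses them when iterated."""

    def __init__(self, path: str):
        self.files = sorted(glob.glob(str(Path(path) / "*.json")))

    def __len__(self) -> int:
        # No file is opened here, so training can start immediately.
        return len(self.files)

    def __iter__(self):
        for file_path in self.files:
            with open(file_path) as f:
                yield json.load(f)  # one GluonTS data entry per file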

To separate the train and test set, I use:

self._train_dataset = SameSizeTransformedDataset(
    self.dataset,
    transformations=[FilterTimeSeriesTransformation(start=0, end=split)],
)

self._test_dataset = SameSizeTransformedDataset(
    self.dataset,
    transformations=[FilterTimeSeriesTransformation(start=0, end=split + self.test_steps)],
)

The SameSizeTransformedDataset is also provided in the gist above. It's only used to avoid reading the whole dataset to calculate len(), allowing me to lazily load it during training.
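
For reference, the transformation essentially just slices each entry (a simplified sketch of the classes in the gist):

class FilterTimeSeriesTransformation:
    """Keeps only the time steps in [start, end) of each entry (sketch)."""

    def __init__(self, start: int, end: int):
        self.start = start
        self.end = end

    def __call__(self, entry: dict) -> dict:
        entry = dict(entry)  # shallow copy, leave the original untouched
        entry["target"] = entry["target"][self.start:self.end]
        if "feat_dynamic_real" in entry:
            entry["feat_dynamic_real"] = entry["feat_dynamic_real"][:, self.start:self.end]
        return entry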

fernandocamargoai avatar May 30 '20 12:05 fernandocamargoai

As suggested by @matthiasanderer, I've tried MXNet 1.4.1 and this weird bug didn't happen. I'll also try 1.5.1 and report back.

fernandocamargoai avatar Jun 01 '20 13:06 fernandocamargoai

The bug doesn't happen with MXNet 1.5.1. So, it's something that happened when 1.6.0 was introduced.

fernandocamargoai avatar Jun 02 '20 19:06 fernandocamargoai

@fernandocamargoti thanks for looking into it, this is really cool (but also not cool that the bug is there).

Did you observe that with both 1.4 and 1.5 the NaN didn’t show up and that convergence speed was unaffected by hybridize=True/False?

lostella avatar Jun 02 '20 20:06 lostella

@lostella, both converge much faster and the NaN doesn't happen at all. But I've noticed that a weird bug happens with 1.5:

Traceback (most recent call last):
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/luigi/worker.py", line 199, in run
    new_deps = self._run_get_new_deps()
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
    task_gen = self.task.run()
  File "/home/fernandocamargo/datascience_workspace/log_inteligente_ai/gazin/task/model/gluonts.py", line 685, in run
    test_forecast = cast(SampleForecast, next(iter(forecast_it)))
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/predictor.py", line 330, in predict
    num_samples=num_samples,
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/forecast_generator.py", line 197, in __call__
    outputs = prediction_net(*inputs).asnumpy()
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/gluon/block.py", line 548, in __call__
    out = self.forward(*args)
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/gluon/block.py", line 925, in forward
    return self.hybrid_forward(ndarray, x, *args, **params)
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/deepar/_network.py", line 608, in hybrid_forward
    begin_states=state,
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/deepar/_network.py", line 541, in sampling_decoder
    new_samples = distr.sample(dtype=self.dtype)
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/distribution/neg_binomial.py", line 98, in sample
    s, mu=self.mu, alpha=self.alpha, num_samples=num_samples
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/distribution/distribution.py", line 339, in _sample_multiple
    samples = sample_func(*args_expanded, **kwargs_expanded)
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/distribution/neg_binomial.py", line 94, in s
    x = F.minimum(F.random.gamma(r, theta), 1e6)
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/ndarray/random.py", line 383, in gamma
    [alpha, beta], shape, dtype, ctx, out, kwargs)
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/ndarray/random.py", line 38, in _random_helper
    return sampler(*params, shape=shape, dtype=dtype, out=out, **kwargs)
  File "<string>", line 68, in _sample_gamma
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
    ctypes.byref(out_stypes)))
  File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: vector::_M_range_insert

It happens both when I try to run the evaluation and when I load a model and try to use it. So, I'm sticking with 1.4.1 for now.

fernandocamargoai avatar Jun 02 '20 21:06 fernandocamargoai

@fernandocamargoti the issue you’re facing with mxnet 1.5 is a known one on Linux: https://github.com/apache/incubator-mxnet/issues/16135

This was fixed in 1.6 but the fix was not backported to 1.5 I believe. This is also why we skipped support for 1.5 in GluonTS, and directly jumped to 1.6

lostella avatar Jun 08 '20 16:06 lostella

@fernandocamargoti did you mention you were running on GPU? In that case this may be related: https://github.com/awslabs/gluon-ts/pull/728

lostella avatar Jun 08 '20 18:06 lostella

@lostella, it could definitely be related. But https://github.com/awslabs/gluon-ts/pull/728 only fixes the binned distribution, right? I guess the same problem happens with other distributions, like the NegativeBinomial that I'm using.

fernandocamargoai avatar Jun 15 '20 17:06 fernandocamargoai

I'm wondering whether this still shows up with MXNet 1.7? (since that fixed the bug concerning #728) @fernandocamargoti not sure you're able to verify this, otherwise we'll try to reproduce with your instructions above

lostella avatar Jan 12 '21 20:01 lostella

It still happens, @lostella. I've just updated both MXNet to 1.7 and GluonTS to 0.6.4 and I'm getting NaN loss quite early in the training (about 6 epochs in):

100%|██████████| 18030/18033 [30:26<00:00,  9.98it/s, epoch=12/30, avg_epoch_loss=nan]2021-01-12 18:47:44,035 : WARNING : Batch [18031] of Epoch[11] gave NaN loss and it will be ignored
100%|██████████| 18031/18033 [30:26<00:00,  9.89it/s, epoch=12/30, avg_epoch_loss=nan]2021-01-12 18:47:44,134 : WARNING : Batch [18032] of Epoch[11] gave NaN loss and it will be ignored
2021-01-12 18:47:44,231 : WARNING : Batch [18033] of Epoch[11] gave NaN loss and it will be ignored
100%|██████████| 18033/18033 [30:26<00:00,  9.87it/s, epoch=12/30, avg_epoch_loss=nan]
2021-01-12 18:47:44,232 : INFO : Epoch[11] Elapsed time 1826.756 seconds
2021-01-12 18:47:44,232 : INFO : Epoch[11] Evaluation metric 'epoch_loss'=nan
2021-01-12 18:47:44,233 : INFO : Stopping training
2021-01-12 18:47:44,233 : INFO : Computing averaged parameters.
2021-01-12 18:47:44,237 : INFO : Loading averaged parameters.
2021-01-12 18:47:44,249 : INFO : End model training

fernandocamargoai avatar Jan 15 '21 11:01 fernandocamargoai

I'm using the DeepAR as follows:

Trainer(
    ctx=self.ctx,
    epochs=30,
    batch_size=32,
    num_batches_per_epoch=len(self.train_dataset) // self.batch_size,
    learning_rate=0.001,
    learning_rate_decay_factor=0.5,
    patience=3,
    minimum_learning_rate=5e-05,
    clip_gradient=1.0,
    weight_decay=1e-08,
    hybridize=True,
)

DeepAREstimator(
    freq="D",
    prediction_length=90,
    trainer=self.get_trainer(),
    context_length=90,
    num_layers=2,
    num_cells=200,
    cell_type="lstm",
    dropout_rate=0.1,
    use_feat_dynamic_real=True,
    use_feat_static_cat=True,
    cardinality=self.get_cardinality(),
    embedding_dimension=[
        2,
        7,
        10,
        20,
        5,
        20,
        30,
        30,
        30,
        50,
        3,
        30
    ],
    distr_output=NegativeBinomialOutput(),
    scaling=True,
    lags_seq=None
)

It's trained for demand forecasting. I also have a variant that is trained to forecast weekly instead of daily. Both of them have the same error.

fernandocamargoai avatar Jan 15 '21 11:01 fernandocamargoai

Hi,

I am having a similar problem to yours. Each row is a store and each column is a time index; each row records the sales data of that store ID. Inside the dataset we have NaN and 0 values (as shown in the first picture), but we are not sure whether that is what causes the problem highlighted in the second picture. Or is it because we are using the negative binomial and it is not able to handle the NaNs?

[Attached screenshots: Screen Shot 2021-01-27 at 2 18 47 AM, IMG_9345]

Thank you in advance for any help!!!

herahuier avatar Jan 26 '21 18:01 herahuier

Any news about this, @lostella? I've been affected by this bug for a long time now.

fernandocamargoai avatar Jan 29 '21 12:01 fernandocamargoai

@fernandocamargoti we still have to reproduce the issue on master. I’m trying with the m5 data that you provided, but for debugging purposes it would be nice to have a smaller (faster) way to do that. Will update in case I can reproduce it.

lostella avatar Jan 29 '21 15:01 lostella