Lorenzo Stella
@shchur one observation: setting `hybridize=False` in the `Trainer` also makes the problem disappear.
Related:
- https://github.com/apache/incubator-mxnet/issues/16736
- https://github.com/apache/incubator-mxnet/issues/20702
@Poulami-Sarkar hi, thanks for raising this! I cannot reproduce the issue locally. For brevity, I'm running the example with `epochs = 3` in the estimator, and reducing the test...
@Poulami-Sarkar looks like there are some issues with multiprocessing. You can try two things to fix this. First, run the following at the top of your script/notebook:
```python
import multiprocessing
multiprocessing.set_start_method('fork')
```
...
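As a side note, `set_start_method('fork')` raises a `RuntimeError` if a start method was already set, and `'fork'` is only available on Unix-like systems. A minimal guarded sketch (the guard and `force=True` are my addition, not part of the original suggestion):

```python
import multiprocessing

# 'fork' is only available on Unix-like platforms; check before setting it.
# force=True overrides any start method set earlier in the process
# (guard logic is an assumption, not from the original comment).
if "fork" in multiprocessing.get_all_start_methods():
    multiprocessing.set_start_method("fork", force=True)

print(multiprocessing.get_start_method())
```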
Two early observations:
- the `gluonts.torch.distributions` module (with an `s`) is there (since yesterday: so close!) so you could use that
- I would avoid adding scripts to the `examples` folder,...
Hi @shubhashish1 it's very hard to understand from the screenshot where the error may be coming from. Could you please provide a running example, possibly with fake data, to reproduce...
This is nice and should make many of the classes (especially estimators) more concise. I'm wondering how much of a burden you think maintaining our own `dataclass` decorator will be?
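To make the maintenance question concrete, a hand-rolled `dataclass`-style decorator essentially means generating `__init__` from class annotations yourself. A minimal sketch of what such a decorator has to do (names and behavior are illustrative, not the proposed implementation):

```python
def simple_dataclass(cls):
    # Minimal sketch: build __init__ from class-level annotations,
    # honoring class attributes as defaults. Illustrates the kind of
    # machinery a custom dataclass decorator must maintain.
    fields = getattr(cls, "__annotations__", {})

    def __init__(self, **kwargs):
        for name in fields:
            if name in kwargs:
                setattr(self, name, kwargs[name])
            elif hasattr(cls, name):
                # fall back to the class-level default value
                setattr(self, name, getattr(cls, name))
            else:
                raise TypeError(f"missing argument: {name}")

    cls.__init__ = __init__
    return cls

@simple_dataclass
class EstimatorConfig:  # hypothetical example class
    prediction_length: int
    freq: str = "H"

c = EstimatorConfig(prediction_length=24)
```

Even this toy version omits `__repr__`, `__eq__`, inheritance, and validation, which is where the long-term maintenance cost tends to accumulate.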
Thanks for starting this! (And for the related deep dive in #1936!)
> The changes "just work" on my setup.

I understand the changes to the splitters, which make room...
Thanks @karthickgopalswamy! In practice this may not constitute an issue, but I agree this could be made more robust. I think the fix could be the same as it’s done...
@kashif I think I would go for the "epsilon" change for the time being. Mainly for consistency with mxnet-based models, which I would not update for backward-compatibility just yet (turning...