Jasper
Maybe something we can do is document this better: for each metric that uses `mean` or `median`, we could highlight which one is used and why.
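One lightweight place for that note is the metric's docstring; a minimal sketch (the metric name and the rationale are made up for illustration):

```py
def mean_abs_error(data):
    """Mean absolute error.

    Aggregation: *mean* across entries, since absolute errors are
    additive and larger errors should carry proportional weight.
    """
    ...
```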
One problem I see is that it's very hard to gracefully deprecate the current `.predict` API. Instead, we could have `predict_one`, `predict_batch`, and `predict_all`, and then think about how we...
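To make the split concrete, a rough sketch of how the three could relate (the class shape and signatures are assumptions, not a settled design):

```py
from itertools import islice

class Predictor:
    def __init__(self, prediction_net):
        self.prediction_net = prediction_net

    def predict_batch(self, batch):
        # hypothetical: run the network on one batch of entries
        return self.prediction_net(batch)

    def predict_one(self, entry):
        # a single entry is just a batch of size one
        return self.predict_batch([entry])[0]

    def predict_all(self, dataset, batch_size=32):
        # stream over the whole dataset in fixed-size batches
        it = iter(dataset)
        while batch := list(islice(it, batch_size)):
            yield from self.predict_batch(batch)
```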
I guess we can have both, `predict_batch` and `predict_batches`. For example, we could use something like this:

```py
def predict_batch(self, batch):
    with mx.Context(self.ctx):
        return self.prediction_net.forecast(batch)
```
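`predict_batches` could then just be a loop over the single-batch path; a minimal sketch (the plural name is from above, the body is an assumption):

```py
def predict_batches(self, batches):
    # apply the single-batch path to each batch in turn
    for batch in batches:
        yield self.predict_batch(batch)
```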
> @jaheba in practice, we would provide a way to associate a semantics to each attribute in the data, and have a transformation set up automatically that maps the data...
The `ParquetFile` abstraction should be pretty generic and not impose any fields. However, `FileDataset` does make assumptions about how columns are named. Supporting this more generally is something we are looking into.
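For comparison, plain `pyarrow` imposes no schema either; a minimal sketch of generic column access (this uses `pyarrow.parquet` directly, not the GluonTS abstraction, and the file name is made up):

```py
import pyarrow.parquet as pq

pf = pq.ParquetFile("data.parquet")
table = pf.read()            # load all row groups into an Arrow table
print(table.column_names)    # whatever columns the file happens to have
```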
/cc @Schmedu @lostella
Code for `re_export`:

```python
import inspect


def re_export(obj):
    # find the module of the caller, i.e. the module doing the re-export
    context = inspect.stack()[1]
    module = inspect.getmodule(context[0])

    # the re-exporting module must be a prefix of the object's defining module
    obj_path = obj.__module__.split(".")
    module_path = module.__name__.split(".")
    assert obj_path[: len(module_path)] == module_path

    # make the object appear to be defined in the re-exporting module
    obj.__module__ = module.__name__
    return obj
```
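Usage would then look something like this (the module layout here is hypothetical):

```py
# hypothetical: in gluonts/mx/__init__.py
from gluonts.mx.trainer import Trainer

# after this, Trainer.__module__ == "gluonts.mx", so docs and reprs
# point at the short path instead of the defining module
Trainer = re_export(Trainer)
```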
> I agree fully on using short imports in the documentation, tests and throughout the package. What would be the benefit of the serde story?

Avoid breaking objects when a...
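Roughly, the serde story is that a path-based serializer records an object's import path, so a stable `__module__` keeps old payloads loadable. A minimal sketch of the idea (not the actual GluonTS encoder):

```py
def encode_class(obj):
    # a path-based serde scheme stores the dotted path used to
    # re-import the class; if __module__ changes, old payloads break
    return f"{type(obj).__module__}.{type(obj).__qualname__}"
```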
After working more intensely with Sphinx, we should make this a priority. One thing Sphinx does is cross-link references. For example, we create a documentation entry for `gluonts.mx.Trainer`....
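Cross-linking then works from any docstring via Sphinx's Python-domain roles; a small sketch (the function and wording are made up):

```py
def make_trainer():
    """Create a default trainer.

    Returns a :class:`gluonts.mx.Trainer`; Sphinx resolves the role
    into a link to that documentation entry.
    """
```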
Looks like this broke some stuff :D