Christiaan Meijer
From the Azure Machine Learning blog: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast?view=azureml-api-2; paper: https://arxiv.org/abs/1803.01271
Before training any model, we should shuffle the data. This is best practice, and failing to do so can severely impair training (e.g. if samples are ordered by class, early batches contain only one class).
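A minimal sketch of what this could look like with NumPy (the array shapes are hypothetical; the key point is applying the same permutation to data and labels so pairs stay aligned):

```python
import numpy as np

# Hypothetical example data: 100 samples, 30 time steps, 3 channels.
X = np.arange(100 * 30 * 3, dtype=float).reshape(100, 30, 3)
y = np.arange(100)  # label i belongs to sample i

# Shuffle X and y with the same permutation so sample/label pairs stay aligned.
rng = np.random.default_rng(seed=0)
perm = rng.permutation(len(X))
X_shuffled, y_shuffled = X[perm], y[perm]
```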
According to https://mcfly.readthedocs.io/en/latest/reference.html?highlight=early#mcfly.train_models_on_samples: "Unless 'None', early stopping is used for the model training. Set to an integer to define how many epochs without improvement to wait before stopping. Default is..."
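The parameter described above amounts to standard patience-based early stopping. A minimal sketch of that logic in plain Python (an illustration of the concept, not mcfly's actual implementation; `losses` stands in for per-epoch validation loss):

```python
def early_stopping_epoch(losses, patience):
    """Return the epoch at which patience-based early stopping would halt.

    `patience` is the number of epochs without improvement to wait
    before stopping, as in the mcfly docstring quoted above.
    """
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch  # stop training here
    return len(losses) - 1  # ran all epochs without triggering

# Loss improves, then plateaus: with patience=2 training stops
# two epochs after the last improvement.
stop_epoch = early_stopping_epoch([0.9, 0.7, 0.7, 0.7, 0.6], patience=2)
```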
See the bottom of p. 3 of Ismail et al., "Benchmarking Deep Learning Interpretability in Time Series Predictions", for a list of possible architectures: https://proceedings.neurips.cc/paper/2020/file/47a3893cc405396a5c30d91320572d6d-Paper.pdf
We should drop Python 3.6 and support versions up to 3.10 (and maybe already 3.11?).
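If we go that way, the supported range could be pinned roughly like this in `setup.cfg` (a sketch; the repo's actual packaging file and section layout may differ):

```ini
[options]
python_requires = >=3.7, <3.11
```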
Using this combination in find_best_architecture throws a cryptic error that a user is probably not going to understand.
Maybe check mexca or dianna-ai/dianna for possible examples.