
How to reproduce training and evaluation as done in the paper?

Open lostella opened this issue 1 year ago • 5 comments

Please check the updated README. We have also released an evaluation script and backtest configs to compute the WQL and MASE numbers as reported in the paper.

The scripts for training and evaluating Chronos models are included in the scripts folder; see also the README therein. The data used is available on the HuggingFace Hub.
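For reference, a zero-shot evaluation run looks roughly like the following (paths and flags illustrative; see scripts/README.md for the exact invocation):

python evaluation/evaluate.py evaluation/configs/zero-shot.yaml results.csv \
    --chronos-model-id amazon/chronos-t5-small \
    --batch-size 32 \
    --device cuda:0 \
    --num-samples 20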

lostella avatar Jul 15 '24 09:07 lostella

Hello!

First of all, thank you for your great work!

I want to replicate your fine-tuning results, where you fine-tuned the T5-small model independently on each zero-shot dataset. However, I don't see an option in the training configuration to fine-tune the model on one of the zero-shot datasets by simply referencing the dataset, as the evaluation configuration does.

So I assume I need to preprocess each dataset and convert it into Arrow files to make it suitable for the training pipeline, correct? If so, did you use the same hyperparameters (prediction horizon, etc.) for these datasets during fine-tuning as in the evaluation configuration?

Thanks in advance for your response!

ChernovAndrey avatar Aug 27 '24 21:08 ChernovAndrey

Hi @ChernovAndrey! That's correct: you will need to preprocess the dataset yourself. For per-dataset fine-tuning, we used the same prediction length as in the evaluation config. Note that for training you would need the training_dataset returned by gluonts.dataset.split.split:

https://github.com/amazon-science/chronos-forecasting/blob/eb7bdfc047de3e7af972b4ee7cf23a7968b7daa3/scripts/evaluation/evaluate.py#L225

Here the training_dataset is ignored via _, but you need that part for fine-tuning.
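A minimal sketch of the change, assuming gts_dataset and offset are set up as in evaluate.py:

from gluonts.dataset.split import split

# keep the first return value instead of discarding it
training_dataset, test_template = split(gts_dataset, offset=offset)
# training_dataset yields the portion of each series before the offset;
# this is the part to convert to Arrow for fine-tuning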

abdulfatir avatar Sep 09 '24 09:09 abdulfatir

This code seems to work for building the training data locally (shown with the m4_daily config; the same pattern applies to the TSMixup corpus):

import datasets
from pathlib import Path
from typing import List, Optional, Union

import numpy as np
from gluonts.dataset.arrow import ArrowWriter


def convert_to_arrow(
    path: Union[str, Path],
    time_series: Union[List[np.ndarray], np.ndarray],
    start_times: Optional[Union[List[np.datetime64], np.ndarray]] = None,
    compression: str = "lz4",
):
    if start_times is None:
        # Set an arbitrary start time
        start_times = [np.datetime64("2000-01-01 00:00", "s")] * len(time_series)

    assert len(time_series) == len(start_times)

    dataset = [
        {"start": start, "target": ts} for ts, start in zip(time_series, start_times)
    ]
    ArrowWriter(compression=compression).write_to_file(
        dataset,
        path=path,
    )


# Load the HF dataset in its native format (m4_daily shown here; for the
# actual TSMixup corpus, substitute the corresponding config name on the Hub,
# e.g. "training_corpus_tsmixup_10m")
ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train")
ds.set_format("numpy")
# Extract the target values; to keep real start times instead of the
# placeholder, take the first timestamp of each series:
# start_times = [ds[i]['timestamp'][0] for i in range(len(ds))]
time_series_values = [ds[i]['target'] for i in range(len(ds))]
assert len(time_series_values) == len(ds)

convert_to_arrow("./tsmixup-data.arrow", time_series=time_series_values, start_times=None)

bfarzin avatar Sep 18 '24 15:09 bfarzin

Regarding the line referenced above (scripts/evaluation/evaluate.py, line 225 at eb7bdfc):

_, test_template = split(gts_dataset, offset=offset)

The training_dataset returned here is not subscriptable.
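That's expected: split returns a gluonts TrainingDataset, which is iterable but does not support indexing. A quick sketch, reusing gts_dataset and offset from above:

from gluonts.dataset.split import split

training_dataset, test_template = split(gts_dataset, offset=offset)
# TrainingDataset is an iterable, not a sequence, so training_dataset[0] fails;
# materialize it first if you need random access
entries = list(training_dataset)
print(entries[0]["start"], len(entries[0]["target"]))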

wwfcnu avatar Mar 13 '25 04:03 wwfcnu

Hi @abdulfatir, is there a script to reproduce the pre-training of chronos-bolt? We can see the model class, but there doesn't seem to be a pre-training script.

ievred avatar Aug 14 '25 14:08 ievred