Olivier Sprangers

Results 191 comments of Olivier Sprangers

Options (outside of the possible M4 GPU issue):
1. Try `n_block=2` in TSMixer; your TSMixer model is huge (17.7M parameters).
2. Set `windows_batch_size=32` and `inference_windows_batch_size=32`.
3. Remove the `static_df`.
4. ...
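The first two options above can be collected into a single low-memory configuration. A minimal sketch, assuming neuralforecast's `TSMixer` keyword arguments (the model instantiation itself is left commented out, since it depends on your data and horizon):

```python
# Hypothetical low-memory settings; the keyword names assume
# neuralforecast's TSMixer API and may need adjusting to your version.
low_memory_cfg = dict(
    n_block=2,                        # fewer mixing blocks -> far fewer parameters
    windows_batch_size=32,            # smaller training batch of windows
    inference_windows_batch_size=32,  # smaller inference batch of windows
)

# Sketch of usage (not run here):
# from neuralforecast.models import TSMixer
# model = TSMixer(h=horizon, input_size=input_size, n_series=n_series,
#                 **low_memory_cfg)
```

Dropping `static_df` (option 3) is done at the `fit` call, not in the model config.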

> With `n_blocks=2` it does change the number of parameters, but the buffer-size error still remains. I could try another model, but I believe the buffer size problem...

Thanks for the clear overview of the issue! The suggested workaround makes sense, although I'm wondering whether it should be: `min_samples = self.h + step_size * self.prediction_intervals.n_windows`, assuming we properly...
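As a sanity check, the proposed formula is just the forecast horizon plus the steps consumed by sliding the calibration windows. A small sketch of that arithmetic (illustrative only, not the library's code):

```python
def min_samples(h: int, step_size: int, n_windows: int) -> int:
    """Minimum series length needed: one forecast horizon plus the
    observations consumed by sliding n_windows calibration windows,
    each offset by step_size. Sketch of the formula discussed above."""
    return h + step_size * n_windows

# e.g. a horizon of 12 with 2 calibration windows at step_size=1
print(min_samples(12, 1, 2))  # -> 14
```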

Edit: correction. The horizon of the lowest level can't be lower than the maximum aggregation; I should add a guard for that. The solution is that you should just set...
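The guard mentioned above could look something like this. A minimal sketch with hypothetical names (`check_horizon`, `agg_sizes`); the actual protection would live inside the library:

```python
def check_horizon(h_bottom: int, agg_sizes: list[int]) -> None:
    """Sketch of the guard discussed above: the bottom-level horizon must
    cover at least one full step of the coarsest temporal aggregation,
    otherwise the highest aggregation level has no forecast at all."""
    max_agg = max(agg_sizes)
    if h_bottom < max_agg:
        raise ValueError(
            f"bottom-level horizon {h_bottom} is smaller than the "
            f"largest aggregation window {max_agg}"
        )

# e.g. hourly data aggregated to 6h/12h/24h: a 24-step horizon is fine
check_horizon(24, [1, 6, 12, 24])
```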

Hi, that's unfortunate. Did you try the [irregular timestamps tutorial](/docs/capabilities-forecast-irregular_timestamps#3-1-load-data)?
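For context, the usual first step with irregular series is to reindex them onto a regular frequency grid, leaving gaps as NaN. A minimal pandas sketch (the `ds`/`y` column names assume the Nixtla long format):

```python
import pandas as pd

# An irregular daily series: Jan 3 and Jan 4 are missing.
df = pd.DataFrame({
    "ds": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-05"]),
    "y": [1.0, 2.0, 3.0],
})

# Reindex onto a regular daily grid; missing dates become NaN rows.
regular = df.set_index("ds").asfreq("D").reset_index()
# regular has 5 rows (Jan 1..5), with NaN y-values for Jan 3 and 4
```

From there the NaNs can be filled or handled by a model that supports missing values.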

Which backend is this? Pandas often has inefficient allocations, but Polars may have categorical issues with version >= 1.32. Can you post:
1. The code that reproduces the issue
2. The...

Ok, thanks. I think it's a Pandas issue... unfortunately not something we can fix, but I'll see if I can reproduce it and report it to the relevant package.

Closing: this is a Pandas issue that we cannot fix. The workaround is to use Polars.

@bstewart311 What version of NF are you using? Can you try with the latest version?

@bstewart311 Thanks! I haven't had any issues with multi-GPU training on AWS with NF 3.0.0, but I haven't tried the Auto models yet. Can you try with Optuna instead of Ray?...