neuralforecast
[Models] Evaluate `torch.compile` on `nf` models
Description
The torch.compile function was introduced in PyTorch 2.0 to optimize and accelerate the execution of PyTorch models. The NeuralForecast library, which leverages PyTorch for time series forecasting, could benefit from this feature. However, torch.compile must be thoroughly tested to ensure its compatibility, effectiveness, and actual performance gains.
We kindly ask users and contributors to test torch.compile with NeuralForecast models and share feedback on their experience. This feedback will help us assess the viability and potential benefits of adopting it in the library.
Testing Guidelines:
- Fork or clone the repository.
- Create a new branch to work on your tests: git checkout -b torch-compile-testing
- Implement the necessary changes to integrate torch.compile into nf models (see https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html).
- Choose time series datasets from the datasetsforecast package (tutorials on how to download them are available in our documentation) or use other publicly available datasets.
- Train nf models using torch.compile and evaluate their performance against the original, uncompiled models.
- If you encounter any issues, bugs, or unexpected behavior related to torch.compile, please create a new issue, providing detailed steps to reproduce the problem.
Feedback and Reporting: We encourage you to share your findings, observations, and any potential issues or improvements discovered during testing in this issue or in our Slack channels.
Acknowledgment:
We highly appreciate your time and effort in testing torch.compile with nf models. Your feedback will contribute to improving the library and the PyTorch ecosystem as a whole. Thank you for your valuable support!
Please feel free to reach out if you have any questions or need further assistance.
@quest-bot stash 400
New Quest!
A new Quest has been launched in @Nixtla’s repo. Merge a PR that solves this issue to loot the Quest and earn your reward.
Loot of 400 USD has been stashed in this issue to reward the solver!
🗡 Comment @quest-bot embark to check in for this Quest and start solving the issue. Other solvers will be notified!
⚔️ When you submit a PR, comment @quest-bot loot #626 to link your PR to this Quest.
Questions? Check out the docs.
@quest-bot embark
Hi @cchallu,
I am excited to get started with my first issue on neuralforecast. Some quick questions before taking a deep dive:
- I planned to run the evaluation on my Mac M1 2020. However, while running the first few experiments from the default torch.compile() tutorial notebook, I found that torch.compile() was not even able to beat eager performance. Should I continue my efforts on the Mac, or try to find a GPU machine for this evaluation?
@patel-zeel has embarked on their Quest 🗡
- @patel-zeel has been on GitHub since 2020.
- They have merged 77 public PRs in that time.
- Their swords are blessed with Python and Jupyter Notebook magic ✨
- They haven't contributed to this repo before.
Hi @patel-zeel. Sorry for the delay on the answer. Yes, we would like to understand if it also improves on GPU. Can you try using Colab? We can add the cost of the GPU to the reward.
Hi @cchallu, thank you for the confirmation. I am planning to evaluate this on an Nvidia Quadro RTX 5000 (16 GB). To bring an essential detail into the conversation, the torch.compile tutorial includes the following note:
NOTE: a modern NVIDIA GPU (H100, A100, or V100) is recommended for this tutorial in order to reproduce the speedup numbers shown below and documented elsewhere.
Is neuralforecast's goal to leverage torch.compile on most GPU cards, or only on high-end GPUs like the H100/A100/V100? What would be the next step if the speedup is not achievable on common GPUs (other than the H100/A100/V100)?
Swords down 🗡
❌ The Quest on issue #626 has been aborted by @Nixtla
Head over to https://quine.sh/quests/solver to find other active Quests.
cc @patel-zeel
This Quest has been closed ⚔️.