Different preprint versions report different computing resources (Summit supercomputer vs. single GPU), and a question about the PatchTST dataloader
Hello, dear developers,
A question about the computing resources used: in the November 20, 2023 preprint you write:
We acknowledge the support from the Canada CIFAR AI Chair Program and from the Canada Excellence Research Chairs (CERC) Program. This research was made possible thanks to the computing resources on the Summit supercomputer, provided as a part of the INCITE program award “Scalable Foundation Models for Transferable Generalist AI”. These resources were provided by the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
whereas in the latest version of the preprint you state:
We elaborate on our training procedure in Appendix B. For all the models trained in this paper, we use a single Nvidia Tesla-P100 GPU with 12 GB of memory, 4 CPU cores, and 24 GB of RAM
Could you explain this difference?
I also wanted to ask whether your model's predictions would be affected by using the data_loader from the PatchTST repository (https://github.com/yuqinie98/PatchTST/tree/main/PatchTST_supervised/data_provider). A sketch of what I mean is below.
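To make the question concrete, here is a minimal, self-contained sketch (my own illustration, not code from either repository) of the kind of fixed-length sliding-window batches that the PatchTST data_provider yields. The real Dataset_Custom additionally handles multivariate columns, standard scaling, and time-feature encodings; all names and sizes below are only placeholders.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class SlidingWindowDataset(Dataset):
    """Yields (input window, target window) pairs from one series,
    similar in spirit to PatchTST's Dataset_Custom (simplified)."""

    def __init__(self, series: np.ndarray, seq_len: int, pred_len: int):
        self.series = series.astype(np.float32)
        self.seq_len = seq_len
        self.pred_len = pred_len

    def __len__(self):
        # Number of valid windows that fit in the series.
        return len(self.series) - self.seq_len - self.pred_len + 1

    def __getitem__(self, idx):
        # Context window followed immediately by the prediction window.
        x = self.series[idx: idx + self.seq_len]
        y = self.series[idx + self.seq_len: idx + self.seq_len + self.pred_len]
        return torch.from_numpy(x), torch.from_numpy(y)


# Toy series standing in for a real dataset.
series = np.sin(np.linspace(0, 50, 1000))
loader = DataLoader(
    SlidingWindowDataset(series, seq_len=336, pred_len=96),
    batch_size=32,
    shuffle=True,
)
batch_x, batch_y = next(iter(loader))
print(batch_x.shape, batch_y.shape)  # torch.Size([32, 336]) torch.Size([32, 96])
```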