LLM4Decompile

Training budget estimation

Open QiuJYWX opened this issue 1 year ago • 3 comments

We trained our model using AnghaBench compilation results across four optimization levels (O0~O3), selecting samples under 1024 tokens. That gave us a total of 534,564 samples per level, and we trained for 2 epochs on a cluster of 8 Nvidia A100 GPUs.

As for the training times, they were 10 hours for the 1.3B model, 85 hours for the 6.7B model, and 440 hours for the 33B model.

Let me know if you need more info!

Originally posted by @rocky-lq in https://github.com/albertan017/LLM4Decompile/issues/3#issuecomment-2002900929

Hi @rocky-lq @albertan017,

We are estimating the training budget for reproducing LLM4Decompile. In your previous issue response, _given 534,564 samples per level and a cluster of 8 Nvidia A100 GPUs, training took 10 hours for the 1.3B model, 85 hours for the 6.7B model, and 440 hours for the 33B model_.

In the paper updated on June 19, fine-tuning the 1.3B and 6.7B LLM4Decompile-End models takes 12 and 61 days on 8×A100, respectively, given 7.2 million compilable samples and 1.6 million executable samples. This leaves us confused about how to estimate the training budget.

Could you please provide more information about the training budget, and is all of the training fully supervised fine-tuning?
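For concreteness, here is a quick back-of-envelope comparison of the two sets of figures. It assumes both refer to wall-clock time on the same 8×A100 setup (so GPU-hours = wall-clock hours × 8), which is not confirmed; the numbers are only those reported above.

```python
# Back-of-envelope comparison of the two reported training budgets.
# Assumption (not confirmed): both figures are wall-clock time on the same
# 8x A100 node, so GPU-hours = wall-clock hours * 8.

GPUS = 8

# Earlier issue reply (max length 1,024; 534,564 samples per level; 2 epochs)
v1_hours = {"1.3B": 10, "6.7B": 85, "33B": 440}

# June 19 paper update (max length 4,096; 7.2M compilable + 1.6M executable samples)
paper_days = {"1.3B": 12, "6.7B": 61}

for size, days in paper_days.items():
    paper_hours = days * 24
    ratio = paper_hours / v1_hours[size]
    print(f"{size}: {v1_hours[size] * GPUS} GPU-h (issue) vs "
          f"{paper_hours * GPUS} GPU-h (paper), ~{ratio:.0f}x")
# 1.3B: 80 GPU-h (issue) vs 2304 GPU-h (paper), ~29x
# 6.7B: 680 GPU-h (issue) vs 11712 GPU-h (paper), ~17x
```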

QiuJYWX commented on Jun 27 '24

In V1, the maximum sequence length is set to 1,024, whereas in Version 1.5 it is increased to 4,096. The computational expense rises quadratically with sequence length (theoretically, for the attention calculation; in practice, with accelerations, the increase may not be that large). V2 also uses a larger dataset (which has undergone significant deduplication). These factors collectively lead to a roughly 30x increase in training cost.
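As a rough illustration of that argument (a sketch only, not the authors' actual accounting): quadrupling the maximum sequence length quadruples the tokens per full-length sample and, for the attention term, also quadruples the cost per token, so the theoretical attention cost per sample grows by about 16x; a larger, deduplicated dataset multiplies on top of that.

```python
# Rough theoretical scaling of per-sample cost when the maximum sequence length
# grows from 1,024 (V1) to 4,096 (V1.5). Illustration only; real training uses
# fused/approximate attention kernels and the linear layers scale linearly, so
# the observed factor is smaller than the pure-attention bound.

v1_len, v15_len = 1024, 4096

token_ratio = v15_len / v1_len              # 4x more tokens per full-length sample
attention_ratio = (v15_len / v1_len) ** 2   # attention is ~O(L^2): (4096/1024)^2 = 16x

print(f"linear-layer cost per sample: ~{token_ratio:.0f}x")
print(f"attention cost per sample:    ~{attention_ratio:.0f}x")
# Combined with a larger (deduplicated) dataset, this is consistent with the
# ~30x overall increase in training cost mentioned above.
```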

albertan017 commented on Jun 27 '24

Are you training on a single node or multiple nodes out of interest?

cmberryau commented on Jul 31 '24

> Are you training on a single node or multiple nodes out of interest?

For the 1B model, we use a single node. Larger models are typically trained across multiple nodes (the 6B can still be trained on a single node, depending on the budget).
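For anyone reproducing this, here is a minimal sketch of how the single-node vs. multi-node distinction typically shows up in a PyTorch DDP training script. This is a hypothetical illustration, not the authors' actual code; it assumes the process-group environment variables are set by a launcher such as `torchrun --nnodes <N> --nproc_per_node 8 train.py`.

```python
# Minimal sketch of distributed setup with PyTorch DDP (hypothetical, not the
# authors' training code). torchrun sets RANK, WORLD_SIZE, LOCAL_RANK,
# MASTER_ADDR and MASTER_PORT in the environment.
import os
import torch
import torch.distributed as dist

def setup_distributed() -> int:
    # The default env:// init reads rank/world size/master address from the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank

# With 8 GPUs per node: one node gives world_size = 8 (enough for a ~1B model),
# while a 33B model would typically run with world_size = 8 * num_nodes.
```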

albertan017 commented on Jul 31 '24