
Why are the RoPE params ignored when converting an HF checkpoint to a Composer checkpoint?

Open ZhiYuanZeng opened this issue 11 months ago • 3 comments

I found that the RoPE params are ignored in composer_to_hf.py and that the RoPE base in composer_llama.py is fixed at 10000. However, it is common to tune the RoPE base for better long-context performance. Shouldn't we therefore set the RoPE params (inv_freq) in composer_to_hf.py?

ZhiYuanZeng avatar Mar 22 '24 03:03 ZhiYuanZeng
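For context, the RoPE `inv_freq` buffer the question refers to is fully determined by the head dimension and the base, so it can always be recomputed instead of copied from a checkpoint. A minimal sketch of the standard computation (function name is illustrative, not from the repo):

```python
import torch

def rope_inv_freq(dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies: base^(-2i/dim) for i in [0, dim/2)."""
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

# A larger base stretches the rotation wavelengths, which is the usual
# knob for long-context extrapolation in Llama variants.
```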

The RoPE is in fact not trained; its values are fixed, registered buffer tensors. It is fine to apply the default RoPE settings without any modifications.

zhangzhenyu13 avatar Mar 25 '24 01:03 zhangzhenyu13
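The "registered buffer" point above can be illustrated with a minimal PyTorch module (class name is hypothetical): a buffer is carried by the module but excluded from its trainable parameters, and with `persistent=False` it is also excluded from the state dict, which is why a converter can safely skip it.

```python
import torch
import torch.nn as nn

class RotaryEmbedding(nn.Module):
    """Minimal sketch: inv_freq is a non-persistent buffer, not a parameter."""
    def __init__(self, dim: int, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

rope = RotaryEmbedding(64)
# No trainable parameters, and inv_freq never enters the checkpoint.
```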

Yes, RoPE is parameter-free, but its base is often tuned to support long-context extrapolation. The base in ComposerMosaicLlama is fixed at 10000. This configuration works well for the standard Llama model, but it may not be correct for Llama variants.

ZhiYuanZeng avatar Mar 25 '24 09:03 ZhiYuanZeng

That said, it would be better to set the RoPE base from the config file rather than loading it from the checkpoint.

> I found that the rope params are ignored in composer_to_hf.py and that the base of rope in composer_llama.py is set to be 10000 constantly. However, it is normal to tune the base of rope for the better long-context performance. Therefore, we need to set the rope params (inv_freq) in composer_to_hf.py?

ZhiYuanZeng avatar Mar 25 '24 10:03 ZhiYuanZeng