LongLM
What effect will the self-extend trick have on Qwen1.5?
Thanks for your contribution on adapting SelfExtend to Qwen. Qwen1.5 already has a 32k context length. I'm wondering whether I can use SelfExtend to push it to about 100k. Have you ever tested the effect of SelfExtend on Qwen1.5?
We believe how well SelfExtend works depends heavily on how good the extended model is within its original pretraining context window. This means that if Qwen1.5's 32k context window is not well trained, SelfExtend may not work; otherwise, it should work well. [Currently, we have no plan for a serious test, considering the massive computational resource requirement: 32k at 8x -> 256k, at 4x -> 128k. We may do serious benchmarking for Qwen1.5 when we have enough resources.]
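For reference, the 4x -> 128k and 8x -> 256k figures above follow from the extension bound given in the SelfExtend paper: with grouped attention, the reachable context is roughly `(pretrain_len - neighbor_window) * group_size + neighbor_window`. A minimal sketch of that arithmetic (the neighbor window of 1024 is an assumed illustrative value, not one stated in this thread):

```python
# Illustrative sketch, not the repo's API: compute the approximate maximum
# context length SelfExtend can reach from a given pretraining length.
# neighbor_window=1024 is an assumption chosen for illustration.

def extended_context(pretrain_len: int, group_size: int,
                     neighbor_window: int = 1024) -> int:
    """Approximate upper bound on context reachable with grouped attention."""
    return (pretrain_len - neighbor_window) * group_size + neighbor_window

for g in (4, 8):
    print(f"32k x {g} -> {extended_context(32768, g):,} tokens")
```

With these numbers, 4x lands at about 128k tokens and 8x at roughly 256k, matching the factors quoted above; the exact bound shifts with the neighbor window you pick.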
Ahhh, if I test it in future work, I'll share the results with you guys. Thanks for your reply~
Results on a 128k-character input string (around 70k tokens with the Qwen tokenizer). It seems to work!
What is the scale base set to?