
Would you regularly release newly trained model checkpoints?

yhyu13 opened this issue 1 year ago • 2 comments

Hi,

I have just checked out the HF repo for this project. It seems most models have not been updated for a week.


I know fine-tuning takes time. I wonder if you could release a schedule for regularly updating fine-tuned models based on the latest datasets included in this project? Or does this mean that releasing your own fine-tuned checkpoints is no longer within the scope of this project?

Thanks!

yhyu13 avatar Apr 07 '23 02:04 yhyu13

We will survey followers' interests later and select the 5 to 10 'data + LLM' combinations that attract the most attention, then complete the fine-tuning and open-source those checkpoints. In the long run, we will open-source all possible checkpoints. Please keep following the project.

PhoebusSi avatar Apr 07 '23 06:04 PhoebusSi

I strongly recommend releasing them. HF currently has a severe shortage of models continuously trained across different time periods, and such models and data are exactly what many LLM optimization projects need. In the "zero-lora zero-training LLM tuning algorithm" (https://github.com/ziwang-com/zero-lora), one key focus is a LoRA weight optimization system built on multiple dimensions: time (checkpoints from different training cycles), space (comparing weights of different tokens), and depth (mapping token weights across different models).
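The "time" dimension above amounts to comparing a LoRA adapter's effective weight update across checkpoints from different training cycles. As a minimal sketch (not from zero-lora itself; the layer names, rank, and checkpoint dicts are hypothetical), one could measure per-layer cosine similarity between the updates `B @ A` of two checkpoints:

```python
# Hedged sketch: comparing LoRA adapter weights across two training
# checkpoints (the "time" dimension). All names here are hypothetical.
import numpy as np

def lora_delta(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Effective weight update contributed by a LoRA pair (W += B @ A)."""
    return B @ A

def checkpoint_similarity(ckpt_t0: dict, ckpt_t1: dict) -> dict:
    """Per-layer cosine similarity between two checkpoints' LoRA updates."""
    sims = {}
    for name in ckpt_t0:
        d0 = lora_delta(*ckpt_t0[name]).ravel()
        d1 = lora_delta(*ckpt_t1[name]).ravel()
        sims[name] = float(d0 @ d1 / (np.linalg.norm(d0) * np.linalg.norm(d1)))
    return sims

# Toy example: rank-4 adapters for one hypothetical attention projection.
rng = np.random.default_rng(0)
A0, B0 = rng.normal(size=(4, 64)), rng.normal(size=(64, 4))
ckpt_epoch1 = {"layers.0.attn.q_proj": (A0, B0)}
ckpt_epoch2 = {"layers.0.attn.q_proj": (A0 * 1.1, B0)}  # uniformly scaled update

sims = checkpoint_similarity(ckpt_epoch1, ckpt_epoch2)
print(sims)  # similarity 1.0: the two updates point in the same direction
```

Low similarity on a layer between adjacent checkpoints would flag that layer as still moving during training, which is the kind of signal a checkpoint-series release would make possible.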

ziwang-com avatar May 20 '23 23:05 ziwang-com