dingdongwang
Got it, thank you so much!
Thank you for your reply! May I ask what the fine-tuning (FT) training loss was, and what training parameter settings you used (mainly the number of epochs), for LTU-AS on the toy...
Thanks so much for your reply! I have another question about [finetune.py](https://github.com/YuanGongND/ltu/blob/4589490e23f4fc5cb970b22a98a123688bbaa419/src/ltu/finetune.py), code line 91: ``` # trick to load checkpoints correctly from HF if '../../../pretrained_mdls/vicuna_ltuas/' not in base_model: #...
Thank you so much for your reply! Really appreciate it!
Thanks for your reply! The data I used is the provided toy data, and the VRAM of the 3090 is 24 GB.
Bug fixed, thank you!
Got it! Thank you!