wenHK
> At line 117 of execute.py, the optimizer only optimizes the encoder.
>
> Has this been resolved?

Looking for a solution.
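If the report is accurate and only the encoder's parameters are handed to the optimizer, the usual remedy is to pass the full set of trainable parameters instead. A minimal PyTorch sketch of the difference, assuming a model with separate `encoder` and `decoder` submodules; the class and layer names here are illustrative and are not taken from execute.py:

```python
from torch import nn, optim

class Seq2SeqModel(nn.Module):
    """Illustrative encoder-decoder container; not the actual mRASP model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(512, 512)  # stand-in for the real encoder
        self.decoder = nn.Linear(512, 512)  # stand-in for the real decoder

model = Seq2SeqModel()

# Problematic pattern: only encoder weights would ever be updated.
optimizer_enc_only = optim.Adam(model.encoder.parameters(), lr=5e-4)

# Alternative: pass every trainable parameter (encoder + decoder).
optimizer_full = optim.Adam(model.parameters(), lr=5e-4)
```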
> Hi, I downloaded the dictionary and the 600M NLLB-200-Distilled checkpoint. I failed to load the model weights from the checkpoint due to an inconsistent vocabulary size.
>
> The dictionary has 255997...
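One way to diagnose a mismatch like this is to compare the embedding row count stored in the checkpoint with the dictionary size fairseq will build (dictionary lines plus the special symbols it prepends). A small sketch, assuming a standard fairseq Transformer checkpoint whose token embedding sits under `encoder.embed_tokens.weight`; the file paths below are placeholders:

```python
import torch

# Placeholder paths; substitute the downloaded checkpoint and dictionary.
ckpt_path = "checkpoint.pt"
dict_path = "dictionary.txt"

state = torch.load(ckpt_path, map_location="cpu")
embed = state["model"]["encoder.embed_tokens.weight"]
print("checkpoint vocab size:", embed.shape[0])

with open(dict_path, encoding="utf-8") as f:
    n_lines = sum(1 for _ in f)

# fairseq's Dictionary prepends 4 special symbols (<s>, <pad>, </s>, <unk>)
# before reading entries from the dictionary file.
print("dictionary size seen by fairseq:", n_lines + 4)
```

If the two numbers differ, the dictionary and checkpoint do not match and loading will fail until they agree.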
> Hello, I followed the Zhihu guide (https://zhuanlan.zhihu.com/p/353070757) for fine-tuning, using vocab.bpe.32000 in place of the 600 one and using mRASP-PC32-6enc6dec.pt as checkpoint_best.pt. When I run
>
> export CUDA_VISIBLE_DEVICES=1 export EVAL_GPU_INDEX=${eval_gpu_index} bash ${PROJECT_ROOT}/train/fine-tune.sh ${PROJECT_ROOT}/experiments/example/configs/train/fine-tune/en2de_transformer_big.yml ${PROJECT_ROOT}/experiments/example/configs/eval/en2de_eval.yml
>
> the following error is printed to the screen:
>
> Usage: sacremoses tokenize [OPTIONS]
> Try 'sacremoses tokenize -h' for help.
> Error: No such option: -l
> sacreBLEU: System and reference streams have...
I ran into the same problem.
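The `Error: No such option: -l` message usually points to a sacremoses CLI version mismatch: in newer sacremoses releases the language option is accepted by the top-level `sacremoses` command rather than by the `tokenize` subcommand, so a script that invokes `sacremoses tokenize -l ...` fails. Pinning an older sacremoses release or adjusting the script's invocation are common workarounds. As a quick way to confirm that Moses tokenization itself works in the installed environment, here is a small sketch using the sacremoses Python API; the language code and sample sentence are arbitrary placeholders:

```python
from sacremoses import MosesTokenizer

# Sanity check that Moses tokenization works outside the eval script.
tok = MosesTokenizer(lang="en")
print(tok.tokenize("Hello, world! This is a tokenizer check.", return_str=True))
```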