fairseq
The fine-tuned model's inference output is missing its second half, which is replaced by the symbol "�"
I fine-tuned the bart.base and bart.large models; the decoded output of the fine-tuned models is shown below.
What have you tried? I tried training with `--max-tokens` set to 4096, 2048, and 928; the inference output of the fine-tuned model was the same as above in every case. I also varied the inference parameters, e.g. reducing the beam size from 10 to 5 and setting `--max-tokens` to 4096/2048/928, but this had no effect.
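For context, the "�" glyph is U+FFFD, the Unicode replacement character. It typically appears when a UTF-8 byte sequence is cut off in the middle of a multi-byte character and the decoder replaces the incomplete bytes. A minimal sketch (plain Python, not fairseq-specific) showing how truncated output produces this symbol:

```python
# "�" (U+FFFD) appears when a UTF-8 stream is truncated mid-character
# and decoded with replacement of invalid bytes.
text = "你好"                # two characters, each 3 bytes in UTF-8
data = text.encode("utf-8")  # 6 bytes total
truncated = data[:4]         # cut inside the second character
decoded = truncated.decode("utf-8", errors="replace")
print(decoded)               # -> 你�
```

If the fine-tuned model's raw output is being truncated (e.g. by a length limit) before decoding, this would explain both the missing second half and the trailing "�".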
What's your environment?
- fairseq Version (e.g., 1.0 or main): main
- PyTorch Version (e.g., 1.8): 1.12.1
- OS (e.g., Linux): Linux
- How you installed fairseq (pip, source): pip
- Build command you used (if compiling from source): pip
- Python version: 3.7
- CUDA/cuDNN version: 11.3