nllb 3.3B translating from Chinese to Korean gives only commas: , , , , , , , , , , , , , , ,
source_text: 哈哈哈哈哈 说的什么玩意儿呀 这个声音太尖了 耳朵有点受不了哎
(roughly: "Hahahaha, what on earth is it even saying? That voice is far too shrill, my ears can hardly take it.")
translated_text: a long sequence consisting only of commas and spaces, e.g. ", , , , , , , , , , , , , , , , , , , ,"
The following is my code:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_name = "facebook/nllb-200-3.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True).cuda()

translator = pipeline(
    'translation',
    model=model,
    tokenizer=tokenizer,
    src_lang='zho_Hans',   # source: Simplified Chinese
    tgt_lang='kor_Hang',   # target: Korean (Hangul script)
    max_length=4096,
    device='cuda'
)
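
For reference, a minimal sketch of how the pipeline above would be invoked to reproduce the output shown earlier. The call itself is an assumption, since the original report does not include the invocation; variable names here are illustrative only.

# Assumed reproduction call (not part of the original report).
source_text = "哈哈哈哈哈 说的什么玩意儿呀 这个声音太尖了 耳朵有点受不了哎"
result = translator(source_text)
# The translation pipeline returns a list of dicts with a "translation_text" key.
print(result[0]["translation_text"])  # observed output: only commas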