fairseq
score calculation in translation_moe
❓ Questions and Help
Before asking:
- search the issues.
- search the docs.
What is your question?
for EXPERT in $(seq 0 2); do \
cat wmt14-en-de.extra_refs.tok \
| grep ^S | cut -f 2 \
| fairseq-interactive $DATA_DIR/ --path $MODELDIR \
--beam $BEAM_SIZE \
--bpe subword_nmt --bpe-codes $BPE_CODE \
--source-lang $SRC \
--target-lang $TGT \
--task translation_moe --user-dir examples/translation_moe/translation_moe_src \
--method hMoElp --mean-pool-gating-network \
--num-experts 3 \
--batch-size $DECODER_BS \
--buffer-size $DECODER_BS --max-tokens 6000 \
--remove-bpe \
--gen-expert $EXPERT ; \
done > wmt14-en-de.extra_refs.tok.gen.3experts
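(For context on the pattern above: the single redirect after `done` concatenates the output of all three expert runs into one file, which is what score.py then consumes. A minimal sketch of just that shell pattern, with echo standing in for fairseq-interactive:)

```shell
# minimal demo of the loop-redirect pattern: each iteration writes to
# stdout, and the one redirect after `done` collects every run into
# a single concatenated file
for EXPERT in $(seq 0 2); do
  echo "output from expert $EXPERT"
done > concatenated.out

# one line per expert run
wc -l < concatenated.out
```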
python examples/translation_moe/score.py \
--sys wmt14-en-de.extra_refs.tok.gen.3experts \
--ref wmt14-en-de.extra_refs.tok
This is the command I used. As you can see, I added --remove-bpe, but the output log still shows:
That's 100 lines that end in a tokenized period ('.')
It looks like you forgot to detokenize your test data, which may hurt your score.
If you insist your data is detokenized, or don't care, you can suppress this message with the force parameter.
pairwise BLEU: 48.35
#refs covered: 2.63
[... the same three-line warning repeats for each remaining hypothesis/reference pair ...]
average multi-reference BLEU (leave-one-out): 37.67
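(Note: the warning above is sacrebleu's tokenized-input check. It fires when lines still end in a space-separated period, i.e. the text is Moses-tokenized; --remove-bpe only undoes BPE, not tokenization. The proper fix is to run a real detokenizer such as Moses detokenizer.perl over the references and outputs. The sketch below, with the hypothetical helper rough_detok, only illustrates what detokenization changes; it is not a complete detokenizer:)

```python
import re

def rough_detok(line: str) -> str:
    # re-attach punctuation that Moses-style tokenization split off with
    # a preceding space; illustration only, not a full detokenizer
    return re.sub(r" ([.,!?;:])", r"\1", line)

print(rough_detok("this is a tokenized sentence ."))
# -> this is a tokenized sentence.
```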
I don't know what I should do; please help.
Code
What have you tried?
What's your environment?
- fairseq Version (e.g., 1.0 or main): main
- PyTorch Version (e.g., 1.0): 2.0.0
- OS (e.g., Linux): Linux
- How you installed fairseq (pip, source): source
- Build command you used (if compiling from source): pip install --editable ./
- Python version: 3.10.15
- CUDA/cuDNN version: 12.0
- GPU models and configuration: A800
- Any other relevant information:
I saw Inconsistent input for score calculation in translation_moe [#2277](url) and added --remove-bpe, but it doesn't help.
Maybe I need to change score.py instead?