Using comet-mbr for Multi-Model Translation Ranking: Questions About Input Format and GPU Disabling
I'm trying to use the command `comet-mbr -s [SOURCE].txt -t [MT_SAMPLES].txt --num_sample [X] -o [OUTPUT_FILE].txt` to rank translations coming from multiple models. I have a couple of questions:
- What is the expected format of the `[MT_SAMPLES].txt` file? How should the different translations be arranged within it?
- I tried disabling the GPU with the `--gpus=0` option, but it seems to have no effect. Is there another way to force CPU execution?
- OS: iOS
- Packaging: pip
Hey! There is a bit more information about the data format here.
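As a quick sketch (file names are placeholders, and this assumes the samples file lists the `--num_sample` candidate translations for each source line on consecutive lines, in the same order as the source file):

```bash
# Hypothetical example with 3 systems, each producing one hypothesis per
# source line, aligned with source.txt. `paste -d '\n'` interleaves the
# files: line 1 of model_a, line 1 of model_b, line 1 of model_c,
# then line 2 of model_a, and so on.
paste -d '\n' model_a.txt model_b.txt model_c.txt > mt_samples.txt
comet-mbr -s source.txt -t mt_samples.txt --num_sample 3 -o best.txt
```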
You are right, the MBR command is hardcoded to use a single GPU.
I've assigned a bug label to this issue until it's solved.
Thanks, @ricardorei! I followed the same approach a few days back. I locally commented out this line because the flag wasn't being considered in the code. Everything worked perfectly after that.
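In case it helps anyone else, this is roughly how I apply that kind of local edit so the `comet-mbr` entry point picks it up (assuming you work from a clone of the Unbabel/COMET repository rather than the PyPI wheel):

```bash
# Sketch: install COMET from a local clone so a source edit
# (e.g. removing the hard-coded GPU setting) takes effect for comet-mbr.
git clone https://github.com/Unbabel/COMET.git
cd COMET
# ... edit the relevant line in the MBR command here ...
pip install .   # or `pip install -e .` if your build backend supports editable installs
```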
Another suggestion, if possible, is to add the data format to the README.