
Unexpected inference results from Flan-T5 XXL converted to ctranslate2 with version 4.2.1 and 4.1.1 (using tensor parallel)

Open gk-kd opened this issue 9 months ago • 4 comments

I'm using the off-the-shelf Flan-T5 XXL model in our project, and for deployment we converted it to CTranslate2 format with the following command: ct2-transformers-converter --model ~/input_folder/ --output_dir ~/flant5_ct2/
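(For context, the converter also accepts a --quantization option, so an explicit-precision variant of the same command would look like the sketch below; the paths are my placeholders from above, and bfloat16 here is only illustrative, not what I actually passed.)

```
# Hypothetical variant of the conversion with an explicit precision;
# --quantization is a real converter flag (int8, float16, bfloat16, ...).
ct2-transformers-converter --model ~/input_folder/ --output_dir ~/flant5_ct2/ --quantization bfloat16
```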

Now I'm hosting the model as a gRPC server, creating the translator in tensor-parallel mode: ctranslate2.Translator(checkpoint_path, device="cuda", tensor_parallel=True)

I started the server with mpirun with 2 processes so that tensor parallelism kicks in. This works well and the model is loaded evenly across the 2 GPUs: mpirun -n 2 python model_server.py
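A minimal sketch of what the server does at load and inference time (model_server.py here is a simplification; the tokenizer name and prompt handling are illustrative, not my actual server code):

```python
# model_server.py -- minimal sketch, launched as: mpirun -n 2 python model_server.py
import ctranslate2
import transformers

checkpoint_path = "~/flant5_ct2/"  # output of the conversion step above

# With tensor_parallel=True, each MPI process loads its shard of the model.
translator = ctranslate2.Translator(
    checkpoint_path, device="cuda", tensor_parallel=True
)

# Flan-T5 is an encoder-decoder, so CTranslate2 uses the Translator API:
# tokenize with the original Hugging Face tokenizer and pass string tokens.
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-xxl")
tokens = tokenizer.convert_ids_to_tokens(
    tokenizer.encode("Who is president of united states?")
)

results = translator.translate_batch([tokens])
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
# Note: under MPI, output handling on non-master ranks may need extra care
# (see the CTranslate2 parallelism docs).
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```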

Now when I run inference, it returns the following response to my prompt ("Who is president of united states?"): "<pad><pad><pad><pad><pad><pad>"

This strange behaviour happens only with ctranslate2==4.2.1.

Any suggestions on how to fix this would be really helpful.

gk-kd avatar May 02 '24 04:05 gk-kd

Do you have the same behavior with ctranslate2 4.1.1?

minhthuc2502 avatar May 02 '24 08:05 minhthuc2502

> Do you have the same behavior with ctranslate2 4.1.1?

No, it works fine with 4.1.1, but the results differ between "with tensor parallel" and "without tensor parallel". I saw that some tensor-parallel bugs were fixed in 4.2.0, so I tried upgrading, but ran into this different issue.

Btw, the response looks like this:

<pad><pad><pad><pad><pad><pad>

I tried different quantization types like bfloat16, float16, etc., but nothing seems to work.
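(For what it's worth, precision can be set either at conversion time or at load time; a sketch of the load-time variant, assuming the same checkpoint path as above:)

```python
# compute_type is a real Translator parameter; "bfloat16" here mirrors
# one of the precisions I tried -- the checkpoint path is from earlier.
translator = ctranslate2.Translator(
    checkpoint_path,
    device="cuda",
    tensor_parallel=True,
    compute_type="bfloat16",
)
```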

gk-kd avatar May 02 '24 10:05 gk-kd

I also experienced an issue with the 4.2.1 Translator. Inference with 4.2.1 produced poor results; I didn't inspect the output itself, but my metrics dropped to zero. This didn't happen with 4.1.1 or 3.24.0.

I thought about reconverting my models with the 4.2.1 converter (I used the 3.24.0 converter to generate the Translators I'm using), but haven't had time to do it yet.

anterart avatar May 06 '24 15:05 anterart

I am also seeing this regression for all variants of Flan-T5 (base, large, XL): the model just outputs <pad> repeatedly. We convert with bfloat16, since it is a known issue that T5 misbehaves at any other precision. We reverted back to 3.24.1. We perform inference without tensor parallelism, on a single GPU.
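(As a stopgap, the workaround is just pinning the package to the last release that worked for us; a sketch, assuming a pip-based environment:)

```
# Revert to the last known-good release until the regression is fixed.
pip install ctranslate2==3.24.1
```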

kkoehncke avatar May 06 '24 18:05 kkoehncke