
How to support different models with different tensor_para_size?

Open · TopIdiot opened this issue 2 years ago · 29 comments

I have 4 GPUs and 3 models called small, medium and large. I want to deploy the small model on GPU 0, the medium model on GPU 1, and the large model on GPU 2 and GPU 3 with tensor_para_size=2, because the large model is too big to fit on a single GPU.

However, the instance_group can only be KIND_CPU, so I cannot control the placement that way.

Is there any way to handle this?

TopIdiot avatar Nov 04 '22 06:11 TopIdiot
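
For context, in the fastertransformer_backend examples the GPU split is controlled per model by tensor_para_size in config.pbtxt, while instance_group stays KIND_CPU. Below is a rough sketch of just those fields; the repository path and values are illustrative, and a real config needs many more entries:

```bash
# Sketch only: write the parallelism-related fields of a hypothetical
# "large" model's config.pbtxt. A complete config also needs inputs,
# outputs, data type, checkpoint path, etc.
cat > /models/large/fastertransformer/config.pbtxt <<'EOF'
name: "fastertransformer"
backend: "fastertransformer"
instance_group [
  {
    count: 1
    kind: KIND_CPU   # the backend drives the GPUs itself
  }
]
parameters {
  key: "tensor_para_size"
  value: { string_value: "2" }   # shard this model across 2 GPUs
}
parameters {
  key: "pipeline_para_size"
  value: { string_value: "1" }
}
EOF
```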

Besides, I tried 'mpirun -n 1 /opt/tritonserver/bin/tritonserver' three times with different CUDA_VISIBLE_DEVICES, server ports and model repositories. However, that doesn't work; the processes were blocked when loading models.

TopIdiot avatar Nov 04 '22 06:11 TopIdiot

You should launch three tritonservers: the first one uses CUDA_VISIBLE_DEVICES=0, the second one uses CUDA_VISIBLE_DEVICES=1, and the third one uses CUDA_VISIBLE_DEVICES=2,3. They may need different configurations and different names.

byshiue avatar Nov 04 '22 06:11 byshiue
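
A rough sketch of what that can look like, following the mpirun pattern already used above; the ports, repository paths and GPU mapping are illustrative:

```bash
# Illustrative launch commands, assuming three separate model repositories
# (/models/small, /models/medium, /models/large) already exist.

# small model on GPU 0
CUDA_VISIBLE_DEVICES=0 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/small \
    --http-port=8000 --grpc-port=8001 --metrics-port=8002 &

# medium model on GPU 1
CUDA_VISIBLE_DEVICES=1 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/medium \
    --http-port=8010 --grpc-port=8011 --metrics-port=8012 &

# large model on GPUs 2 and 3 (tensor_para_size=2 in its config.pbtxt)
CUDA_VISIBLE_DEVICES=2,3 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/large \
    --http-port=8020 --grpc-port=8021 --metrics-port=8022 &
```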

@byshiue I did so, but it still doesn't work. I use supervisord to run tritonserver; CUDA_VISIBLE_DEVICES is set in each program's environment section.

Here is the medium model output. The first time: image

After the process broke down, supervisord started it again.

The second time: image And then it blocked.

However, nvidia-smi shows the model is loaded: image

image

TopIdiot avatar Nov 04 '22 07:11 TopIdiot

I cannot see the results of the first time. Can you post them again?

byshiue avatar Nov 04 '22 07:11 byshiue

@byshiue I am sorry, I placed "the second time" in the wrong place. It is fixed now.

TopIdiot avatar Nov 04 '22 07:11 TopIdiot

@byshiue From the log, it seems like only one process can load its model and the others get blocked. But the one that does load its models doesn't work either.

TopIdiot avatar Nov 04 '22 07:11 TopIdiot

The error is "PTX compiled with an unsupported toolchain", so you are not loading any model successfully. Which docker version do you use?

byshiue avatar Nov 04 '22 07:11 byshiue

@byshiue Docker version 20.10.21

TopIdiot avatar Nov 04 '22 07:11 TopIdiot

@byshiue But when there is only one tritonserver, it works fine.

TopIdiot avatar Nov 04 '22 07:11 TopIdiot

Can you post your results one by one? What happens when you launch the first one, and what happens when you launch the second one?

From the screenshot you posted, the first launch fails.

And which docker image do you use?

byshiue avatar Nov 04 '22 07:11 byshiue

Here is my supervisord config: image

In production, I use 3 models: medium (on GPU 2), large (on GPU 3) and xl (on GPUs 0 and 1).

The medium model log. First time: image Second time: image

The large model log: image image

The xl model log: image

TopIdiot avatar Nov 04 '22 08:11 TopIdiot
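
Since the supervisord screenshot is not reproduced here, a minimal sketch of what such a setup typically looks like, written as a shell here-doc; program names, ports and repository paths are made up, and the GPU assignment mirrors the production layout described above:

```bash
# Illustrative supervisord entries, one program per tritonserver,
# each pinned to its own GPUs, ports and model repository.
cat > /etc/supervisor/conf.d/triton.conf <<'EOF'
[program:triton_medium]
command=mpirun -n 1 /opt/tritonserver/bin/tritonserver --model-repository=/models/medium --http-port=8000 --grpc-port=8001 --metrics-port=8002
environment=CUDA_VISIBLE_DEVICES="2"
autorestart=true

[program:triton_large]
command=mpirun -n 1 /opt/tritonserver/bin/tritonserver --model-repository=/models/large --http-port=8010 --grpc-port=8011 --metrics-port=8012
environment=CUDA_VISIBLE_DEVICES="3"
autorestart=true

[program:triton_xl]
command=mpirun -n 1 /opt/tritonserver/bin/tritonserver --model-repository=/models/xl --http-port=8020 --grpc-port=8021 --metrics-port=8022
environment=CUDA_VISIBLE_DEVICES="0,1"
autorestart=true
EOF

# Reload supervisord so it picks up the new programs
supervisorctl reread && supervisorctl update
```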

Sorry, can you refine your format? It is too chaotic to read now.

byshiue avatar Nov 04 '22 08:11 byshiue

@byshiue Sorry, I have reformatted it.

TopIdiot avatar Nov 04 '22 08:11 TopIdiot

What is the meaning of "second time" for the medium log? Did you re-launch it because the first attempt crashed, and did the second attempt work? Did you check that you have cleaned up all the old processes?

What happens when you only launch one server at a time for these three models?

byshiue avatar Nov 04 '22 08:11 byshiue

@byshiue Yes. After the medium model broke down the first time, supervisord restarted it automatically; the second time seemed fine at the beginning, but then it blocked.

I also tried this shell script: image The processes were also blocked.

TopIdiot avatar Nov 04 '22 08:11 TopIdiot

Can you try to start only one model each time for these three cases?

byshiue avatar Nov 04 '22 08:11 byshiue

@byshiue Do you mean that I should start the three models one by one?

TopIdiot avatar Nov 04 '22 08:11 TopIdiot

Yes.

byshiue avatar Nov 04 '22 08:11 byshiue

@byshiue The first model is working fine: image

but when I run the second one, it blocks: image

And there is a Z (zombie) process. I don't know if it matters. image

TopIdiot avatar Nov 04 '22 08:11 TopIdiot

I mean launch only one process at a time. When you launch the second server, you should kill the first one.

byshiue avatar Nov 04 '22 08:11 byshiue

@byshiue Under that condition, all models work fine.

TopIdiot avatar Nov 04 '22 08:11 TopIdiot

@byshiue It seems that if /opt/tritonserver/backends/python/triton_python_backend_stub is still running, the new tritonserver always gets blocked. If I kill it, the new tritonserver works fine.

TopIdiot avatar Nov 04 '22 09:11 TopIdiot
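
One way to check for and clean up leftover Python backend stub processes before relaunching (a plain process-name match; adjust it if other Python backends are running):

```bash
# List any leftover Python backend stub processes from a previous tritonserver run
pgrep -af triton_python_backend_stub

# If the old server is gone but its stubs survived, remove them before relaunching
pkill -f triton_python_backend_stub
```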

Can you try adding verbose logging, like tritonserver --log-verbose 1 --model-repository=<your_model>?

byshiue avatar Nov 04 '22 09:11 byshiue

@byshiue image

The second model blocked at this point.

TopIdiot avatar Nov 04 '22 09:11 TopIdiot

Can you try launching only the fastertransformer model, excluding the pre/post processing models?

byshiue avatar Nov 04 '22 09:11 byshiue
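
One way to do that without editing the model repository is Triton's explicit model control; the repository path and GPU are illustrative, and the model name fastertransformer follows the repo's example layout:

```bash
# Load only the fastertransformer model; the ensemble and the Python
# pre/post-processing models stay unloaded.
CUDA_VISIBLE_DEVICES=2 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/medium \
    --model-control-mode=explicit \
    --load-model=fastertransformer
```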

@byshiue Now all the processes start, but I don't know why. The pre/post processing code is based on https://github.com/triton-inference-server/fastertransformer_backend/tree/main/all_models/gptneox; the only thing I changed is the tokenizer, to use my own.

TopIdiot avatar Nov 04 '22 09:11 TopIdiot

Can you launch the server with the original pre/post processing?

byshiue avatar Nov 04 '22 09:11 byshiue

@byshiue Yes, it works... but I don't know why. My only change was to use huggingface transformers.T5Tokenizer to replace the original tokenizer.

TopIdiot avatar Nov 04 '22 09:11 TopIdiot

@TopIdiot @byshiue Hi there. I have the same problem when I use multiple triton servers to load different models on different GPUs. Any update on this issue? The tokenizer is huggingface's tokenizer (AutoTokenizer) and the model is bloom. In my case all models get loaded onto the GPUs, but when I send a grpc request, triton and the log just get stuck and show nothing.

calico-niko avatar Jun 15 '23 11:06 calico-niko
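
When the models show as loaded but requests appear to hang, it can help to first confirm the server itself is responsive; the commands below assume the default HTTP port 8000 and an illustrative model name:

```bash
# Triton's standard HTTP health/readiness endpoints
curl -sf localhost:8000/v2/health/live  && echo "server is live"
curl -sf localhost:8000/v2/health/ready && echo "server is ready"

# Per-model readiness (model name is illustrative)
curl -sf localhost:8000/v2/models/fastertransformer/ready && echo "model is ready"
```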