text-generation-inference
beam search support
Feature request
Beam search is a useful feature provided by the transformers library, but it seems to be missing in TGI. Will it be supported?
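(For reference, a minimal sketch of the requested behaviour as it exists in the transformers library today; the model name and generation settings below are just placeholders.)

```python
# Minimal sketch of beam search in the transformers library, i.e. the feature
# being requested for TGI. "gpt2" and the generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is Deep Learning?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=4,          # keep 4 candidate beams at every decoding step
    early_stopping=True,  # stop once num_beams complete candidates are found
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```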
Motivation
Beam search would help improve response quality.
Your contribution
I'd give it a try if this feature is implemented.
Hi @leiwen83
Indeed, beam search is not implemented; however, we have a different algorithm which seems to work just as well or even better:
best_of, which takes the best of n potential sampled replies: https://github.com/huggingface/text-generation-inference/issues/736#issuecomment-1658791383
Is that option what you're looking for? It seems to perform better with current LLMs, where sampling beats greedy decoding for most answers.
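(As far as I understand, best_of=n samples n candidate completions and keeps the one with the highest overall log-probability. Below is a rough client-side approximation of that idea using transformers; it is not TGI's actual implementation, and the model and settings are placeholders.)

```python
# Rough approximation of best_of=n: sample n completions and keep the one with
# the highest total log-probability. NOT TGI's actual implementation, just an
# illustration of the idea. The simple sum below ignores length normalization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is Deep Learning?", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    num_return_sequences=2,  # the "n" in best_of=n
    max_new_tokens=64,
    output_scores=True,
    return_dict_in_generate=True,
)

# Score each sampled sequence by the sum of its token log-probabilities.
transition_scores = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)
seq_scores = transition_scores.sum(dim=-1)
best = out.sequences[torch.argmax(seq_scores)]
print(tokenizer.decode(best, skip_special_tokens=True))
```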
I vote for beam search. When using PagedAttention, beam search can share a single prefill operation and save computation with long prompts.
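(To make the shared-prefill argument concrete, here is a toy sketch, not actual TGI or vLLM code, of how a paged KV cache could let every beam reference the same physical prompt blocks, so the expensive prefill is computed only once.)

```python
# Toy illustration: with a paged KV cache, all beams can share the physical
# blocks produced by a single prefill pass over the prompt; only the small
# per-beam decode blocks differ. Block size is arbitrary for the example.
BLOCK_SIZE = 16  # tokens per KV-cache block

def prompt_block_ids(prompt_len: int) -> list[int]:
    """Pretend these physical blocks were filled by a single prefill pass."""
    n_blocks = (prompt_len + BLOCK_SIZE - 1) // BLOCK_SIZE
    return list(range(n_blocks))

def beam_block_tables(prompt_len: int, num_beams: int) -> list[list[int]]:
    """Each beam's block table starts with the *shared* prompt blocks and
    appends its own private block for newly decoded tokens."""
    shared = prompt_block_ids(prompt_len)
    tables = []
    next_free = len(shared)
    for _ in range(num_beams):
        tables.append(shared + [next_free])  # shared prefix + private block
        next_free += 1
    return tables

# A 40-token prompt with 3 beams: blocks 0-2 are computed once and shared,
# and each beam only adds one private block instead of copying the prompt cache.
print(beam_block_tables(40, 3))
```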
@jiguanglizipao I agree with you; it seems that the "best_of" argument does not provide good results. Moreover, in the case of my model, using "do_sample" leads to unwanted results.
Would be great to have. best_of is great but way too slow. With best_of=1 I get time_per_token="92.055402ms"; with best_of=2 I get time_per_token="307.8662ms" (roughly 3.3x slower per token).
Beam search is much worse than best_of performance-wise.
The timings you show here are surprisingly different. How did you measure (model, hardware, where did you get the timing information from)?
@Narsil Thanks for your response. You are probably right; I'm just sharing my observations so far.
The timing is from the Docker container itself; it prints it after generating text. More about my setup: I'm using the g2-standard-4 instance from GCP with a T4 GPU.
Starting the Docker container like this:
model=meta-llama/Llama-2-13b-chat-hf
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=$token
docker run --gpus all --shm-size 1g \
  -e HUGGING_FACE_HUB_TOKEN=$token \
  -p 4000:80 -v $volume:/data \
  ghcr.io/huggingface/text-generation-inference:1.0.3 \
  --model-id $model --quantize bitsandbytes-nf4 \
  --max-input-length=4095 --max-total-tokens=4096 --trust-remote-code
Testing with that:
curl 127.0.0.1:4000 \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":2048, "temperature": 0.8, "best_of": 2, "do_sample": true}}' \
-H 'Content-Type: application/json'
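(For what it's worth, a rough client-side cross-check of those numbers can be done with plain Python; the endpoint and parameters below simply mirror the curl call above.)

```python
# Rough client-side latency check mirroring the curl request above.
# The measured time includes network and queueing overhead, so expect it to be
# a bit higher than the time_per_token reported in the container logs.
import time
import requests

payload = {
    "inputs": "What is Deep Learning?",
    "parameters": {
        "max_new_tokens": 2048,
        "temperature": 0.8,
        "best_of": 2,
        "do_sample": True,
    },
}

start = time.perf_counter()
resp = requests.post("http://127.0.0.1:4000", json=payload, timeout=600)
elapsed = time.perf_counter() - start
resp.raise_for_status()

data = resp.json()
if isinstance(data, list):  # the root route may wrap the result in a list
    data = data[0]
print(f"generated {len(data['generated_text'])} characters in {elapsed:.1f}s")
```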
Oh, I see: bnb-nf4 is just super slow on anything above batch_size=1, and best_of=2 decodes two sequences at once (an effective batch of 2).
It has nothing to do with best_of itself.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.