beam search support

Open leiwen83 opened this issue 2 years ago • 7 comments

Feature request

Beam search is a useful feature provided by the transformers library, but it seems to be missing from TGI. Would it be supported?

Motivation

Beam search would be helpful for improving response quality.

Your contribution

I'd give it a try if this feature were implemented.

leiwen83 avatar Jul 28 '23 08:07 leiwen83

Hi @leiwen83

Indeed, beam search is not implemented. However, we have a different algorithm which seems to work just as well or even better:

best_of, which takes the best of n potential sampled replies: https://github.com/huggingface/text-generation-inference/issues/736#issuecomment-1658791383

Is that option what you are looking for? It seems to perform better with current LLMs, where sampling beats greedy decoding for most answers.
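
For reference, a minimal sketch of such a request against a running TGI instance (the address, prompt, and parameter values are only placeholders; the endpoint and parameter names follow the curl example further down this thread):

curl 127.0.0.1:8080 \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":256, "do_sample": true, "best_of": 4}}' \
    -H 'Content-Type: application/json'

The server samples best_of candidates for the same prompt and returns the best-scoring one; as far as I can tell from the API, the other candidates are exposed under details.best_of_sequences when "details": true is passed.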

Narsil avatar Jul 31 '23 17:07 Narsil

I vote for beam search. When using paged attention, beam search can share a single prefill operation and save computation on long prompts.

jiguanglizipao avatar Aug 01 '23 11:08 jiguanglizipao

@jiguanglizipao I agree with you; it seems that the "best_of" argument does not provide good results. Moreover, in the case of my model, using "do_sample" leads to unwanted results.

Quang-elec44 avatar Aug 24 '23 02:08 Quang-elec44

Would be great to have. best_of is nice but way too slow. With best_of=1 I get time_per_token="92.055402ms"; with best_of=2 I get time_per_token="307.8662ms".

PawelFaron avatar Sep 18 '23 19:09 PawelFaron

Beam search is much worse than best_of performance-wise.

The timings you show here are surprisingly different. How did you measure them (model, hardware, where did you get the timing information from)?

Narsil avatar Sep 19 '23 07:09 Narsil

@Narsil Thanks for your response. You are probably right; I'm just sharing my observations so far.

The timing is from the Docker container itself; it prints it after generating text. More about my setup: I'm using the g2-standard-4 instance from GCP, which has a T4 GPU.

Starting the Docker container like this:

model=meta-llama/Llama-2-13b-chat-hf
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=$token

docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token \
    -p 4000:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:1.0.3 \
    --model-id $model --quantize bitsandbytes-nf4 \
    --max-input-length=4095 --max-total-tokens=4096 --trust-remote-code

Testing with that:

curl 127.0.0.1:4000 \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":2048, "temperature": 0.8, "best_of": 2, "do_sample": true}}' \
    -H 'Content-Type: application/json'
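
(If the server is not running in the foreground, the time_per_token value can also be pulled from the container logs with something like the following; tgi-server is only a placeholder and assumes the container was started with --name, otherwise use its ID.)

docker logs tgi-server 2>&1 | grep time_per_token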

PawelFaron avatar Sep 19 '23 13:09 PawelFaron

Oh I see, bnb-nf4 is just super slow on anything above batch_size=1.

It has nothing to do with best_of.
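
If it helps to confirm that, here is a sketch of the same setup without the quantization flag. This assumes a GPU with enough memory for the unquantized weights; Llama-2-13b in fp16 will not fit on a 16 GB T4, so a smaller checkpoint or a larger GPU would be needed:

docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token \
    -p 4000:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:1.0.3 \
    --model-id $model --max-input-length=4095 --max-total-tokens=4096 --trust-remote-code

Repeating the same best_of=1 and best_of=2 requests against this server should show whether the slowdown disappears once bitsandbytes-nf4 is out of the picture.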

Narsil avatar Sep 19 '23 15:09 Narsil

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Apr 19 '24 01:04 github-actions[bot]