
OpenAI supports `top_p = 0.0` and `top_p = 1.0`, but TGI fails with a validation error for either value.

Open michael-newsrx opened this issue 1 year ago • 4 comments

System Info

  • docker image: ghcr.io/huggingface/text-generation-inference:2.0.2
  • docker image: ghcr.io/huggingface/text-generation-inference:2.1.1

Information

  • [X] Docker
  • [ ] The CLI directly

Tasks

  • [X] An officially supported command
  • [ ] My own modifications

Reproduction

Fails

    from huggingface_hub import InferenceEndpoint
    import openai

    # inference_endpoint1() and hf_bearer_token() are the reporter's own helpers.
    ep1: InferenceEndpoint = inference_endpoint1()
    while ep1.status != "running":
        if ep1.status == "failed":
            raise RuntimeError(f"Failed to create inference endpoint: {ep1.name}")
        ep1.wait(timeout=1)

    client = openai.OpenAI(
        base_url=ep1.url + "/v1",
        api_key=hf_bearer_token(),
    )

    # print(f"Available models: {client.models.list()}")
    role_system = {"role": "system", "content": "I am an evil robot overlord."}
    role_user = {"role": "user", "content": "What is your command? Be very succinct."}
    chat_completion = client.chat.completions.create(
        model="tgi",
        messages=[role_system, role_user],
        stream=True,
        max_tokens=1024,
        temperature=0.0,
        top_p=1.0,  # rejected by TGI with a validation error
    )

Works

    from huggingface_hub import InferenceEndpoint
    import openai

    # inference_endpoint1() and hf_bearer_token() are the reporter's own helpers.
    ep1: InferenceEndpoint = inference_endpoint1()
    while ep1.status != "running":
        if ep1.status == "failed":
            raise RuntimeError(f"Failed to create inference endpoint: {ep1.name}")
        ep1.wait(timeout=1)

    client = openai.OpenAI(
        base_url=ep1.url + "/v1",
        api_key=hf_bearer_token(),
    )

    # print(f"Available models: {client.models.list()}")
    role_system = {"role": "system", "content": "I am an evil robot overlord."}
    role_user = {"role": "user", "content": "What is your command? Be very succinct."}
    chat_completion = client.chat.completions.create(
        model="tgi",
        messages=[role_system, role_user],
        stream=True,
        max_tokens=1024,
        temperature=0.0,
        top_p=0.99,  # accepted
    )

Expected behavior

See also: https://github.com/huggingface/text-generation-inference/issues/1896, where the patch did not address this issue even though it was raised as part of that ticket.

Impact

This generally breaks libraries like guidance, where the library is hard-coded to use top_p=1.0 for the OpenAI interface.

michael-newsrx avatar Jul 11 '24 18:07 michael-newsrx

https://github.com/huggingface/text-generation-inference/blob/8511669cb29115bdf0bc2da5328e69d041030996/router/src/validation.rs#L248-L255

If you want to set top_p to 1.0, you can simply send top_p as None, which will result in the default value of 1.0 being applied.

It seems like the equality condition in the code is what causes the error.
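
For illustration, here is a hedged Python paraphrase of the linked Rust validation (a sketch of the described behavior, not the actual TGI source): boundary values are rejected outright, so top_p must lie strictly between 0 and 1, while an absent (None) top_p passes through untouched.

    def validate_top_p(top_p: float | None) -> float | None:
        # Paraphrase of the check in router/src/validation.rs: None is fine
        # (no top-p filtering), but explicit boundary values are rejected.
        if top_p is None:
            return None
        if top_p <= 0.0 or top_p >= 1.0:
            raise ValueError("`top_p` must be > 0.0 and < 1.0")
        return top_p

Under this check, top_p=0.99 passes while top_p=1.0 raises, which matches the Fails/Works reproduction above.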

IQ179 avatar Jul 12 '24 02:07 IQ179

This doesn't resolve the issue.

The Docker container rejects top_p=1.0 on the OpenAI-compatible interface, but an OpenAI-compatible interface should accept top_p=1.0 and not fail.

Is there an easy way to "patch" the container and deploy using the patched version?

> https://github.com/huggingface/text-generation-inference/blob/8511669cb29115bdf0bc2da5328e69d041030996/router/src/validation.rs#L248-L255
>
> If you want to set top_p to 1.0, you can simply send top_p as None, which will result in the default value of 1.0 being applied.
>
> It seems like the equality condition in the code is what causes the error.

See also: https://github.com/guidance-ai/guidance/issues/945

michael-conrad avatar Jul 12 '24 19:07 michael-conrad

Hi @michael-newsrx

Thank you for bringing this to our attention and for making the PR 👍

As far as I can tell, there shouldn't be anything blocking this from getting merged. I'll approve running the CI and can take over merging the PR.

ErikKaum avatar Jul 15 '24 09:07 ErikKaum

https://github.com/huggingface/text-generation-inference/pull/2231

michael-conrad avatar Jul 16 '24 01:07 michael-conrad

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Aug 16 '24 01:08 github-actions[bot]

@ErikKaum I ran into this issue as I switched out API base URLs and suddenly my script broke, as the new API uses TGI, which doesn't allow top_p=1.0. I can work around this, but it would be nice if TGI would allow it, as I don't see why it is not allowed.
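
For anyone hitting the same thing, here is a minimal client-side sketch of that workaround (the helper name is hypothetical): since top_p=1.0 keeps the whole distribution anyway, dropping the parameter is semantically equivalent and avoids TGI's validation error.

    def strip_redundant_top_p(top_p: float | None) -> float | None:
        # Hypothetical helper: top_p=1.0 means "no top-p filtering", so
        # omitting the parameter is equivalent and passes TGI's validation.
        return None if top_p == 1.0 else top_p

    # Usage sketch: only forward the parameter when it survives the check.
    kwargs = {}
    if (tp := strip_redundant_top_p(1.0)) is not None:
        kwargs["top_p"] = tp
    # client.chat.completions.create(model="tgi", messages=..., **kwargs)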

cornzz avatar Sep 03 '24 13:09 cornzz

Hi @cornzz 👋

I understand that it's annoying that it breaks the client. But I think, for now, we're still opting for a clear error vs. silently discarding user input without letting the user know.

But if there's a lot of demand for treating top_p=1.0 the same as no top_p, we're still open to it. One good way to get an indication of demand would be, e.g., thumbs-ups on an issue filed as a feature request.

Hopefully this makes sense to you 👍

ErikKaum avatar Sep 03 '24 13:09 ErikKaum

Depending on the client software, it could result in breakage that prevents a customer from using TGI at all.

michael-conrad avatar Sep 03 '24 14:09 michael-conrad

Hey @ErikKaum, thanks for your quick response! It's not a problem; I was reusing a script, and I am wondering why the authors set top_p at all, since it defaults to 1.

Still, and sorry if I am misunderstanding something, what do you mean by discarding user input? Maybe I am missing something, but why can't the user set top_p to 1.0 manually, while not setting any value for top_p makes it default to 1.0?

cornzz avatar Sep 03 '24 15:09 cornzz

Glad to hear that it's not a problem 👍

> while not setting any value for top_p makes it default to 1.0?

No worries. So I'm pretty sure that by not setting a top_p it defaults to None, and not 1:

This is where it gets processed as an Option<f32> in the router: https://github.com/huggingface/text-generation-inference/blob/6cb42f49ae47a117e8f1bdfcdb5cbe42332dc360/router/src/server.rs#L789

And then on the model-logic side, the branch is conditioned on top_p not being None: https://github.com/huggingface/text-generation-inference/blob/6cb42f49ae47a117e8f1bdfcdb5cbe42332dc360/server/text_generation_server/utils/logits_process.py#L26-L49
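
A simplified sketch of that branching pattern, written against the Hugging Face transformers warpers rather than the exact TGI source: the top-p warper is only added to the chain when top_p is not None, so an unset top_p means "no top-p filtering" rather than an implicit 1.0.

    from transformers import (
        LogitsProcessorList,
        TemperatureLogitsWarper,
        TopPLogitsWarper,
    )

    def build_warpers(temperature=None, top_p=None):
        # Only append a warper when the corresponding parameter was set;
        # an absent (None) top_p skips top-p filtering entirely.
        warpers = LogitsProcessorList()
        if temperature is not None and temperature != 1.0:
            warpers.append(TemperatureLogitsWarper(temperature))
        if top_p is not None:
            warpers.append(TopPLogitsWarper(top_p=top_p))
        return warpers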

There might be something I missed here or misunderstood in your question.

ErikKaum avatar Sep 05 '24 09:09 ErikKaum

Ah okay, sorry, then it was a misunderstanding on my side; I assumed from the comment above that it defaults to 1.0 if it was None.

cornzz avatar Sep 05 '24 13:09 cornzz

I still don't understand why top_p can't be 1.0. Not only is it not OpenAI-API compatible, it is also really counterintuitive given the meaning of the parameter.

zhksh avatar Nov 04 '24 12:11 zhksh