
Max token input window & completion output

Open ganymedenet opened this issue 1 year ago • 3 comments

Good day,

Model: 7B

  • What is the maximum token input window?
  • How can I set or limit the number of completion tokens?

No matter how many prompt_tokens I supply, I always receive 16 completion tokens back.

   "usage": {
      "prompt_tokens": 191,
      "completion_tokens": 16,
      "total_tokens": 207
   }

ganymedenet avatar Aug 24 '23 20:08 ganymedenet

Based on what I've tried, at least with my server in a Docker container, the maximum number of completion tokens I get back is 100. To change the completion length, you need to add a `max_tokens` parameter to your request.

https://platform.openai.com/docs/api-reference/chat/create#max_tokens
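As a sketch of what that request looks like, here is a minimal payload for the OpenAI-compatible chat completions endpoint with `max_tokens` set. The model name and the endpoint port are assumptions (adjust them to whatever your llama-gpt deployment reports):

```python
import json

# OpenAI-style chat completion request. The key point from this thread:
# include "max_tokens" (plural), otherwise the server falls back to a
# small default completion length (16 tokens in the report above).
payload = {
    "model": "llama-2-7b-chat",  # hypothetical model id; use your server's
    "messages": [
        {"role": "user", "content": "Explain what max_tokens does."}
    ],
    "max_tokens": 256,  # upper bound on completion tokens returned
}

body = json.dumps(payload)
print(body)
```

You would then POST `body` to something like `http://localhost:3001/v1/chat/completions` with a `Content-Type: application/json` header (the port depends on how you started the container).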

Anoerak avatar Aug 27 '23 15:08 Anoerak

Thank you!

> Based on what I've tried, at least with my server in a Docker container, the maximum number of completion tokens I get back is 100. To change the completion length, you need to add a `max_tokens` parameter to your request.
>
> https://platform.openai.com/docs/api-reference/chat/create#max_tokens

Thank you! It works.

ganymedenet avatar Aug 29 '23 21:08 ganymedenet

Nice :) You can close the issue I guess.

Anoerak avatar Aug 29 '23 22:08 Anoerak