
How to handle the token limitation for a LLM response?

Open phoenixthinker opened this issue 2 years ago • 2 comments

Hi,

When the LLM generates an answer longer than 512 tokens, the program starts printing warnings like these:

```
WARNING:ctransformers:Number of tokens (513) exceeded maximum context length (512)
WARNING:ctransformers:Number of tokens (514) exceeded maximum context length (512)
...
```

In my use case, for example, I am using a 7B LLM for a Q&A application, and the LLM response (the generated answer) is always longer than 512 tokens. Can anyone suggest a solution or show some simple code to handle this problem? Thanks.

phoenixthinker avatar Oct 16 '23 06:10 phoenixthinker

```python
config = {
    'max_new_tokens': 2048,
    'context_length': 8192,  # <------ Solved by adding this line
    'repetition_penalty': 1.1,
    'temperature': 0.1,
    'top_k': 50,
    'top_p': 0.9,
    'stream': True,  # streaming per word/token
    'threads': int(os.cpu_count() / 2),  # adjust for your CPU
}
```

phoenixthinker avatar Oct 16 '23 09:10 phoenixthinker
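For context: `context_length` caps the total window (prompt plus generated tokens), while `max_new_tokens` caps only the generated part, so both need to be large enough for long answers; the 512 in the warnings is the context size in effect when `context_length` isn't raised. A minimal sketch of how the config above can be consumed, assuming the plain ctransformers API (which accepts these options as keyword arguments to `from_pretrained`; the model and file names are just examples):

```python
import os

from ctransformers import AutoModelForCausalLM

config = {
    'max_new_tokens': 2048,
    'context_length': 8192,  # raise the window so long answers fit
    'repetition_penalty': 1.1,
    'temperature': 0.1,
    'top_k': 50,
    'top_p': 0.9,
    'stream': True,
    'threads': int(os.cpu_count() / 2),
}

# Example model; substitute your own GGUF repo/file.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",
    model_type="llama",
    **config,  # each key becomes a ctransformers config option
)

# With 'stream': True the call returns a generator of tokens.
for token in llm("Explain context_length vs max_new_tokens."):
    print(token, end="", flush=True)
```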

```python
config = {
    'max_new_tokens': 2048,
    'context_length': 8192,  # <------ Solved by adding this line
    'repetition_penalty': 1.1,
    'temperature': 0.1,
    'top_k': 50,
    'top_p': 0.9,
    'stream': True,  # streaming per word/token
    'threads': int(os.cpu_count() / 2),  # adjust for your CPU
}
```

Did you modify it like this?

```python
from ctransformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/openchat_3.5-GGUF",
    model_file="openchat_3.5.Q5_K_M.gguf",
    model_type="mistral",
    gpu_layers=0,
    max_new_tokens=1024,
    context_length=8192,
)
```

How do you pass these args?

Thanks

hdnh2006 avatar Dec 22 '23 10:12 hdnh2006
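For anyone landing here later: the config-dict shape quoted above matches what LangChain's community `CTransformers` wrapper expects, so one plausible answer, as a hedged sketch (the thread never names the framework, and the model names below are examples), is to hand the whole dict to the wrapper's `config` parameter:

```python
import os

from langchain_community.llms import CTransformers

config = {
    'max_new_tokens': 2048,
    'context_length': 8192,
    'repetition_penalty': 1.1,
    'temperature': 0.1,
    'top_k': 50,
    'top_p': 0.9,
    'stream': True,
    'threads': int(os.cpu_count() / 2),
}

# The wrapper forwards `config` to ctransformers; with the plain
# ctransformers API the same keys go to from_pretrained as **kwargs instead.
llm = CTransformers(
    model="TheBloke/openchat_3.5-GGUF",
    model_file="openchat_3.5.Q5_K_M.gguf",
    model_type="mistral",
    config=config,
)

print(llm.invoke("How long can a generated answer be?"))
```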