FastChat
Should the inference code be improved with a sliding window?
Currently the prompt is cut to max_src_len so that prompt + new tokens stays under context_len. For a long prompt this drops a large chunk from the front, including the system prompt.
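For concreteness, here is how the cut-off works out with the default arguments (the 2000-token prompt length is a hypothetical number for illustration):

    context_len = 2048
    max_new_tokens = 256
    max_src_len = context_len - max_new_tokens - 8  # = 1784

    prompt_len = 2000                    # hypothetical long prompt
    dropped = prompt_len - max_src_len   # = 216 tokens cut from the front
    # input_ids[-max_src_len:] keeps only the last 1784 tokens, so the
    # first 216 tokens, where the system prompt usually sits, are lost.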
Could it instead use a sliding window that rolls over the prompt from the beginning, keeping past_key_values for the most recent context_len - 1 input_ids whenever needed? Then more of the prompt could be attended to.
    import torch


    def generate_stream(model, tokenizer, params, device,
                        context_len=2048, stream_interval=2):
        prompt = params["prompt"]
        l_prompt = len(prompt)
        temperature = float(params.get("temperature", 1.0))
        max_new_tokens = int(params.get("max_new_tokens", 256))
        stop_str = params.get("stop", None)
        if stop_str == tokenizer.eos_token:
            stop_str = None

        input_ids = tokenizer(prompt).input_ids
        output_ids = list(input_ids)

        # Truncate the prompt from the front so that prompt + generated
        # tokens fit within the context window (with 8 tokens of headroom).
        max_src_len = context_len - max_new_tokens - 8
        input_ids = input_ids[-max_src_len:]

        for i in range(max_new_tokens):
            if i == 0:
                # Prefill: run the whole (truncated) prompt and cache
                # the attention keys/values.
                out = model(
                    torch.as_tensor([input_ids], device=device),
                    use_cache=True)
                logits = out.logits
                past_key_values = out.past_key_values
            else:
                # Decode: feed only the previously sampled token (`token`
                # comes from the sampling code omitted in this excerpt)
                # and reuse the cached keys/values.
                out = model(
                    input_ids=torch.as_tensor([[token]], device=device),
                    use_cache=True,
                    past_key_values=past_key_values)
                logits = out.logits
                past_key_values = out.past_key_values
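Below is a minimal sketch of the proposed trimming step, assuming the legacy Hugging Face cache layout where past_key_values is a tuple of per-layer (key, value) tensors of shape (batch, num_heads, seq_len, head_dim); the helper name trim_past_key_values is hypothetical. One caveat: naively dropping the oldest positions shifts everything relative to the position embeddings the model was trained with, so for RoPE or absolute-position models this only approximates full-context attention.

    def trim_past_key_values(past_key_values, max_len):
        # Hypothetical helper: keep only the most recent max_len positions
        # in every layer's (key, value) cache.
        return tuple(
            (k[:, :, -max_len:, :], v[:, :, -max_len:, :])
            for k, v in past_key_values
        )

    # Inside the decode loop, before feeding the next token, the cache
    # could be capped so that cache + 1 new token never exceeds context_len:
    #
    #     if past_key_values[0][0].shape[2] >= context_len - 1:
    #         past_key_values = trim_past_key_values(
    #             past_key_values, context_len - 1)

With such a cap in place, the initial input_ids[-max_src_len:] cut could be relaxed, since the cache itself enforces the window.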
@qZhang88: Yes, your suggestion is reasonable and you can implement it. Contributions are welcome.