
Caching for generation

Open · murbard opened this issue 3 years ago · 1 comment

Currently, generation recomputes every activation each time a token is appended to the prompt. Normally, one would cache the intermediate key/value activations to avoid recomputing them at every step. This doesn't compose as cleanly with the forward function, but that's precisely why a clean and simple implementation should be part of minGPT. It's surprising that PyTorch's native TransformerEncoder module doesn't offer this either.
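To make the idea concrete, here is a minimal sketch of what such a cache could look like for a minGPT-style causal self-attention layer. The names (`CachedSelfAttention`, `past_kv`) are illustrative and not part of minGPT; the point is that when generating one token at a time, you only compute Q/K/V for the new token and concatenate the new K/V onto the cached ones:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CachedSelfAttention(nn.Module):
    """Causal self-attention with an optional key/value cache (illustrative sketch)."""

    def __init__(self, n_embd, n_head):
        super().__init__()
        assert n_embd % n_head == 0
        self.n_head = n_head
        self.qkv = nn.Linear(n_embd, 3 * n_embd)
        self.proj = nn.Linear(n_embd, n_embd)

    def forward(self, x, past_kv=None):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # reshape to (B, n_head, T, head_dim)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        if past_kv is not None:
            # prepend the cached keys/values along the time axis
            pk, pv = past_kv
            k = torch.cat([pk, k], dim=2)
            v = torch.cat([pv, v], dim=2)
        att = (q @ k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
        if T > 1:
            # causal mask is only needed when more than one new token is processed;
            # a single new token may attend to everything already in the cache
            total = k.size(2)
            mask = torch.tril(torch.ones(total, total, dtype=torch.bool))[-T:]
            att = att.masked_fill(~mask, float("-inf"))
        y = F.softmax(att, dim=-1) @ v
        y = y.transpose(1, 2).contiguous().view(B, T, C)
        # return the updated cache alongside the output
        return self.proj(y), (k, v)
```

Usage: run the full prompt once to fill the cache, then feed only the newest token on each subsequent step. Incremental outputs should match the full recomputation up to floating-point tolerance:

```python
torch.manual_seed(0)
attn = CachedSelfAttention(8, 2).eval()
x = torch.randn(1, 4, 8)
full, _ = attn(x)                      # recompute everything (current behavior)
kv, outs = None, []
for t in range(4):                     # cached, one token at a time
    o, kv = attn(x[:, t:t + 1], past_kv=kv)
    outs.append(o)
assert torch.allclose(full, torch.cat(outs, dim=1), atol=1e-5)
```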

murbard · Dec 27 '22

agree, a good todo item

karpathy · Dec 27 '22