lean_transformer
Inference / LeanGPT.generate
This is a master discussion for memory-efficient inference; further notes will be added shortly.
Current quest stage: add a dummy cache that is passed to all attention layers
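As a starting point, a dummy cache can expose the cache interface without storing anything, so every attention layer simply recomputes over the full sequence. A minimal sketch, assuming a hypothetical `update`/`get_seq_length` API (the actual lean_transformer attention signature may differ):

```python
class DummyKVCache:
    """No-op key/value cache: stores nothing, so each attention layer
    recomputes keys/values for the whole sequence at every step.
    Hypothetical interface -- not the real lean_transformer API."""

    def update(self, layer_idx, key, value):
        # A real cache would append past keys/values for this layer and
        # return the concatenation; the dummy returns the inputs unchanged.
        return key, value

    def get_seq_length(self):
        # Nothing is ever cached, so the past length is always zero.
        return 0


# Usage: the cache is a pass-through, whatever the key/value objects are.
cache = DummyKVCache()
k, v = cache.update(layer_idx=0, key="keys", value="values")
print(k, v, cache.get_seq_length())
```

Because `update` is a pass-through, this object can be threaded through all attention layers now, and later replaced by a real cache that concatenates past keys/values without changing the call sites.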
Ideally, this should be available as a `.generate` method on `LeanGPTForPreTraining`: https://github.com/learning-at-home/lean_transformer/blob/main/lean_transformer/models/gpt.py#L184
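The generation loop itself can be sketched independently of the model internals: create a cache, repeatedly ask the model for next-token logits, and pick the argmax (greedy decoding). Everything below is an assumption about the eventual interface, not the real `LeanGPTForPreTraining` API; the toy model stands in for a forward pass that returns per-vocab logits for the last position.

```python
class DummyKVCache:
    """No-op cache placeholder (hypothetical interface)."""

    def update(self, layer_idx, key, value):
        return key, value


def generate(model, input_ids, max_new_tokens, eos_token_id=None):
    """Greedy-decoding sketch for a hypothetical .generate method.

    `model(tokens, cache)` is assumed to return a list of logits over
    the vocabulary for the next token; the real call signature and
    cache wiring in lean_transformer may differ.
    """
    cache = DummyKVCache()
    tokens = list(input_ids)
    for _ in range(max_new_tokens):
        logits = model(tokens, cache)  # next-token logits
        next_token = max(range(len(logits)), key=logits.__getitem__)
        tokens.append(next_token)
        if eos_token_id is not None and next_token == eos_token_id:
            break
    return tokens


def toy_model(tokens, cache):
    # Toy stand-in for the forward pass: always predicts
    # (last token + 1) mod 5 with probability 1.
    logits = [0.0] * 5
    logits[(tokens[-1] + 1) % 5] = 1.0
    return logits


print(generate(toy_model, [0], max_new_tokens=3))  # [0, 1, 2, 3]
```

Once a real key/value cache exists, only `DummyKVCache` and the model call need to change; the loop structure (and an eventual sampling strategy in place of argmax) stays the same.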