
Create a table of results for our supported checkpoints

carmocca opened this issue 2 years ago · 0 comments

We support a large number of checkpoints, and there are many scripts that can be run against them.

Users often ask questions like "can I run script X with model Y given Z GB of memory?" or "is (script, model) combination X faster than combination Y?"

The idea is to collect this data in a Markdown table that we can point to when answering these questions.

The data should always be collected on the same machine (our 8xA100 node). Some scripts will need to specify the hyperparameters used. We can start with a subset of the checkpoints.
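Taking a single (script, model) measurement could look something like the sketch below. `generate_fn` is a hypothetical stand-in for invoking one of the actual scripts, not a real litgpt API; on GPU, one would additionally record `torch.cuda.max_memory_allocated()` to fill in the memory column.

```python
import time

def measure_tokens_per_sec(generate_fn, num_tokens):
    """Time a generation call and return throughput in tokens/sec."""
    start = time.perf_counter()
    generate_fn(num_tokens)
    elapsed = time.perf_counter() - start
    return num_tokens / elapsed

# Dummy stand-in that just burns a little wall-clock time; a real
# measurement would call into generate/base.py or similar.
def fake_generate(num_tokens):
    time.sleep(0.01)

throughput = measure_tokens_per_sec(fake_generate, 100)
```

In practice, the first call should be discarded as a warmup (compilation and cache effects), and the reported number should be an average over several runs.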

For example:

generate/base.py --precision bf16-true

| Model | tokens/sec | Memory (GB) |
|---|---|---|
| pythia-6.9b | ... | ... |
| falcon-7b | ... | ... |
| stablelm-base-alpha-7b | ... | ... |
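Assembling the collected numbers into the proposed Markdown table could be automated with a small helper like this sketch; the function name and the example rows are illustrative, not part of litgpt.

```python
def results_table(headers, rows):
    """Render header names and row tuples as a GitHub-flavored Markdown table."""
    lines = ["| " + " | ".join(headers) + " |"]
    lines.append("|" + "|".join("---" for _ in headers) + "|")
    for row in rows:
        lines.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return "\n".join(lines)

# Placeholder values; a benchmark run would supply the real numbers.
table = results_table(
    ["Model", "tokens/sec", "Memory (GB)"],
    [("pythia-6.9b", "...", "..."), ("falcon-7b", "...", "...")],
)
print(table)
```

Emitting the table from the benchmark script itself would keep the published numbers in sync with the measurement code.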

carmocca · Jun 14 '23 02:06