
Update README table

Open rohan-varma opened this issue 1 year ago • 1 comments

Context

Can't reproduce the 24.1GB peak memory listed for the single-device full finetune; instead I'm seeing 20.2GB peak memory allocated (and similar for reserved), so I'm updating this table.

To reproduce, I forced all inputs to seq_len=2048:

# Overwrite each batch with fixed-length dummy tokens so every step
# hits the maximum sequence length and peak memory is deterministic
input_ids = torch.zeros((input_ids.shape[0], 2048), dtype=torch.long)
labels = torch.zeros((labels.shape[0], 2048), dtype=torch.long)
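The idea behind the patch above can be sketched without torch as a minimal pure-Python illustration; the helper name and pad value here are hypothetical, not part of torchtune:

```python
def force_fixed_seq_len(batch, seq_len=2048, pad_id=0):
    """Replace every sequence in the batch with a fixed-length run of pad_id,
    mirroring the torch.zeros trick used to make memory measurements
    independent of the sampled sequence lengths (hypothetical helper)."""
    return [[pad_id] * seq_len for _ in batch]

# Ragged input batch becomes uniform, worst-case-length sequences
batch = [[5, 7, 9], [1, 2]]
fixed = force_fixed_seq_len(batch, seq_len=8)
```

Because attention activations scale with sequence length, pinning every batch to the maximum length is what makes the peak-memory numbers comparable across runs.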

And ran with batch_size=4:

tune run full_finetune_single_device --config recipes/configs/llama2/7B_full_low_memory.yaml batch_size=4
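For reference, peak memory figures like 20.2GB are bytes-to-GiB conversions of the allocator's peak counter. A minimal sketch of that conversion (the helper name and the byte count are illustrative assumptions, not values from this run):

```python
def to_gib(num_bytes):
    """Convert a byte count (e.g. from a peak-memory counter) to GiB
    (hypothetical helper for illustration)."""
    return num_bytes / 1024**3

# An illustrative peak allocation of ~21.69e9 bytes reports as ~20.2 GiB
peak_gib = to_gib(21_689_843_712)
```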

rohan-varma avatar May 10 '24 20:05 rohan-varma

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/961

Note: Links to docs will display an error until the docs builds have been completed.

:white_check_mark: No Failures

As of commit 6f46cc414d5604c879a36288092cd607911f54cc with merge base 30c75d4a735af31391a1a0ceb529b63936bcb134: :green_heart: Looks good so far! There are no failures yet. :green_heart:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot[bot] avatar May 10 '24 20:05 pytorch-bot[bot]

I think we need to regenerate these numbers in general, which we can probably do through an automated process. Closing this for now.

joecummings avatar Jul 31 '24 14:07 joecummings