Update README table
Context
Can't reproduce the 24.1GB figure for a single-device full finetune; I'm instead seeing 20.2GB peak memory allocated (and similar for reserved), so this updates the table.
To reproduce, I ensured all inputs are seq_len=2048:

```python
input_ids = torch.zeros((input_ids.shape[0], 2048), dtype=torch.long)
labels = torch.zeros((labels.shape[0], 2048), dtype=torch.long)
```
And ran with batch_size=4:

```bash
tune run full_finetune_single_device --config recipes/configs/llama2/7B_full_low_memory.yaml batch_size=4
```
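For context, here is a minimal sketch of how fixed-length dummy batches like the ones above can be built and how the peak memory stats can be read back. The helper name `make_fixed_len_batch` is mine, not from the recipe; `torch.cuda.max_memory_allocated` / `max_memory_reserved` only report meaningful values on a CUDA device after a training step:

```python
import torch

def make_fixed_len_batch(batch_size: int, seq_len: int = 2048):
    # All-zero input_ids/labels padded to a fixed seq_len, mirroring the
    # reproduction setup above (hypothetical helper, not part of torchtune).
    input_ids = torch.zeros((batch_size, seq_len), dtype=torch.long)
    labels = torch.zeros((batch_size, seq_len), dtype=torch.long)
    return input_ids, labels

input_ids, labels = make_fixed_len_batch(batch_size=4)
print(input_ids.shape)  # torch.Size([4, 2048])

if torch.cuda.is_available():
    # Peak memory stats since the last reset; divide by 1024**3 for GiB.
    peak_alloc = torch.cuda.max_memory_allocated() / 1024**3
    peak_reserved = torch.cuda.max_memory_reserved() / 1024**3
    print(f"peak allocated: {peak_alloc:.1f} GiB, reserved: {peak_reserved:.1f} GiB")
```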
:white_check_mark: No Failures
As of commit 6f46cc414d5604c879a36288092cd607911f54cc with merge base 30c75d4a735af31391a1a0ceb529b63936bcb134:
:green_heart: Looks good so far! There are no failures yet. :green_heart:
I think we need to get new numbers for this in general; we can probably do that through an automated process. Closing this for now.