[Small LLM] Max tokens fixed at 128?
The reference implementation appears to fix the maximum number of output tokens to 128.
Despite this, the reference scores:
{
'rouge1': 38.7792,
'rouge2': 15.9075,
'rougeL': 24.4957,
'rougeLsum': 35.793,
'gen_len': 8167644,
'gen_num': 13368,
}
imply that the average number of output tokens is ~611 (= gen_len / gen_num).
What's going on?
The gen_len here is not tokens but characters, I believe. Llama2-70B computes gen_tok_len, which is then used to compute the tokens per sample.
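For context, here's a minimal sketch of how such a token-based metric can be computed; the function name, the tokenizer checkpoint, and the re-tokenization approach are my own illustration, and the actual Llama2-70B evaluation script may instead derive gen_tok_len directly from the generated token IDs:
import numpy as np
from transformers import AutoTokenizer
# Illustrative sketch only; the real script may count tokens from the logged
# token IDs rather than re-tokenizing the decoded text.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf")
def token_metrics(preds):
    """preds: list of decoded prediction strings."""
    tok_lens = [len(tokenizer(p)["input_ids"]) for p in preds]
    char_lens = [len(p) for p in preds]
    return {
        "gen_len": int(np.sum(char_lens)),    # characters
        "gen_tok_len": int(np.sum(tok_lens)), # tokens
        "gen_num": len(preds),
        "tokens_per_sample": round(sum(tok_lens) / len(preds), 1),
    }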
Adding to this, here's a snapshot of mlperf_log_accuracy.json, which we obtained by running the reference implementation:
{ "seq_id" : 0, "qsl_idx" : 2844, "data" : "33F80000540200000BB000004A2700003D1F0000C2020000582B0000D8E3000067EE0000B0010000BA23000085010000A52E0000540C0000170100004A0300003B01000017010000D00C00000D0000003D1F00000B000000DC000000090300000B000000C20200006F070000DC00000016000000D76A00007101000033F8000054020000A4250000DC0000009B060000710C0000A4010000D00C00000D000000030500003E020000DB2100002B02000033F80000F901000001EE00003911000030010000DC000000A70200001000000071010000AB1B00001100000015E800000D0000003D1F0000C202000003040000B71E000037010000080100009B0F0000831B00001354000006040000BC4B00000D00000003050000B2020000EA05000071010000D8E3000067EE00000B000000790300000F020000DC000000210500002F01000030010000AA230000042400000B000000C00200003B0100001701000086E900006F010000712800002B0200001B1700009B1A0000A90400000D0000003D1F0000C2020000A56900009A03000017010000F00D00000B000000B29300002D030000C96E00004301000037B400000D0000003D1F00003E02000008010000181700003001000033F8000054020000A4250000DC000000CC0300000C090000E36C0000500800001E060000D00C00000D00000003050000B2020000B1050000", "token_count" : 128 },
{ "seq_id" : 1, "qsl_idx" : 7863, "data" : "D002000020000000D51400000B000000C3ED0000A5D00000860C00007C090000D3D800000B000000C202000003040000E6060000064000003B010000A2290000C5020000080100000E7D0000540200003001000015290000F6D700000B0000007C3A00001A050000350400009BB200003E0200007308000025080000ECA10000952B000030010000DC000000A30400006F1000000D000000D3D800000B000000DC000000C60300000B0000002431000008020000FD0C00003E20000083060000300100003725000071010000170100005B930000ED0300003B01000008010000618F00000B0000004B3D00000801000065AC0000042900000B0000004301000042030000E3F7000008010000618F000073040000CB01000016630000F91200000D00000042020000B0290000EE27000085010000801D0000DC000000180000000B000000DC000000A7020000130000000B000000E2030000D3D80000B80F00003701000017010000161A0000241E0000B30A0000430100003E020000F3A8000037010000E1050000350400009BB2000025080000ECA100000B0000002223000008010000B60E00000D0C00003B010000080100009D070000ECA100009BB20000850100003504000024070000C20F00000D000000D3D8000006000000F1360000922E00001701000016180000B80F0000FE030000CC8200000B000000CF020000", "token_count" : 128 },
{ "seq_id" : 2, "qsl_idx" : 10880, "data" : "D002000017030000C4660100DA7300003B0100007B05000069DE0000991D00003001000017010000AF170000560000003A0300000D2C000089270000E72F000008010000699800003001000008010000312200007A380000950C00001701000089270000000B00000D000000DF280000541A00000B000000DC000000360A00000B0000004301000035040000DC00000092060000A312000012190000BA11000012360000570100000B000000B1080000F90100008F6B00000B0000001B040000991D0000E2030000170100000D2C000025880000730400001701000031220000306F01000D000000DF28000054020000742A0000C5BB0000D311000035010000430100002D030000F87800008F4400007D4D00000B00000007B100000B00000043010000571A00000802000017010000699800000D000000C5BB0000D3110000350100002F04000038020000760100001B0200004D4A00009A0300007C040000A51B000043010000392C0000533A000071010000170100002B1A00003B0100001701000038040000100900004F610000AF3400003B010000608D0000EEB50000170100000D2C00000D0000000305000018D7000037010000F50600003B0100001701000089270000B7010000CB0100009D2D000071010000B105000046060000B101000054020000F00800003F660000370100000B3C01002D030000DA730000", "token_count" : 128 },
{ "seq_id" : 3, "qsl_idx" : 10847, "data" : "D0020000944A010054020000D91A0000BF4700000B0000009F0400007C010000950900002A6C00000B000000C2020000D72B000008010000F60100005F0700003B010000A20B0000DAE20000AE0100001D0400008B0800004735000037010000102200003F070000FF0300000D000000420200005F0700000B000000860300001D040000830100002E4F00000C0100002B020000B6130000046100000B0000001D04000083010000080100006C0100008E1000005A9C000006000000E3150000AE0100001D0400009D070000C502000047350000F9010000A3FD000043010000001500007A3000003001000017010000644D0000800700000D000000420200009B6200001D040000EE0500004735000043010000DA070000A0400100C72900003701000010220000170100003A1400003B010000FF030000952200000B000000C1010000B61300000461000083E4000017010000800700000D0000002A6C0000D3670000AE01000017010000E60500005F070000C20200000304000008010000221D000071010000581400000B000000CF0200002F040000B10100003E020000921C0000370100006C010000CB030000B9010000FB0900002D0C00002A0300006A6B000006020000FF6C0000170100006C01000077100000060000003B010000710C01000D00000003050000201B000037010000EE050000CB01000001B30000", "token_count" : 128 },
{ "seq_id" : 4, "qsl_idx" : 11808, "data" : "C84D0000AD370000F54800001701000020220000E60E00003B0100002D030000FA010000E2F90000B32A000064010000A4020000F1120000DD3500002C01000054020000EA0500000B00000042020000946F01003B0100006353000099BE0000DE0100000B00000008020000170100007F1C0000160A0000061500000D00000064010000A4020000F11200000B000000DC0000005B0600000B0000004C19000017010000F70B0000E80D00003B01000045C40000220600001A1B00006D01000043010000C40F0000101B00006021000071010000350400003A1400000D000000C84D0000AD3700000B000000DC000000C60300000B0000006F07000008010000640D0000B9400000AC4F0000CF0200003E020000820F0000A97F00002D1B000017010000EA0500000D000000420200009C160000760100009E060000B7080000850100002B020000D64300003B010000E06500000B000000080100007F1C0000DC17000024870000780700006E4A0000F029000071010000170100004716000085010000AE3701008D5C00000D00000064010000A4020000F112000054020000663D0000A41B0000C202000087C6000017620000E509000035040000CB1A0000F9010000C84D0000AD3700001E060000240400000B0000004301000054050000C20200004D0400004C6A000008010000E80D00003001000017010000C4B20000", "token_count" : 128 },
...
...
token_count is 128 for each of the samples.
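For anyone who wants to reproduce this check, here's a minimal sketch that counts tokens per sample straight from the accuracy log; it assumes the "data" field holds hex-encoded int32 token IDs (as in the other LLM benchmarks), so adjust the dtype if your SUT logs a different format:
import json
import numpy as np
# Sketch: decode each log entry and count its token IDs.
with open("mlperf_log_accuracy.json") as f:
    results = json.load(f)
for entry in results[:5]:
    token_ids = np.frombuffer(bytes.fromhex(entry["data"]), dtype=np.int32)
    print(entry["qsl_idx"], len(token_ids))  # 128 for every sample in our run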
Accuracy:
{'rouge1': '38.8329', 'rouge2': '15.9667', 'rougeL': '24.5374', 'rougeLsum': '35.8886', 'gen_len': np.int64(8182758), 'gen_num': 13368}
The gen_len here is not tokens but characters, I believe. Llama2-70B computes gen_tok_len, which is then used to compute the tokens per sample.
Thanks @attafosu. For Llama2-70B, we calculate both gen_len and gen_tok_len e.g.:
{'rouge1': 44.7466, 'rouge2': 22.3524, 'rougeL': 29.1548, 'rougeLsum': 42.2693, 'gen_len': 26328995, 'gen_num': 24576, 'gen_tok_len': 6677253, 'tokens_per_sample': 271.7}
For Llama3.1-405B, too, e.g.:
{'exact_match': 90.12851091992057, 'rougeL': 21.93672502698354, 'gen_len': 23338435, 'gen_num': 8313, 'gen_tok_len': 5456327, 'tokens_per_sample': 656.4}
For the Small LLM, the character-to-token ratio is ~4.8; for Llama2-70B, it's ~3.9; for Llama3.1-405B, it's ~4.3. That's fine given the difference in the vocabularies used.
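For completeness, these ratios can be read straight off the reported metrics quoted above (a quick check in Python, no assumptions beyond the numbers already posted):
# Characters per token, derived from the accuracy dicts above.
small_llm   = 8_167_644 / (128 * 13_368)  # gen_len is characters; outputs capped at 128 tokens
llama2_70b  = 26_328_995 / 6_677_253      # gen_len / gen_tok_len
llama3_405b = 23_338_435 / 5_456_327
print(round(small_llm, 2), round(llama2_70b, 2), round(llama3_405b, 2))  # 4.77 3.94 4.28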
Perhaps we should introduce gen_tok_len for the Small LLM too for consistency and to avoid confusion in the future.
The main question remains: Why is the maximum number of output tokens fixed at 128? From what we see, the model "wants" to "say" more in practically every case, but it's prevented from doing so. This is not very realistic for the summarization task.
Not to mention that this removes one of the typical optimization challenges: both the input length and the output length being randomly distributed.
@psyhtest Valid point on the OSL (output sequence length) distribution. IIRC, one of the reasons was that without finetuning the 8B was quite verbose, which is evident from most of the generated outputs being 128 tokens (in reality the lengths should vary, with some lower and of course others higher). But I think the actual decision of max 128 tokens was mistakenly borrowed from GPT-J (which is limited in max sequence length), and we overlooked the ground truth output length distribution (attached below). Given that the submission deadline is very close, we may have to bring this discussion to the WG to see if we need some revision. From the summary, about 4% of the ground truth lengths are > 128.
Ground truth output sequence length summary:
count 13368
mean 72.040171
std 32.064774
min 14
50% 67
90% 107
95% 123
96% 127
97% 133
99.9% 208.266000
max 1893.000000
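For reference, a sketch of how this kind of summary can be reproduced; the dataset split (CNN/DailyMail validation) and the tokenizer checkpoint are my assumptions, so swap in whatever the reference implementation actually uses:
import pandas as pd
from datasets import load_dataset
from transformers import AutoTokenizer
# Assumptions: CNN/DailyMail validation split as the ground truth and the
# Llama 3.1 8B Instruct tokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
ds = load_dataset("cnn_dailymail", "3.0.0", split="validation")
lengths = pd.Series([len(tokenizer(ref)["input_ids"]) for ref in ds["highlights"]])
print(lengths.describe(percentiles=[0.5, 0.9, 0.95, 0.96, 0.97, 0.999]))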
Clarification: gen_len is CHARACTER count, not TOKEN count
I've analyzed the code and can clarify the confusion. There is no discrepancy: the model correctly respects the 128-token limit.
Key Finding
In language/llama3.1-8b/evaluation.py (lines 127-129):
prediction_lens = [len(pred) for pred in preds] # len() on STRING = characters
result["gen_len"] = np.sum(prediction_lens) # Sum of CHARACTER counts
result["gen_num"] = len(preds) # Number of samples
gen_len counts characters, not tokens. The len() function on a decoded string returns character count.
Math Verification
Your metrics:
- gen_len: 8,167,644
- gen_num: 13,368
- Average: 611 characters per sample
Character-to-token ratio for this model: ~4.8 chars/token
- 128 tokens × 4.8 = ~614 characters
Perfect match! The model generates ≤128 tokens, which decode to ~611 characters on average.
Code Confirmation
SUT_VLLM.py line 75:
"max_tokens": 128, # Hard limit enforced
Recommendation
To prevent future confusion, consider adding clarifying metrics:
result["gen_len_chars"] = np.sum(prediction_lens) # Explicit: characters
result["avg_chars_per_sample"] = result["gen_len_chars"] / result["gen_num"]
# Optional: estimate tokens (model-specific ratio)
result["est_avg_tokens_per_sample"] = result["avg_chars_per_sample"] / 4.8
The 128-token limit is working correctly. The confusion stems from metric naming: gen_len suggests tokens but actually counts characters.
From the summary, about 4% of the ground truth lengths are > 128
It looks like close to 100% of the generated lengths are > 128, i.e. the model is much more chatty than the ground truth! Maybe the prompt should have asked for as concise a summary as possible. I think we should consider modifying it for the next round.
WG Meeting: Fix in 6.0.
Just to add on: initially, during the taskforce, we'd observed that without the 128-token limit we were getting much poorer accuracy, despite the system prompt asking the model to be succinct.
Perhaps this might be helpful?
If you haven't tried "threatening" LLMs in system prompts, then you should!
That LinkedIn post about "threatening" LLMs is hilarious but actually makes sense! 😄
Given that Llama 3 is being so chatty (close to 100% of outputs > 128 tokens vs ~4% in the ground truth), maybe we do need to get a bit more... assertive with our prompts.
Instead of politely asking "please be concise", something like:
- "Summary MUST be under 50 words. Exceeding this limit will result in immediate rejection."
- "You have a strict 3-sentence limit. Every word counts."
It's funny how models respond better to consequences than kindness, just like they picked up on internet drama during training!
@taran2210's observation that the hard 128-token cutoff actually improves accuracy supports this too. Maybe for v6.0, combine both approaches: stern prompt + token limit?
Hi @psyhtest, one request that came up from the WG is whether you can generate some statistics of the generated output sequence lengths when the max output tokens is increased beyond 128. The goal is to see whether there's going to be some variation in the output lengths or whether they will skew towards the max tokens (the increased value), as seen in the 128 case.
@sahelib25 Can we try please with the reference with max tokens set to e.g. 256?
Hi, when running the dataset with 128, 256, 1K, and 2K max tokens, the model consistently generated outputs of exactly the maximum length with no variation. With 4K max tokens, we start seeing variations in the output lengths.
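For reference, here's a sketch of the kind of sweep that produces these numbers; the prompt loading is a placeholder (load_benchmark_prompts is hypothetical), and the model and sampling settings are illustrative rather than copied from the reference implementation:
from vllm import LLM, SamplingParams
# Sketch of the sweep: for each cap, report the mean output length and how
# many outputs hit the cap exactly (i.e. were truncated rather than stopping).
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
prompts = load_benchmark_prompts()  # hypothetical helper; use the reference preprocessing
for cap in (128, 256, 1024, 2048, 4096):
    params = SamplingParams(temperature=0.0, max_tokens=cap)
    outputs = llm.generate(prompts, params)
    lens = [len(o.outputs[0].token_ids) for o in outputs]
    at_cap = sum(l == cap for l in lens)
    print(f"max_tokens={cap}: mean={sum(lens)/len(lens):.1f}, at_cap={at_cap}/{len(lens)}")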
@psyhtest @attafosu: let us know if you'd prefer to have the discussion at the start of the WG meeting. I've seen both of you join over the past two weeks, but Anton had to drop off due to the late hour in Europe. Hopefully we can resolve this offline, but if it's easier to talk during the first 15 minutes of the meeting, please let Miro and me know.
WG Meeting: @psyhtest to try out a few prompts and report