[V1][Metrics] Add API for accessing in-memory Prometheus metrics
The V0 LLM offline inference API exposes per-request metrics via RequestOutput.metrics (a RequestMetrics object). In V1, so far we have chosen not to track per-request metrics or implement this API.
All recent work implementing EAGLE has been using examples/offline_inference/eagle.py, which depends on these metrics to report an aggregated mean acceptance length number.
See e.g. the EAGLE3 PR #16937, which used a WIP implementation of the per-request metrics from #16367.
The proposal in this PR is to achieve the same aggregated view by reusing the Prometheus metrics already implemented for the online serving case. This means we automatically gain the new spec decoding metrics from #16665 for both offline and online inference.
This does not preclude us from implementing per-request metrics in V1 in the future if that proves to be important.
See also the spec decoding metrics design doc.
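To make the proposal concrete, here is a rough sketch of what offline access to the aggregated metrics could look like; the get_metrics() entry point, the disable_log_stats=False requirement, and the model name are assumptions based on the direction of this PR rather than a settled API.

# Rough sketch only: get_metrics() and disable_log_stats=False are
# assumptions based on the direction of this PR, not a finalized API.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
          disable_log_stats=False)
llm.generate(["The capital of France is"], SamplingParams(max_tokens=32))

# Aggregated in-memory Prometheus metrics, instead of the per-request
# RequestOutput metrics exposed by V0.
for metric in llm.get_metrics():
    print(metric)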
👋 Hi! Thank you for contributing to the vLLM project.
💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.
🚀
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @markmc.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
@markmc Can you please provide an example of how to compute acceptance length from the retrieved metrics in examples/offline_inference/eagle.py? is it just
acceptance_length = 1 + (
    metrics.get_value('vllm:spec_decode_num_accepted_tokens') /
    metrics.get_value('vllm:spec_decode_num_drafts')
)
additionally, I'm wondering why vllm:spec_decode_num_accepted_tokens_per_pos is a counter instead of a vector? how is it defined?
thanks again for the PR!
All good questions, @luyuzhe111. In fact, I had already rebased onto main, tried to use num_accepted_tokens_per_pos via the API and saw this deficiency!
Try the new version. I've adopted your suggestion of adding a Vector abstraction as well :+1:
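For reference, a minimal sketch of what the computation asked about above might look like against the updated API; get_metrics() and the .name / .value / .values attributes on the Counter and Vector types are assumptions, not a confirmed interface.

# Minimal sketch; metric names follow the discussion above, but the
# accessors (.name, .value, .values) are assumptions. Assumes an existing
# LLM instance `llm` that has already run generation.
num_drafts = 0
num_accepted = 0
per_pos = None
for metric in llm.get_metrics():
    if metric.name == "vllm:spec_decode_num_drafts":
        num_drafts += metric.value       # Counter
    elif metric.name == "vllm:spec_decode_num_accepted_tokens":
        num_accepted += metric.value     # Counter
    elif metric.name == "vllm:spec_decode_num_accepted_tokens_per_pos":
        per_pos = metric.values          # Vector: one count per draft position

print(f"mean acceptance length: {num_accepted / num_drafts:.2f}")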
Hi @markmc, appreciate the fast turn-around! the new version works like a charm. the only request is to add plus 1 to the mean acceptance length since one token will always be accepted. so mean acceptance length is essentially "average number of tokens generated per forward pass". cc @LiuXiaoxuanPKU
the only request is to add plus 1 to the mean acceptance length since one token will always be accepted. so mean acceptance length is essentially "average number of tokens generated per forward pass".
I don't think of the bonus/recovered token as "accepted", particularly in the context of the acceptance rate calculation - the proportion of drafts (speculated tokens) that are accepted
let's take the example from here
num_spec_tokens = 3
drafts:
- #1: 3 accepted
- #2: 1 accepted
- #3: 2 accepted
- #4: 2 accepted
- #5: 1 accepted
observe:
- num_drafts = 5
- num_draft_tokens = 15
- num_accepted_tokens = 9
- accepted_tokens_per_pos = [5, 3, 1]
compute:
- acceptance_rate = 9/15 = 0.6
- mean_acceptance_length = 1.8
- acceptance_probs_per_pos = [1.0, 0.6, 0.2]
You want:
compute:
- acceptance_rate = 9/15 = 0.6
- mean_acceptance_length = 2.8
- acceptance_probs_per_pos = [1.0, 1.0, 0.6, 0.2]
Why? Got any references to show this being common practice? Thanks.
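For concreteness, a small self-contained sketch that reproduces both conventions from the five-draft example above (purely illustrative, not code from the PR):

# Reproduce both conventions from the example above.
accepted_per_draft = [3, 1, 2, 2, 1]                  # accepted tokens per draft
num_spec_tokens = 3

num_drafts = len(accepted_per_draft)                  # 5
num_draft_tokens = num_drafts * num_spec_tokens       # 15
num_accepted = sum(accepted_per_draft)                # 9

acceptance_rate = num_accepted / num_draft_tokens     # 0.6
mean_acceptance_length = num_accepted / num_drafts    # 1.8 (without bonus token)
with_bonus_token = 1 + mean_acceptance_length         # 2.8 (counting the bonus token)

accepted_per_pos = [sum(a > pos for a in accepted_per_draft)
                    for pos in range(num_spec_tokens)]                  # [5, 3, 1]
acceptance_probs_per_pos = [c / num_drafts for c in accepted_per_pos]   # [1.0, 0.6, 0.2]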
Hi @markmc, as far as I know, all speculative decoding literature reporting acceptance length includes the bonus token, since this quantity aligns with "number of tokens generated per forward pass". The mean acceptance lengths reported in all EAGLE papers (EAGLE-1, EAGLE-2, EAGLE-3) include the +1 bonus token.
alternatively, maybe it's worth keeping the acceptance rate metrics as is but adding another metric for "number of tokens generated per forward pass"?
Originally I was only suggesting that in examples/offline_inference/eagle.py we do
print(f"mean acceptance length: {1 + num_accepted / num_drafts:.2f}")
instead of
print(f"mean acceptance length: {num_accepted / num_drafts:.2f}")
since we have also been reporting the former quantity in various places, such as here.
Hi @markmc, as far as I know, all speculative decoding literature reporting acceptance length includes the bonus token, since this quantity aligns with "number of tokens generated per forward pass".
Ok, see #17908. Thanks!
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @markmc.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
I've pushed an update that I'm not super happy with
To handle the case of DP where we have multiple sets of metrics identified by engine_idx, I've had to do some nasty consolidation of Histogram and Vector data based on label sets. This will also allow us to expand in future by adding other labels.
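Very roughly, the consolidation amounts to summing per-engine samples that share the same label set once the engine index label is dropped; a hypothetical sketch (not the actual code in this PR):

# Hypothetical sketch of merging per-engine counter samples by label set,
# dropping the engine index label; helper name and label key are made up.
from collections import defaultdict

def consolidate(samples, engine_label="engine"):
    """samples: iterable of (labels: dict, value: float) across all engines."""
    totals = defaultdict(float)
    for labels, value in samples:
        key = frozenset((k, v) for k, v in labels.items() if k != engine_label)
        totals[key] += value
    return totals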
@markmc Is this PR waiting for review? Or is it in progress?
@markmc Is this PR waiting for review? Or is it in progress?
It is waiting for review
LGTM, @markmc could you just double check if the CI failure is related so that we can merge this PR?
LGTM, @markmc could you just double check if the CI failure is related so that we can merge this PR?
Yes, AFAICT all of these failures are happening on other PRs too
@markmc Can you please merge from main again?
@markmc Can you please merge from main again?
Done. I don't think the rebase resolves any of the test failures, but I could be wrong
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @markmc.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Ok, the docs failure was a genuine, but hard-to-spot, issue with the PR:
vllm/docs/source/serving/engine_args.md:14: ERROR: Failed to import "_engine_args_parser" from "vllm.engine.arg_utils".
No module named 'prometheus_client'