[Core] BatchLLM for better shared-prefix utilization in offline scenarios
Please take a look at the BatchLLM RFC for more details.
cc @WoosukKwon @comaniac for the next step.
👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can do one of these:
- Add the ready label to the PR
- Enable auto-merge
🚀
Just made some changes after pre-commit. To enable BatchLLM, pass --enable_ahead_of_prefix_clustering (just like enable_chunked_prefill).
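For reference, a minimal sketch of what enabling this might look like from Python. The keyword argument enable_ahead_of_prefix_clustering below is an assumption based on how enable_chunked_prefill is forwarded through EngineArgs; this comment only names the CLI flag:

from vllm import LLM, SamplingParams

# Assumed kwarg: mirrors the CLI flag --enable_ahead_of_prefix_clustering,
# analogous to how enable_chunked_prefill is exposed on LLM/EngineArgs.
llm = LLM(
    model="facebook/opt-1.3b",
    enable_ahead_of_prefix_clustering=True,
)
outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.8, top_p=0.95),
)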
Hello, I'm trying to run BatchLLM with this code (I have set enable_ahead_of_prefix_clustering manually):
from vllm import LLM, SamplingParams

prefix = "Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body’s immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance.\nExplain the above in "
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    prefix + "one sentence:",
    prefix + "two sentence:",
]
# Pre-tokenized prompts (facebook/opt-1.3b tokenizer); the last two share
# the long antibiotics prefix.
prompt_id = [
    [2, 31414, 6, 127, 766, 16],
    [2, 133, 394, 9, 5, 315, 532, 16],
    [2, 18348, 1452, 34339, 32, 10, 1907, 9, 8456, 341, 7, 3951, 25738, 11341, 4, 252, 173, 30, 1169, 2429, 5, 9436, 50, 9107, 106, 31, 37209, 11162, 6, 2455, 5, 809, 17, 27, 29, 9161, 467, 7, 1032, 160, 5, 7910, 4, 3702, 1452, 34339, 32, 2333, 551, 43016, 11, 5, 1026, 9, 13866, 6, 34589, 6, 50, 6936, 2643, 6, 50, 2128, 16556, 38553, 9412, 4, 252, 32, 45, 2375, 136, 7696, 11341, 6, 8, 634, 106, 27281, 64, 483, 7, 25465, 5910, 4, 50118, 43043, 1851, 5, 1065, 11, 65, 3645, 35],
    [2, 18348, 1452, 34339, 32, 10, 1907, 9, 8456, 341, 7, 3951, 25738, 11341, 4, 252, 173, 30, 1169, 2429, 5, 9436, 50, 9107, 106, 31, 37209, 11162, 6, 2455, 5, 809, 17, 27, 29, 9161, 467, 7, 1032, 160, 5, 7910, 4, 3702, 1452, 34339, 32, 2333, 551, 43016, 11, 5, 1026, 9, 13866, 6, 34589, 6, 50, 6936, 2643, 6, 50, 2128, 16556, 38553, 9412, 4, 252, 32, 45, 2375, 136, 7696, 11341, 6, 8, 634, 106, 27281, 64, 483, 7, 25465, 5910, 4, 50118, 43043, 1851, 5, 1065, 11, 80, 3645, 35],
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, stop=",")
llm = LLM(
    model="facebook/opt-1.3b",
    gpu_memory_utilization=0.15,
    max_num_seqs=2,
)
# outputs = llm.generate(prompts, sampling_params)
outputs = llm.generate(
    prompt_token_ids=prompt_id, sampling_params=sampling_params
)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
    print(output)
but I got this error:
It seems the prefix's KV cache (block_table[prefix_seq]) is freed while the non-prefix sequences (the latter halves of the third and fourth prompts) are still being inferred. BTW, vllm/inputs/prefix_clustering.py line 240: if prefix is None: => if prefix is None or prefix == []:
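Spelled out, the suggested change is the one-line guard below; only the condition itself is quoted above, so everything around it is a sketch:

# vllm/inputs/prefix_clustering.py, around line 240 (sketch).
# Before (as reported):
#     if prefix is None:
# After: also treat an empty token-id list as "no shared prefix", so
# prompts without a common prefix skip the clustering path.
if prefix is None or prefix == []:
    ...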
I found that free_seq is called twice when a sequence finishes, so I removed the last lines of the function _process_sequence_group_outputs in vllm/engine/output_processor/single_step.py:
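The removed lines are not quoted here; as a hedged illustration of the double free being described, assuming the scheduler also releases finished sequences on its own pass, the shape of the change might be:

# Illustrative sketch only, not the actual single_step.py source.
def _process_sequence_group_outputs(self, seq_group, outputs):
    ...
    # Finished sequences were freed here even though the scheduler frees
    # them again later, so the shared prefix's KV-cache blocks
    # (block_table[prefix_seq]) could be released while other sequences
    # in the cluster still referenced them.
    for seq in seq_group.get_seqs():
        if seq.is_finished():
            self.scheduler.free_seq(seq)  # <- the trailing free that was removed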
But then I got a new error:
@zhaocaibei123 Thanks for the report! We'll take a look.
cc @fangtaosong
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @xinji1.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!
