
feat(build): expose `pip_preheat_packages`

Open · aarnphm opened this issue 9 months ago · 5 comments

With the addition of #4690, add a field `docker.pip_preheat_packages` to `bentofile.yaml`, allowing users to specify a list of dependencies to be preheated in the cache layers to improve build time:

docker:
  pip_preheat_packages:
    - vllm==0.4.2
    - lmdeploy

This would be useful for openllm, since openllm locks to a specific vllm version.
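
For context, a minimal sketch of the Dockerfile layers this could generate, assuming the same `pip install {} || true` pattern the current pre-heating uses (the base image and file names here are illustrative, not BentoML's actual template):

FROM python:3.11-slim

# One cached layer per preheat package; `|| true` keeps the build going
# even if a package fails to resolve at this stage.
RUN pip install vllm==0.4.2 || true
RUN pip install lmdeploy || true

# The full dependency set is installed afterwards; packages already
# preheated at a matching version are skipped by pip.
COPY requirements.txt ./
RUN pip install -r requirements.txt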

aarnphm · May 09 '24 22:05

@bojiang any comments on this?

aarnphm · May 14 '24 02:05

This would be a great addition for us. Another issue we have with pre-heating is that it does not play nicely with pip environment markers, e.g.

torch==2.3.0 ; platform_machine!='x86_64'
torch==2.3.0 --index-url https://download.pytorch.org/whl/cpu ; platform_machine=='x86_64'

...or with a specific wheel for a given target platform; that syntax throws off the direct `pip install {} || true`.

Would it be possible to generate an intermediary requirements file with the exact same definitions found in the original, and `pip install -r` that instead?
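
A rough sketch of that idea, assuming the builder copies the matching lines (markers, index URLs and all) verbatim into a hypothetical preheat-requirements.txt so that pip itself evaluates them:

# preheat-requirements.txt is hypothetical: generated from the original
# requirements so markers and --index-url options are preserved as written.
COPY preheat-requirements.txt ./
RUN pip install -r preheat-requirements.txt || true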

Alternatively, having the option of deactivating pre-heating entirely would alleviate the issue somewhat.

eledhwen · May 15 '24 16:05

@eledhwen The issue should be fixed by https://github.com/bentoml/BentoML/pull/4737. Thanks for the contribution.

bojiang · May 16 '24 09:05

Which packages do you want to cover if we supported custom `pip_preheat_packages`?

bojiang · May 16 '24 09:05

@bojiang this is different, right? #4737 doesn't support custom packages; it just finds torch or vllm among our predefined packages, right?

I guess we do parse the Python packages for vllm and torch. But it will be pretty hard for us to manage additional packages in the future. Should we let users have control over this?

aarnphm · May 16 '24 21:05