BentoML
feat(build): expose `pip_preheat_packages`
With the addition of #4690, this adds a `docker.pip_preheat_packages` field to `bentofile.yaml`, allowing users to specify a list of dependencies to be preheated in the cache layers to improve build time:
```yaml
docker:
  pip_preheat_packages:
    - vllm==0.4.2
    - lmdeploy
```
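For context, each preheated package presumably becomes its own best-effort cache layer, following the `pip install {} || true` pattern discussed in this thread. An illustrative (not actual) Dockerfile fragment:

```dockerfile
# Illustrative sketch only: one cache layer per preheated package.
# `|| true` makes the preheat best-effort, so a failed install does
# not fail the build; the real install happens in a later layer.
RUN pip install 'vllm==0.4.2' || true
RUN pip install lmdeploy || true
```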
This would be useful for openllm, as openllm locks to a specific vllm version.
@bojiang any comments on this?
This would be a great addition for us. Another issue we have with pre-heating is that it does not play nicely with pip environment markers, e.g.:

```
torch==2.3.0 ; platform_machine!='x86_64'
torch==2.3.0 --index-url https://download.pytorch.org/whl/cpu ; platform_machine=='x86_64'
```

…or with a specific wheel for a given target platform; that syntax throws the direct `pip install {} || true` off.
Would it be possible to generate an intermediary requirements file with the exact same definitions found in the original, and `pip install -r` that instead?
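The intermediary-file idea could be sketched roughly like this: copy the matching lines from the original requirements verbatim, so markers and `--index-url` flags survive intact. The helper name and behavior below are hypothetical, not BentoML's actual implementation:

```python
import re
from pathlib import Path


def write_preheat_requirements(original: str, preheat_names: list[str],
                               out_path: str) -> list[str]:
    """Hypothetical helper: copy requirement lines verbatim (environment
    markers, --index-url flags and all) for the packages selected for
    preheating, so `pip install -r <out_path>` sees the exact same
    definitions as the original requirements file."""
    wanted = {n.lower() for n in preheat_names}
    selected = []
    for line in original.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        # Package name is everything before the first version specifier,
        # extras bracket, marker separator, or whitespace.
        name = re.split(r"[=<>!~;\[\s]", stripped, maxsplit=1)[0].lower()
        if name in wanted:
            selected.append(stripped)
    Path(out_path).write_text("\n".join(selected) + "\n")
    return selected
```

A line like `torch==2.3.0 --index-url ... ; platform_machine=='x86_64'` is then passed through untouched, rather than being mangled into a bare `pip install torch` call.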
Alternatively, having the option of deactivating pre-heating entirely would alleviate the issue somewhat.
@eledhwen The issue should be fixed by https://github.com/bentoml/BentoML/pull/4737. Thanks for the contribution.
What packages would you want to cover if we supported a custom `pip_preheat_packages`?
@bojiang this is different, right? #4737 doesn't support custom packages; it just detects torch or vllm from our predefined list, right?
I guess we do parse the Python packages for vllm and torch, but it will be pretty hard for us to manage additional packages in the future. Should we let users have control over this?