BentoML
                                
                        The easiest way to serve AI apps and models - Build reliable Inference APIs, LLM apps, Multi-model chains, RAG service, and much more!
### Feature request The default scheduling strategy implementation schedules the same number of runner (`nvidia.com/gpu` supported) instances as the number of available GPUs. If multiple types of runners are present...
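To illustrate the scheduling problem, a naive strategy that splits the available GPUs across runner types round-robin (so the total instance count never exceeds the GPU count) might look like the sketch below. The function name and shape are hypothetical, not BentoML's actual scheduler API:

```python
def allocate_gpus(runner_names, n_gpus):
    """Split n_gpus across runner types round-robin.

    Returns a mapping of runner name -> list of GPU ids, so that the
    total number of scheduled instances never exceeds the available
    GPUs. (Hypothetical sketch, not BentoML's scheduling strategy.)
    """
    assignment = {name: [] for name in runner_names}
    for gpu_id in range(n_gpus):
        name = runner_names[gpu_id % len(runner_names)]
        assignment[name].append(gpu_id)
    return assignment

# Two runner types sharing four GPUs: each gets two instances
# instead of four each, which would over-subscribe the devices.
print(allocate_gpus(["runner_a", "runner_b"], 4))
# → {'runner_a': [0, 2], 'runner_b': [1, 3]}
```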
### Feature request It would be great to add native support for DICOM inputs; there are a lot of ML applications in medical imaging nowadays. Currently you would need to...
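For background, a DICOM (Part 10) file begins with a 128-byte preamble followed by the magic bytes `DICM`, so any input adapter for this format would at minimum need to recognize that layout. A stdlib-only sketch (`is_dicom` is a hypothetical helper, not a BentoML API):

```python
import io

def is_dicom(stream):
    """Return True if the stream looks like a DICOM Part 10 file:
    a 128-byte preamble followed by the b'DICM' magic marker."""
    stream.seek(128)
    return stream.read(4) == b"DICM"

# Synthetic file: zeroed preamble plus magic
# (real files carry data elements and pixel data after this).
fake = io.BytesIO(b"\x00" * 128 + b"DICM")
print(is_dicom(fake))  # True
```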
### Feature request External modules are currently not pickled with the model by default. ### Motivation _No response_ ### Other _No response_
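The underlying reason is that Python's `pickle` serializes functions and classes by reference (module path plus qualified name), not by value, so code from external modules is never embedded in the pickle stream. A quick stdlib demonstration, using `json.dumps` to stand in for a function from an external helper module:

```python
import json
import pickle

# Pickling a module-level function stores only a reference to it.
payload = pickle.dumps(json.dumps)

# The stream contains just the module and attribute names, not the
# function's code, so unpickling elsewhere fails unless the same
# module is importable there.
print(b"json" in payload and b"dumps" in payload)  # True

restored = pickle.loads(payload)
print(restored is json.dumps)  # True
```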
Currently there isn't a way to fully support torch hub. Issues like https://github.com/bentoml/BentoML/issues/2602 often come up due to differing torch hub import implementations. *Proposal* Provide a `bentoml.torchhub` module that creates interaction...
Relevant discussions in https://github.com/bentoml/BentoML/issues/666
- Health checking (https://github.com/bentoml/BentoML/issues/2630)
- Start/stop hooks
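As a sketch of what start/stop hooks could look like, here is a generic lifecycle-hook registry in plain Python; the class and method names are hypothetical illustrations, not a proposed BentoML API:

```python
class LifecycleHooks:
    """Minimal registry for startup/shutdown callbacks, run in order."""

    def __init__(self):
        self._start = []
        self._stop = []

    def on_start(self, fn):
        self._start.append(fn)
        return fn  # usable as a decorator

    def on_stop(self, fn):
        self._stop.append(fn)
        return fn

    def start(self):
        for fn in self._start:
            fn()

    def stop(self):
        # Shutdown hooks run in reverse registration order,
        # mirroring how nested context managers unwind.
        for fn in reversed(self._stop):
            fn()

hooks = LifecycleHooks()
events = []
hooks.on_start(lambda: events.append("connect"))
hooks.on_stop(lambda: events.append("disconnect"))
hooks.start()
hooks.stop()
print(events)  # ['connect', 'disconnect']
```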
[`darts`](https://unit8co.github.io/darts/) is a great library for time series prediction. It would be great if BentoML supported the darts library. Note: darts wraps a bunch of existing (time series) libraries such...
Adding support for models trained with PyTorch Ignite in BentoML:
* sample notebook showing how the integration could work
* verify that the current `bentoml.pytorch` module can adapt to...