
The easiest way to serve AI/ML models in production: build model inference services, LLM APIs, multi-model inference graphs/pipelines, LLM/RAG apps, and more!

Results: 258 BentoML issues, sorted by recently updated

### Describe the bug The docs claim that `--model-id` can be either a model ID or a path to a local model; however, BentoML will always check that the provided...

bug

### Describe the bug Calling a zero-argument runner method (using either `method.run()` or `method.async_run()`) results in an error like the following: ```python 2023-12-21T11:19:12-0500 [ERROR] [runner:custom_runner:9] Traceback (most recent call last):...

bug

### Describe the bug ```import torch from transformers import pipeline import bentoml pipe = pipeline( "automatic-speech-recognition", ) bentoml.transformers.save_model( "automatic-speech-recognition-whiser-large-v2", pipe, signatures={ "__call__": {"batchable": False} # Enable dynamic batching for model...

bug

### Describe the bug When making a bento API request with a large payload (e.g. a JSON key-value dict with ~200 elements), I get the following error: ``` aiohttp.client_exceptions.ClientResponseError: 400,...

bug

### Describe the bug gensim word2vec model ```python import os import mlflow from gensim.models.word2vec import Word2Vec class ScappyWrapper(mlflow.pyfunc.PythonModel): def load_context(self, context): file_path = os.path.join(context.artifacts["model_path"], "scappy_base.bin") self.model = Word2Vec.load(file_path) def predict(self,...

bug

### Feature request It would be great to have an easy way to mount config files such as .netrc before installing dependencies from a requirements.txt file ### Motivation I am not...

enhancement
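One possible direction for this request (a sketch only, not a confirmed solution): BentoML's `bentofile.yaml` supports a `docker.dockerfile_template` option, and the project's containerization guide describes mounting credentials via BuildKit secrets. Assuming that mechanism, a .netrc could be made available during dependency installation without baking it into the image. The secret id `netrc` and the template filename are illustrative choices, not names from the issue:

```yaml
# bentofile.yaml (sketch): use a custom Dockerfile template for the build
docker:
  dockerfile_template: ./Dockerfile.template
```

```
{% extends bento_base_template %}
{% block SETUP_BENTO_COMPONENTS %}
# Mount the host's .netrc as a BuildKit secret so pip can authenticate;
# the file is only visible during this RUN step and never stored in a layer.
RUN --mount=type=secret,id=netrc,target=/root/.netrc \
    pip install -r ./env/python/requirements.txt
{{ super() }}
{% endblock %}
```

The secret would then be passed at containerization time, e.g. `bentoml containerize my_bento:latest` with BuildKit enabled and `--secret id=netrc,src=$HOME/.netrc` forwarded to the Docker build; whether this fits the requester's workflow (installing deps before the bento build itself) is exactly what the feature request is about.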

https://github.com/bentoml/BentoML/blob/cc765bba83501f446297de31fdc819cd7dcc2901/pyproject.toml#L40C23-L40C23 To be precise, build times have increased since the `pynvml`...

### Describe the bug We installed BentoML 0.13.1, as we cannot migrate to 1.x at the moment; the installation throws the following error when we try to run...

bug

### Describe the bug `bentoml serve {my_service}.py:svc --port 3001` starts and stdout shows the log ```bash Prometheus metrics for HTTP BentoServer from "{my_service}.py:svc" can be accessed at http://localhost:3001/metrics 2023-04-25T10:49:47+0900 [INFO]...

bug

### Describe the bug I am not able to import a secret as documented in "Advanced Containerization" : https://docs.bentoml.org/en/latest/guides/containerization.html#mount-credentials-from-host ### To reproduce ``` ~/gg-workspace/ml-detector/bento_config (mromagne/MLE-91/load-test-ec2*) » cat ../bentofile.yaml service: "services.py:svc"...

bug