seldon-core
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
Hi @cliveseldon @RafalSkolasinski. Does Seldon allow connecting two Seldon deployments in GKE as a pipeline, instead of using a single Seldon inference graph that connects the two components within one deployment?
When using Ambassador, the deployed model endpoints can be customized with a URI path-rewrite option via the **_seldon.io/ambassador-config_** annotation. Example SeldonDeployment: apiVersion: machinelearning.seldon.io/v1alpha2 kind: SeldonDeployment metadata:...
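For context, a sketch of what such an annotation can look like. The mapping name, prefix, and service address below are illustrative placeholders, not values from the original issue; the annotation value is a full Ambassador `Mapping` config:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: example
  annotations:
    seldon.io/ambassador-config: |
      apiVersion: ambassador/v1
      kind: Mapping
      name: example_rest_mapping      # illustrative name
      prefix: /mycustompath/          # rewritten URI path
      service: example-default.default:8000
```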
Hi. In my inference graph, I have two different TensorFlow models that handle the same image, and their results are combined into the final output. Because using...
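A graph like the one described is typically expressed with a `COMBINER` node whose children are the two models. The images and names below are hypothetical placeholders, not from the original issue:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: two-model-combiner
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:
        containers:
        - name: model-a
          image: org/model-a:latest    # hypothetical image
        - name: model-b
          image: org/model-b:latest    # hypothetical image
        - name: combiner
          image: org/combiner:latest   # combines the two outputs
    graph:
      name: combiner
      type: COMBINER
      children:
      - name: model-a
        type: MODEL
      - name: model-b
        type: MODEL
```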
## Describe the bug
```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: gsrtestmodel
  namespace: seldon-system
spec:
  name: gsrtestmodel
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - name: gsrtestmodel
          image: gsr/gsrtestmodel:9.1
    graph:
      name: ...
```
## Describe the bug We have a problem using Seldon Operator version 1.13.1. The operator was installed to be used with Istio, which in our cluster is at...
I'd like to suggest that seldon-core expose metrics on the HTTP return codes of incoming requests to the executor, similar to the metrics exposed when using prometheus_flask_exporter in Flask projects (histogram...
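To illustrate the kind of metric being requested, here is a minimal sketch of a WSGI middleware that tallies responses by HTTP status code. This is a hypothetical illustration, not Seldon's executor implementation (the executor is written in Go); in practice the counts would be exported through a Prometheus client library rather than a plain `Counter`:

```python
from collections import Counter


class StatusCodeCounter:
    """Wrap a WSGI app and count responses per HTTP status code.

    Hypothetical sketch of a per-status-code request metric; the
    attribute name `http_requests_total` mirrors common Prometheus
    naming conventions and is not a Seldon metric name.
    """

    def __init__(self, app):
        self.app = app
        self.http_requests_total = Counter()  # e.g. {"200": 5, "500": 1}

    def __call__(self, environ, start_response):
        def counting_start_response(status, headers, exc_info=None):
            # `status` looks like "200 OK"; keep only the numeric code
            self.http_requests_total[status.split()[0]] += 1
            return start_response(status, headers, exc_info)

        return self.app(environ, counting_start_response)
```

Exposing such counters (ideally as a histogram labelled by status code and endpoint) would make error rates directly scrapeable from the executor.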
Is the Seldon Batch Processor compatible with V2 models (implemented via MLServer)? I've found no evidence in the documentation; moreover, the input/output format does not look V2-compliant.
Currently, the Helm values of Seldon Core allow setting memory/CPU limits and requests for the `executor`, `manager`, and `storageInitializer` containers. We should also add an option to set these for model containers.
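For reference, resources can already be set per-deployment on model containers through `componentSpecs` in the SeldonDeployment itself; the feature request above is about a cluster-wide default via Helm values. A sketch of the per-deployment form (name and image are illustrative):

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: example
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:
        containers:
        - name: classifier            # illustrative model container
          image: org/classifier:latest
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```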