Saeid Ghafouri

Results: 43 issues by Saeid Ghafouri

Follow-up to a [Slack channel question](https://seldondev.slack.com/archives/C03DQFTFXMX/p1659621260141289) with @adriangonz. As part of a project, I have a set of interconnected nodes (each node is a Triton server) in which each...

v2

## Describe the bug As described in the [Custom pre-processors with the V2 protocol](https://docs.seldon.io/projects/seldon-core/en/latest/examples/transformers-v2-protocol.html) notebook, the model is adapted from the [Pretrained GPT2 Model Deployment Example](https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example.html) notebook. However, I tried to...

bug

## Describe the bug In prepackaged multi-model Triton servers, only one of the deployed models' V2-protocol metadata endpoints is accessible under `v2/models/${MODEL_NAME}`. This is because...

bug
v2
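The expected behaviour the issue describes can be sketched as follows: in a multi-model server, every deployed model should expose its own metadata URL under `v2/models/${MODEL_NAME}`, not just one of them. The base address and model names below are illustrative assumptions, not taken from the issue.

```python
# Sketch of the V2 model-metadata endpoints for a multi-model server.
# Base URL and model names are illustrative assumptions.

def v2_metadata_url(base_url: str, model_name: str) -> str:
    """Build the V2 model-metadata URL for one model."""
    return f"{base_url.rstrip('/')}/v2/models/{model_name}"

# For a server hosting two models, both metadata URLs should be
# reachable, not only one of them (the behaviour reported above).
urls = [v2_metadata_url("http://localhost:8000", m)
        for m in ("model-a", "model-b")]
print(urls)
```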

## Describe the bug The [Triton version policy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#version-policy) is not accessible through Seldon Core in Seldon prepackaged servers. The [V2 protocol](https://kserve.github.io/website/modelserving/inference_api/) exposes this via `POST v2/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]/infer`; however, it seems...

bug
v2
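The optional `/versions/${MODEL_VERSION}` path segment from the V2 protocol can be sketched as a small URL builder; the server address and model name are illustrative assumptions.

```python
# Sketch of the versioned V2 inference URL:
#   POST v2/models/<model>[/versions/<version>]/infer
# Base address and model name are illustrative assumptions.
from typing import Optional

def v2_infer_url(base: str, model: str, version: Optional[str] = None) -> str:
    """Build the V2 infer URL, with an optional pinned model version."""
    path = f"v2/models/{model}"
    if version is not None:
        path += f"/versions/{version}"
    return f"{base.rstrip('/')}/{path}/infer"

print(v2_infer_url("http://localhost:8000", "gpt2"))       # latest per version policy
print(v2_infer_url("http://localhost:8000", "gpt2", "2"))  # pinned to version 2
```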

Implement Triton's model management endpoints. This is partially supported by bypassing the engine, as in https://github.com/SeldonIO/seldon-core/pull/4216#issuecomment-1192918478; native support in the Seldon engine would be helpful.

v2
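The model-management endpoints referred to above are Triton's repository-API extension to the V2 protocol (`v2/repository/models/${MODEL_NAME}/load` and `.../unload`). A minimal sketch of the URLs involved, with an assumed server address:

```python
# Sketch of Triton's repository (model-management) endpoints, which the
# linked PR exposes by bypassing the Seldon engine. The base address
# and model name are illustrative assumptions.

def repository_url(base: str, model: str, action: str) -> str:
    """Build a v2/repository/models/<model>/<load|unload> URL."""
    if action not in ("load", "unload"):
        raise ValueError(f"unsupported action: {action}")
    return f"{base.rstrip('/')}/v2/repository/models/{model}/{action}"

load_url = repository_url("http://localhost:8000", "gpt2", "load")
unload_url = repository_url("http://localhost:8000", "gpt2", "unload")
print(load_url, unload_url)
```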

## Describe the bug I want to point out a follow-up to https://github.com/SeldonIO/seldon-core/issues/4092 which could be a potential bug. It seems that the intermediate models' metadata endpoints are not accessible...

bug
Stale

Hello Seldon team, As part of a special use case, I need to access my custom server through an endpoint. In other words, I want to add some custom logic to...

## Describe the bug I have to build a simple three-node inference graph with some dummy logic in each node, just for testing purposes, using the Python SDK and the Docker wrapper. This...

bug

**What would you like to be added**: Does kubedl also support optimising inference pipelines (situations where we have a set of consecutive models in a sequential pipeline)? It would be...

enhancement

I am having the same problem you posted here: https://github.com/openai/gym/issues/497 The difference, however, is that I want to use the stable-baselines library, but it seems it doesn't accept user-defined space types...