lishuang

Search results: 7 issues involving lishuang

```shell
File "/home/lishuang/Disk/gitlab/traincode/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py", line 240, in forward
    raise NotImplementedError()
NotImplementedError
```
At the failing fusion step: `h = 1`, `target_h = 1`, `w = 2`, `target_w = 1`. How should I deal with this?
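The error fires because one spatial dimension matches its target (`h == target_h == 1`) while the other does not (`w = 2` vs `target_w = 1`), so the fusion node can neither reuse the feature map nor apply a clean 2x downsample. This usually means the input resolution is not divisible by the network's full stride, so deep pyramid levels stop being exact halvings of each other. A minimal sketch of one common workaround, padding the input to the next multiple of the full stride (an assumption about the fix, not CenterNet2's actual code; `padded_size` is a hypothetical helper):

```python
import math

def padded_size(size, num_levels):
    """Smallest size >= `size` that is divisible by 2**num_levels, so that
    every stride-2 halving is exact and adjacent pyramid levels differ by
    exactly a factor of two."""
    stride = 2 ** num_levels
    return math.ceil(size / stride) * stride

# A width of 100 with 7 stride-2 levels eventually produces a level where
# w=2 but the fusion target expects w=1; padding the input avoids this.
print(padded_size(100, 7))  # -> 128
```

Padding (or resizing) the input image to `padded_size(h, L) x padded_size(w, L)` before the backbone keeps every BiFPN fusion step well-defined.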

```
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result...
```
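The warning is purely about call order: since PyTorch 1.1.0, `optimizer.step()` must run before `lr_scheduler.step()` within each iteration (or epoch). A pure-Python sketch of the corrected loop shape, using toy stand-ins for the torch objects (the stand-in classes are assumptions here, introduced only to record call order):

```python
calls = []

class ToyOptimizer:
    """Stand-in for a torch.optim optimizer -- only records call order."""
    def step(self):
        calls.append("optimizer.step")

class ToyScheduler:
    """Stand-in for a torch.optim.lr_scheduler scheduler -- records call order."""
    def step(self):
        calls.append("scheduler.step")

optimizer = ToyOptimizer()
scheduler = ToyScheduler()

for epoch in range(1):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()   # update parameters first
    scheduler.step()   # then advance the learning-rate schedule

print(calls)  # -> ['optimizer.step', 'scheduler.step']
```

Swapping the two `step()` calls back would reproduce the warning in real PyTorch code.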

**Description**
I was unable to build the onnxruntime_backend with OpenVINO for Triton Inference Server r22.03 using compatible ONNX Runtime and TensorRT versions (from the Triton Inference Server compatibility matrix).

**Triton Information**
r22.03...

TensorRT-LLM Backend: I built the image via Docker, but it is much larger than the image on NGC. How can I decrease its size? ![capture](https://github.com/triton-inference-server/tensorrtllm_backend/assets/26588466/50dbe936-7551-4831-ab07-07faa541d66b) This is the...

triaged

### System Info
Image: registry.cn-hangzhou.aliyuncs.com/xprobe_xinference/xinference:v0.14.0.post1

### Running Xinference with Docker?
- [X] docker
- [ ] pip install...

Starting the supervisor in a container:
```shell
docker run -v ./xinfer_supervisor:/tmp/xinference --name xinfer_supervisor \
  -e XINFERENCE_HOME=/tmp/xinference -e XINFERENCE_MODEL_SRC=modelscope \
  -p 9997:9997 -p 9996:9996 xprobe/xinference:v0.13.2 \
  xinference-supervisor -H 0.0.0.0 -p 9997 --supervisor-port 9996 --log-level debug
```
...

gpu
stale

### System Info
- x86_64, 755G RAM, NVIDIA T4, Ubuntu 22.04
- TensorRT-LLM version: https://github.com/NVIDIA/TensorRT-LLM/archive/9691e12bce7ae1c126c435a049eb516eb119486c.zip
- pip install tensorrt-llm==0.11.0.dev2024062500 --extra-index-url https://pypi.nvidia.com

### Who can help?
@Tracin

### Information
- [X] The official example...

bug
Investigating