BUG: Could not start v0.11.0 from Docker Compose
Me too; the latest version of the image won't start for me either.
Me too. +1
@Minamiyama @yanmao2023 @XiaoCC Could you please help me test this? Build a new image based on our official image:
FROM xprobe/xinference:v0.11.0
RUN pip install torchvision==0.17.1
and then test it. On my own machine with two GPUs, I can use xinference normally with the above method.
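For clarity, the two lines above are the entire Dockerfile; a minimal sketch (the tag name xinference-patched is a placeholder, not an official tag):

```dockerfile
# Complete Dockerfile for the suggested test image.
FROM xprobe/xinference:v0.11.0
# Pin torchvision to the version suspected to fix the startup failure.
RUN pip install torchvision==0.17.1
```

Build it from the directory containing this Dockerfile with something like: docker build -t xinference-patched:v0.11.0 .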
It doesn't seem to work for me.
Please paste your error stack and the related commands.
What's the error here? You can also add --log-level debug in your entrypoint command. Could you just test:
- build the new image
- run it:
docker run -p 9997:9997 --gpus all <the new image> xinference-local --log-level debug -H 0.0.0.0
- and then use it
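Since the original report is about Docker Compose, the same docker run command above can also be expressed as a compose service; a hedged sketch (the image tag xinference-patched is a placeholder for your rebuilt image):

```yaml
# Hypothetical docker-compose.yml mirroring the docker run command above.
services:
  xinference:
    image: xinference-patched:v0.11.0   # placeholder: your rebuilt image tag
    command: xinference-local --log-level debug -H 0.0.0.0
    ports:
      - "9997:9997"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with docker compose up and check whether the container stays alive.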
[screenshots of the error output]
Is this message useful?
What's this? It seems unrelated to xinference; it may be an issue with your CUDA environment. The docker image uses the pytorch image as its base image. You can check whether you can use this image directly:
pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel
Caused by adding --log-level; maybe I'm using it incorrectly.
Nothing new is shown, and it auto-shuts down as well.
Just run:
docker run --gpus all pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel tail -f /dev/null
Does it still auto-shut down?
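If that container also exits, a quick way to separate host GPU setup from xinference is to run nvidia-smi inside the same base image; a sketch, assuming the NVIDIA container toolkit is installed on the host:

```shell
# Sketch: verify the host's GPU/driver setup independently of xinference.
# nvidia-smi ships in the CUDA devel base image; --rm cleans up the container.
docker run --rm --gpus all pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel nvidia-smi
```

If this prints the GPU table, the CUDA environment is fine and the problem is inside the xinference image; if it fails, the problem is the host's Docker/NVIDIA setup.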
The host machine is running Windows; it may not be able to use 0.0.0.0. I haven't tried Windows. Remove -H 0.0.0.0 and try again.
running normally
Cannot reproduce.
docker pull xprobe/xinference:nightly-bug_torchvision_version
This image is built by #1485, and I can use it normally on my Ubuntu machine.
It failed as well.
@Minamiyama Try this image:
docker pull xprobe/xinference:nightly-docker_crash_due_to_llama
0.11.1 is OK.