
Running error

Open JingJieMa opened this issue 1 year ago • 8 comments

python3 scripts/amg.py --checkpoint ./sam_vit_l_0b3195.pth --input ./input_image/dog.jpg --output ./output_image

When I ran this command, the following error occurred (screenshot):

JingJieMa avatar Apr 09 '23 10:04 JingJieMa

Does the problem still exist when you download and try a different model? sam_vit_h_4b8939.pth for example?

FullStackSimon avatar Apr 09 '23 13:04 FullStackSimon

It's still the same error (screenshot).

JingJieMa avatar Apr 09 '23 14:04 JingJieMa

Is there a problem with my runtime environment?

JingJieMa avatar Apr 09 '23 14:04 JingJieMa

I'm not sure I'm afraid.

The only thing I can see from your command is that you are specifying a filename instead of a folder for your output.

As per the amg.py script: "Path to the directory where masks will be output. Output will be either a folder of PNGs per image or a single json with COCO-style masks."

However I doubt that is the cause of the issue you are experiencing.

If you think it's your runtime, perhaps try Docker?

This is what works for me...

Dockerfile

# Use the official Python base image
FROM python:3.9

# Set the working directory
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt .

RUN apt-get update && \
    apt-get install -y --no-install-recommends libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*

# Install the required Python packages
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the repository code into the container
COPY . .

# Install the 'segment-anything' package
RUN pip install -e .

# Set the entrypoint to a shell to allow user interaction
ENTRYPOINT ["/bin/bash"]

requirements.txt

torch
torchvision
timm
einops
matplotlib
opencv-python
pycocotools
svgwrite
numpy
svgpathtools

FullStackSimon avatar Apr 09 '23 14:04 FullStackSimon

P.S. I just use the default model type with that model. I notice you specified a model type in your last screenshot. Remove that and see what happens?

This is what works for me

python scripts/amg.py --checkpoint data/sam_vit_h_4b8939.pth --input data/a-room-at-the-beach.jpeg --output data/output

FullStackSimon avatar Apr 09 '23 14:04 FullStackSimon

Please ensure that you provide the right model-type. In the first screenshot, you were using vit_l without specifying the model-type (the default is vit_h). In the second screenshot, you were using the checkpoint of vit_h but specifying the model-type as vit_b.
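Since the checkpoint filenames encode the model type, a quick sanity check along these lines (a hypothetical helper, not part of the repo) can catch the mismatch before launching amg.py:

```python
# Hypothetical helper: infer the SAM model type from a checkpoint filename
# so it can be passed consistently as --model-type to scripts/amg.py.
def infer_model_type(checkpoint_path):
    for model_type in ("vit_h", "vit_l", "vit_b"):
        if model_type in checkpoint_path:
            return model_type
    raise ValueError(f"Cannot infer model type from {checkpoint_path!r}")

# The two checkpoints from this thread:
assert infer_model_type("sam_vit_h_4b8939.pth") == "vit_h"
assert infer_model_type("./sam_vit_l_0b3195.pth") == "vit_l"
```

Passing the inferred value as `--model-type` avoids the vit_h/vit_b mix-up from the second screenshot.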

HannaMao avatar Apr 09 '23 19:04 HannaMao

Thank you. I tried as you suggested, and it prompted me that there is an issue with the NVIDIA driver. I will try to install the driver. (screenshot)
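As a stopgap while the driver is broken, it may be possible to run on CPU instead: if your copy of amg.py exposes a `--device` flag (check `python scripts/amg.py --help`), a sketch like this, using PyTorch's standard `torch.cuda.is_available()` check, picks a working device automatically:

```python
# Sketch: choose a value for an assumed --device flag, falling back to CPU
# when PyTorch or a working CUDA driver is unavailable.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # torch not installed at all
    device = "cpu"

print(device)  # pass this value as --device to scripts/amg.py
```

Note that SAM inference on CPU is much slower than on GPU, so this is only a way to confirm the rest of the setup works while the driver is fixed.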

JingJieMa avatar Apr 10 '23 01:04 JingJieMa

It's still the same error. Maybe I really should try using Docker. (screenshot)

JingJieMa avatar Apr 10 '23 01:04 JingJieMa