Mingqiao Ye

Results 22 comments of Mingqiao Ye

Hi, it seems that the segment-anything you load is not the one from sam-hq. You can run the following command and try again: `export PYTHONPATH=$(pwd)`
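To confirm which copy of `segment_anything` Python would actually import, a small sketch (not part of the repo, just a debugging aid):

```python
import importlib.util

def module_origin(name):
    """Return the file path a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# After `export PYTHONPATH=$(pwd)` inside the sam-hq checkout, this should
# point into the repo rather than into site-packages:
print(module_origin("segment_anything"))
```

If the printed path still points into `site-packages`, the pip-installed package is shadowing the sam-hq copy and should be uninstalled.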

> I'm getting the same and I made sure to uninstall the segment-anything:
>
> ```
> /home/user/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object...
> ```

Hi, the COCO API calculates precision and recall for a given maximum number of detections per image. The most commonly used maximum for Average Precision (AP) is 100. We can also save...
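The cap simply means only the top-scoring detections per image are evaluated; a minimal pure-Python sketch of that truncation (the names are illustrative, not the COCO API's):

```python
def cap_detections(detections, max_dets=100):
    """Keep only the top-`max_dets` detections for one image, sorted by
    score, mirroring what COCOeval's params.maxDets (default [1, 10, 100])
    does before matching detections to ground truth."""
    return sorted(detections, key=lambda d: d["score"], reverse=True)[:max_dets]

# 150 dummy detections for one image; only the 100 best survive
dets = [{"score": float(s)} for s in range(150)]
print(len(cap_detections(dets)))  # 100
```

In `pycocotools`, the same knob is `COCOeval.params.maxDets`, so raising it is one way to evaluate more than 100 detections per image.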

Hi, we select the mask with the highest score for multi-mask output. The corresponding code is in [this line](https://github.com/SysCV/sam-hq/blob/main/segment_anything/modeling/mask_decoder_hq.py#L140). The same selection is used in COCO evaluation and other quantitative evaluations.
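The selection itself is just an argmax over the predicted scores; a NumPy sketch with illustrative shapes (not the sam-hq implementation):

```python
import numpy as np

def select_best_mask(masks, iou_preds):
    """Pick the single mask with the highest predicted IoU score.

    masks: (N, H, W) array of N candidate masks
    iou_preds: (N,) predicted quality score per mask
    """
    best = int(np.argmax(iou_preds))
    return masks[best], float(iou_preds[best])

masks = np.zeros((3, 4, 4))
masks[1] = 1.0  # pretend the second candidate is the good one
best_mask, score = select_best_mask(masks, np.array([0.2, 0.9, 0.5]))
print(score)  # 0.9
```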

Hi, we use Cascade R-CNN with [this config](https://github.com/facebookresearch/detectron2/blob/main/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py). For evaluation, we simply use all predicted boxes as prompts, without combining scores or feeding the output mask back as an additional prompt.

Hi, we provide [COCO evaluation code here](https://drive.google.com/file/d/12ew9MoQuM72V9gEopBnjT8GdzhT68wjg/view?usp=sharing). You can put it in the folder `sam-hq/eval_coco` and run it on a single GPU or multiple GPUs. We modified the evaluation code from [Prompt-Segment-Anything](https://github.com/RockeyCoss/Prompt-Segment-Anything). You...

> > Hi, we provide [COCO evaluation code here](https://drive.google.com/file/d/12ew9MoQuM72V9gEopBnjT8GdzhT68wjg/view?usp=sharing). You can put it in the folder `sam-hq/eval_coco` and test on single or multi GPU.
>
> We modify the evaluation...

Hi, `self.embedding_maskfeature` adds some trainable parameters to the learning of the mask feature. Since the mask decoder is frozen, we find that adding learnable parameters on the mask feature can improve...
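A hypothetical PyTorch sketch of the idea — a small trainable residual branch on top of the frozen decoder's mask features (module structure and channel dim are illustrative, not sam-hq's exact code):

```python
import torch
import torch.nn as nn

class MaskFeatureAdapter(nn.Module):
    """Trainable residual refinement of mask features while the
    pretrained mask decoder itself stays frozen (illustrative sketch)."""

    def __init__(self, dim=32):
        super().__init__()
        self.embedding_maskfeature = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, mask_feat):
        # residual add: the frozen decoder's features pass through unchanged,
        # and only the small adapter branch receives gradients during training
        return mask_feat + self.embedding_maskfeature(mask_feat)

adapter = MaskFeatureAdapter(dim=32)
out = adapter(torch.randn(1, 32, 16, 16))
print(out.shape)  # torch.Size([1, 32, 16, 16])
```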

This looks like a torch version problem: different versions disagree on whether the launcher passes the argument as `--local_rank` or `--local-rank`. You could try one of these methods:

1. Use a lower version of PyTorch...
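Alternatively, your launch script can accept both spellings; a hedged sketch using argparse option aliases:

```python
import argparse

parser = argparse.ArgumentParser()
# Accept both --local_rank (older torch.distributed.launch) and
# --local-rank (newer launchers); argparse stores either spelling
# under the same dest, args.local_rank.
parser.add_argument("--local_rank", "--local-rank", type=int, default=0)

args = parser.parse_args(["--local-rank", "3"])
print(args.local_rank)  # 3
```

This way the same script works regardless of which spelling the distributed launcher in your torch version emits.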

Hi, we use an earlier version of UVO, v0.5. It has class-agnostic labels and image/frame sets.