Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
Hi all, thank you for your amazing work and repo! I'm trying to run inference with the open-vocabulary model on a random image. I followed the installation instructions and completed them...
I must say bravo and thank you for doing exactly what I would like to start doing now.
Thanks for sharing this great work! Can we define our own labels or apply transfer learning to this project? I did not figure out how to run this project if...
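A minimal sketch of how a custom label list could be used for per-mask classification, assuming a Hugging Face transformers CLIP zero-shot classifier stands in for the repo's built-in open vocabulary; the label list, model name, and image path are illustrative assumptions, not part of the repo:

```python
# Sketch: zero-shot classification of one mask crop against a custom label list.
# Assumes transformers CLIP; SSA's own pipeline may wire this step differently.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

custom_labels = ["road", "sidewalk", "car", "person", "vegetation"]  # your own vocabulary

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("mask_crop.png")  # hypothetical crop of a single SAM mask
inputs = processor(text=custom_labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
probs = logits.softmax(dim=-1)[0]

print(custom_labels[probs.argmax().item()], probs.max().item())
```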
Can we get a mask showing only the human body, without all the other elements in the image/video?
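A minimal sketch of filtering the per-mask annotations down to the person class. The file path and the "segmentation"/"class_name" field names are assumptions about the output JSON, not guaranteed by the repo:

```python
# Sketch: keep only masks labelled as a human body.
# "class_name" and RLE-encoded "segmentation" are assumed field names; adjust to the actual keys.
import json
import numpy as np
from pycocotools import mask as mask_utils

with open("output/example_semantic.json") as f:  # hypothetical SSA output file
    annotations = json.load(f)

person_mask = None
for ann in annotations:
    if ann.get("class_name") == "person":
        m = mask_utils.decode(ann["segmentation"])  # COCO RLE -> binary mask, if RLE is used
        person_mask = m if person_mask is None else np.logical_or(person_mask, m)

if person_mask is not None:
    np.save("person_only_mask.npy", person_mask.astype(np.uint8))
```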
Great job!!! How about creating an environment without Conda, for example using a Python virtualenv? What libraries are needed?
Hi, I was thinking if it is possible to have a single image as input, apply the Segment Anything Model from Meta, and then use this tool to get the...
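A minimal sketch of the single-image workflow described above: run Meta's Segment Anything on one image to get class-agnostic masks, then hand those masks to the semantic labeling step. The SAM calls follow the public segment_anything API; how the masks are passed into this repo's pipeline is left as a comment, since the exact entry point and arguments are not shown here:

```python
# Sketch: class-agnostic masks for a single image with Meta's SAM.
# Checkpoint path, model type, and image path are placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("my_image.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with "segmentation", "bbox", "area", ...

# The resulting masks would then be saved in the folder layout this repo expects
# (see the --data argument in the command further down) so the semantic annotation
# pipeline can attach a category to each mask; the exact format is defined by the repo.
print(f"{len(masks)} masks generated")
```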
Hi, really appreciate this work. Do you plan on releasing model weights/checkpoints for SSA? Would really appreciate it. Thanks
python scripts/main.py --data=image_20230408/ --out_dir output --world_size=1 --save_img
Traceback (most recent call last):
  File "scripts/main.py", line 4, in <module>
    from pipeline import semantic_annotation_pipeline
  File "/data2/queenie_2023/Semantic-Segment-Anything/scripts/pipeline.py", line 13, in <module>
    from blip import open_vocabulary_classification_blip...
Looks like, with SSA, it becomes possible to compare the performance of SAM against the state of the art on popular benchmark datasets. Would you report the validation results of SSA for semantic segmentation...
Hi, could you please explain the difference between this work and Grounded SAM (https://github.com/IDEA-Research/Grounded-Segment-Anything/tree/main)?