FastSAM
Fast Segment Anything
Although inference itself is fast, one function in the prompt stage drags everything down, so we can only process about 3 images per second: plot_to_result costs too much time (around 300 ms per image). Is there any way to optimize it?
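One direction (a sketch, not FastSAM's own code): compose the overlay directly with NumPy instead of rendering a matplotlib figure and reading its canvas back, which is where most of the ~300 ms typically goes. The `image`/`masks` names and shapes below are assumptions about what the prompt stage has available, and `fast_overlay` is a hypothetical helper.

```python
import numpy as np

def fast_overlay(image: np.ndarray, masks: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend one random color per mask into an HxWx3 BGR image, no matplotlib round-trip.

    image: HxWx3 uint8 array; masks: NxHxW binary/boolean array (assumed shapes).
    """
    rng = np.random.default_rng(0)
    overlay = image.copy()
    for mask in masks.astype(bool):
        color = rng.integers(0, 256, size=3)                  # one color per mask
        blended = overlay[mask] * (1.0 - alpha) + color * alpha
        overlay[mask] = blended.astype(np.uint8)
    return overlay
```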
Hi, I've trained the model on a custom dataset and was wondering which script I need to use to run inference?
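For reference, a minimal inference sketch following the usage pattern shown in the FastSAM README; the weight and image paths, device, and thresholds are placeholders, and your custom checkpoint should drop in where `FastSAM.pt` appears.

```python
from fastsam import FastSAM, FastSAMPrompt

model = FastSAM('./weights/FastSAM.pt')            # point this at your custom-trained .pt
results = model('./images/example.jpg', device='cuda',
                retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

prompt_process = FastSAMPrompt('./images/example.jpg', results, device='cuda')
ann = prompt_process.everything_prompt()            # or box/point/text prompts
prompt_process.plot(annotations=ann, output_path='./output/example.jpg')
```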
Hi, the license for FastSAM was previously Apache 2.0 and now it shows AGPL 3.0. Can we still use this in a commercial product? Thanks
File prompt.py:
```python
try:
    buf = fig.canvas.tostring_rgb()
except AttributeError:
    # The canvas may not have been drawn yet; draw it first, then read the buffer.
    fig.canvas.draw()
    buf = fig.canvas.tostring_rgb()
cols, rows = fig.canvas.get_width_height()
img_array = np.frombuffer(buf, dtype=np.uint8).reshape(rows, cols, 3)
result = cv2.cvtColor(img_array, cv2.COLOR_RGB2BGR)
plt.close()
return result
```
...
Thanks for your awesome project. Does it have a web demo like SAM: https://github.com/facebookresearch/segment-anything/tree/main/demo
Hello, thank you for providing such excellent results. I want to display only the contour of the segmentation result without the color mask. How should I modify the code?
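One way to do this (a sketch, not a patch to FastSAM's plotting code): extract each mask's outline with OpenCV and draw it on the original image, skipping the filled color overlay. `masks` is assumed to be an NxHxW array of binary masks taken from the segmentation result, and `draw_contours_only` is a hypothetical helper.

```python
import numpy as np
import cv2

def draw_contours_only(image: np.ndarray, masks: np.ndarray,
                       color=(0, 255, 0), thickness=2) -> np.ndarray:
    """Draw each mask's outline on a copy of the BGR image, with no filled mask."""
    out = image.copy()
    for mask in masks:
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(out, contours, -1, color, thickness)
    return out
```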
First of all, let me thank and congratulate the authors for this work. I feel it is a smart way of combining SAM with existing semantic segmentation work...
In the postprocess method of FastSAMPredictor, when critical_iou_index has more than one element, it raises the following error:
```
Error: expand(torch.FloatTensor{[2]}, size=[]): the number of sizes provided (0) must be greater or...
```
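For context, a self-contained reproduction of that broadcasting failure plus one possible guard; the tensor shapes and the guard are assumptions inferred from the error message, not the upstream postprocess code.

```python
import torch

# Fake predictions: 5 boxes with 6 values each (x1, y1, x2, y2, conf, cls).
p = torch.rand(5, 6)
full_box = torch.zeros(1, 6)
critical_iou_index = torch.tensor([0, 3])        # two boxes matched instead of one

try:
    # A length-2 tensor cannot be broadcast into the 0-dim slot full_box[0][4].
    full_box[0][4] = p[critical_iou_index][:, 4]
except RuntimeError as err:
    print(err)   # expand(torch.FloatTensor{[2]}, size=[]): ...

# One possible guard (an assumption, not the upstream fix): keep only the
# highest-confidence match so the assignment stays scalar.
if critical_iou_index.numel() > 1:
    best = int(p[critical_iou_index][:, 4].argmax())
    critical_iou_index = critical_iou_index[best:best + 1]
full_box[0][4] = p[critical_iou_index][:, 4]
```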
Import clip in "prompt.py" and "Inference.py" to avoid the error "name 'clip' is not defined".
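A minimal sketch of the suggested import with a clearer failure message; the install command comes from OpenAI's CLIP repository and is an assumption about how the package is obtained in this environment.

```python
# Suggested import for prompt.py / Inference.py so the text-prompt path can
# resolve the `clip` name.
try:
    import clip
except ImportError as exc:
    raise ImportError(
        "The 'clip' package is required for text prompts: "
        "pip install git+https://github.com/openai/CLIP.git"
    ) from exc
```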