Image2Paragraph
Out of Memory Issue in Semantic Segmentation
Why do I constantly run into out-of-memory errors when working on semantic segmentation, even though I have two GPUs with 15 GB each? Is it possible to distribute the model workload across the GPUs in parallel?
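For context, I was picturing something like the generic PyTorch pattern below, where each heavy sub-model is pinned to its own GPU and intermediate tensors are moved between devices. The modules here are hypothetical placeholders, not Image2Paragraph's actual classes:

    import torch
    import torch.nn as nn

    # Placeholder modules standing in for the pipeline's heavy sub-models
    # (e.g. a captioner and a segmenter); these are NOT the repo's real
    # classes, just stand-ins to show per-device placement.
    captioner = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
    segmenter = nn.Sequential(nn.Linear(512, 512), nn.ReLU())

    # Pin each sub-model to a different GPU so their weights do not
    # compete for the same 15 GB card.
    captioner.to("cuda:0")
    segmenter.to("cuda:1")

    x = torch.randn(1, 512)
    with torch.no_grad():
        # Run each stage on its own GPU, moving the intermediate tensor
        # to the device of the next stage.
        feat = captioner(x.to("cuda:0"))
        out = segmenter(feat.to("cuda:1"))
    print(out.device)  # cuda:1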
SAM itself is not heavy, but Semantic Segment Anything requires four large models, which is very memory-consuming. For now, simply set --semantic_segment_device to 'CPU' to run. We are working on making this model lightweight.
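For example, a run could look roughly like the command below. Only the --semantic_segment_device flag comes from the reply above; the main.py entry point and the --image_src argument are shown as assumptions, so please check the README for the exact command:

    python main.py --image_src ./examples/3.jpg --semantic_segment_device 'CPU'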
Hi, we have implemented a light version.
It can run on an 8 GB GPU in less than 20 seconds.