Unsatisfactory model generated for objects that are mainly white
It seems that predominantly white objects, such as toilet paper, are not handled effectively, possibly because the model was trained primarily on images with white backgrounds. In contrast, the model performs well on objects that are not mainly white. Could you suggest any potential solutions to this issue?
Yes, this is indeed a problem caused by the white background. The original Zero123++ generates a gray background, which, in my opinion, would run into a similar problem if the image foreground contains gray areas. For now we cannot do much about this issue; you could try using image processing tools or image-to-image style-transfer generative models to darken the highlight areas in the input image. We will try to fix this issue in the next version.
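
For the image-processing workaround mentioned above, here is a minimal sketch of one possible preprocessing step: darkening near-white highlights in the foreground before feeding the image to the model. It assumes an RGBA input with a transparent background; the function name, threshold, and gamma value are illustrative and would need tuning per image, and this is not part of the official pipeline.

```python
# Hypothetical preprocessing sketch: darken near-white foreground pixels
# with a gamma curve so they no longer blend into a white background.
import numpy as np
from PIL import Image

def darken_highlights(path_in: str, path_out: str,
                      threshold: int = 220, gamma: float = 1.8) -> None:
    """Apply a gamma curve to foreground pixels brighter than `threshold`."""
    img = Image.open(path_in).convert("RGBA")
    rgba = np.asarray(img).astype(np.float32)
    rgb, alpha = rgba[..., :3], rgba[..., 3:]

    # Only touch foreground pixels (alpha > 0) whose mean brightness exceeds the threshold.
    brightness = rgb.mean(axis=-1, keepdims=True)
    mask = (brightness > threshold) & (alpha > 0)

    # gamma > 1 pushes bright values down while leaving dark values mostly unchanged.
    darkened = 255.0 * (rgb / 255.0) ** gamma
    rgb = np.where(mask, darkened, rgb)

    out = np.concatenate([rgb, alpha], axis=-1).clip(0, 255).astype(np.uint8)
    Image.fromarray(out, mode="RGBA").save(path_out)

# Example usage (file names are placeholders):
# darken_highlights("input_rgba.png", "input_darkened.png")
```

This keeps the object geometry intact and only reduces the brightness of highlight regions, which may help the model separate a white object from a white or light-gray background.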