Unique3D all-stages workflow gives bad results
I used the picture provided in examples_input, but the result looks like this: far from the demo, and the body is always incomplete. Can anyone tell me why, please?
I think the problem occurs in the last step, because the results of the previous stages look very good. The same bug also occurs in the CharacterGen_To_Unique3D workflow. What can I do to fix it?
I have the same problem, both in the Unique3D workflow and in the CharacterGen_to_Unique3D workflow. I have also discovered something strange: I think the colors of the input image influence the result.
Let me explain.
This image produces a chaotic result:
But if I modify the green color component in GIMP and use the image again to generate the 3D character, the result is excellent:
This workaround does not always work for me, and modifying the colors of every design is not practical either.
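In case it helps anyone reproduce the test without GIMP, here is a minimal sketch of the green-channel tweak using Pillow; the factor of 1.3 and the file names are only example values, not something the workflow itself does.

```python
from PIL import Image

def boost_green(path_in: str, path_out: str, factor: float = 1.3) -> None:
    """Scale the green channel of an RGBA image and save the result."""
    img = Image.open(path_in).convert("RGBA")
    r, g, b, a = img.split()
    # Multiply green values by the factor, clamped to the 0-255 range
    g = g.point(lambda v: min(255, int(v * factor)))
    Image.merge("RGBA", (r, g, b, a)).save(path_out)

# Example: produce a green-boosted copy of the input character image
boost_green("character.png", "character_green.png")
```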
I hope that this contribution sheds some light on the solution to this problem.
Best regards and congratulations on this phenomenal work.
Thank you for your advice. For the Asuka example image, I got good results after switching from my previous 3070 to a 4090. However, when I used an image downloaded from the Internet, the result was still poor. I tried increasing the green weight as you suggested, but it gave no noticeable improvement for me.
Another example, with the test image included with the program.
With the original, the result is wrong:
But if I adjust the green color in that file, the result comes out much better. The generated 3D figure is only missing a foot and a hand. It is possible that if I adjust the green color a little more, the 3D figure will come out perfect.
I do not mean that the fix is simply modifying the colors of the images. I think something is missing in the programming of the nodes, and what I am describing may be a clue.
To show that it is the same error as the one posted by user AstroWYH, I ran the test in the Unique3D_All_Stages workflow with the two images side by side: the original and its copy with the modified (green) colors.
Looks good. I'll go back and try again. Thank you for reminding me.
In addition, I found that the clearer the image, the better the result. It is still not as good as the Asuka example, but after I increased the sharpness of the image, the model at least looks like a person.
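For reference, the kind of sharpening I mean can be scripted with Pillow's UnsharpMask filter; this is only a sketch, and the radius/percent values and file names are example settings rather than the exact ones I used.

```python
from PIL import Image, ImageFilter

# Load the input image and apply an unsharp mask to boost edge clarity
img = Image.open("input_character.png")
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("input_character_sharp.png")
```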
Interesting, I don't recall having this issue when I implemented it; let me give it a try. In the meantime, if you are trying to generate a character, using the CharacterGen workflow is way better.
I just tried again, and it seems @mutantesde's experiments are accurate. Also, from what I saw in Unique3D's GitHub issues, the model generally performs better on "thick" characters; if some part of the object in the input image is too thin (e.g. Asuka's legs), then the last optimization stage may perform badly (well, optimization-based mesh reconstruction from 4 predicted normal images can only get you so far).
I have found a solution for those of us who have problems with the mesh generated in the Unique3D_All_Stages workflow.
It is as simple as feeding in the images at 2048 x 2048. When the design is smaller than those dimensions, I create a transparent 2048 x 2048 pixel canvas in GIMP, paste my image inside, and then scale it.
When I feed the 2048 x 2048 image into the Unique3D_All_Stages workflow, all the results I have obtained have been very good.
Using the original image:
Original image scaled to 2048x2048 before entering it into the workflow:
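For anyone who prefers to script this canvas trick instead of doing it in GIMP, here is a minimal sketch with Pillow; the file names are placeholders, and centring the scaled image is just one way to reproduce what I do by hand.

```python
from PIL import Image

def fit_on_canvas(path_in: str, path_out: str, size: int = 2048) -> None:
    """Scale an image to fit inside a transparent size x size canvas, centred."""
    img = Image.open(path_in).convert("RGBA")
    scale = size / max(img.size)
    new_size = (int(img.width * scale), int(img.height * scale))
    resized = img.resize(new_size, Image.LANCZOS)
    canvas = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    offset = ((size - resized.width) // 2, (size - resized.height) // 2)
    canvas.paste(resized, offset, resized)  # alpha channel used as paste mask
    canvas.save(path_out)

fit_on_canvas("original.png", "original_2048.png")
```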
I hope you try this solution on your machines and comment.
I am now testing with the CharacterGen_to_Unique3D workflow, which also generates the wrong mesh on my computer. I will let you know if I find a solution.
@MrForExample Thank you very much for sharing this great work you are programming.
Greetings.
Thank you for your new discovery. I am also experimenting constantly; in general, larger sizes and clearer pictures give relatively better results.
A new problem I encountered is that the .mtl file always has only this much content. Is that right? I found a .obj and a .mtl in my output and loaded them into Blender, but the character model doesn't have any color. I searched and found that it was because the .mtl file wasn't working, and then I noticed that all the .mtl files had this same, very small amount of content. Something feels wrong.
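In case it helps with debugging, here is a small check I would run on the exported files (only a sketch; "mesh.mtl" is a placeholder for whatever the workflow wrote to the output folder). Blender only picks up the albedo texture if the .mtl contains a map_Kd line pointing to an image that actually exists next to the .obj.

```python
from pathlib import Path

mtl = Path("mesh.mtl")  # placeholder path to the exported material file
lines = mtl.read_text().splitlines()

# Collect every diffuse-texture reference (map_Kd) in the .mtl
tex_refs = [line.split(maxsplit=1)[1] for line in lines
            if line.strip().lower().startswith("map_kd") and len(line.split(maxsplit=1)) > 1]

if not tex_refs:
    print("No map_Kd entry found: Blender has no texture to load, so the mesh appears uncoloured.")
for ref in tex_refs:
    tex_path = mtl.parent / ref
    print(ref, "exists" if tex_path.exists() else "is MISSING")
```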





