Apple macOS Support
Please add support for Apple macOS. Expanding platform support to macOS will increase accessibility and user base.
Shape generation works on macOS without problems; you can try it via:
export PYTORCH_ENABLE_MPS_FALLBACK=1
python3 gradio_app.py --device mps --enable_flashvdm
For the requirements:
pip install -r requirements.txt
Don't worry about the texture requirements; you can safely skip them.
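As a quick sanity check before running the command above, you can confirm that your torch build actually exposes MPS (standard torch APIs only; a recent PyTorch on Apple Silicon is assumed):

```python
import torch

# Verify this torch build can see the Apple GPU before launching gradio_app.py.
print("torch version:", torch.__version__)
print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    # A trivial op on the 'mps' device confirms the backend works end to end.
    x = torch.ones(3, device="mps")
    print((x * 2).cpu())
```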
Thanks.
Can it be configured to run on "Pinokio"?
Could these instructions be clearly stated in the README?
I was able to get shape generation to work on a Mac M2 24GB (60 seconds). Following the initial instructions was not enough...
export PYTORCH_ENABLE_MPS_FALLBACK=1
python3 gradio_app.py --device mps --enable_flashvdm
I still got "Torch not compiled with CUDA enabled" errors, so I modified the code to change 'cuda' to 'mps'. Then, in schedulers.py, add:
at around line 406: timesteps = timesteps.to(torch.float32)
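For anyone making the same edits, here is a rough sketch of what the 'cuda' → 'mps' change and the float32 cast amount to; it's a generic illustration using standard torch calls, not the actual repo code:

```python
import torch

# Pick the best available device instead of hard-coding 'cuda'.
def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()

# MPS has no float64 support, so the scheduler's timesteps have to be cast
# down to float32 before being moved to the device (the line-406 change above).
timesteps = torch.linspace(1000.0, 0.0, 50, dtype=torch.float64)
timesteps = timesteps.to(torch.float32).to(device)
```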
The issue now is that if I want to get textures,
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
will not work on a Mac, since it requires building a CUDAExtension, which we cannot do on a Mac...
Is there an alternative to building a CUDAExtension?
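Not a full alternative, but one generic pattern is to fall back to a plain C++ extension when CUDA isn't available, so that setup.py at least builds; this is only a sketch (the source file names are placeholders, not the real custom_rasterizer layout), and the CUDA kernels themselves would still need a CPU or Metal port:

```python
# setup.py sketch: pick the extension type based on CUDA availability.
# Source file names below are placeholders, not the actual custom_rasterizer files.
from setuptools import setup
import torch
from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension

if torch.cuda.is_available():
    ext = CUDAExtension(
        name="custom_rasterizer_kernel",
        sources=["rasterizer.cpp", "rasterizer_kernel.cu"],  # placeholders
    )
else:
    ext = CppExtension(
        name="custom_rasterizer_kernel",
        sources=["rasterizer_cpu.cpp"],  # placeholder: a CPU implementation is still needed
    )

setup(
    name="custom_rasterizer",
    ext_modules=[ext],
    cmdclass={"build_ext": BuildExtension},
)
```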
timesteps = timesteps.to(torch.float32)
You do not need to do this if you use the latest torch and a Mac that's not too old. With the PR I sent above, I'm generating with mini-turbo in around 30 seconds on my M1 Max 64GB.
Now if only we could generate textures on Macs as well... it would be crazy.
Yeah! Texture would be awesome.
Hi All!
I also tried to run it on a MacBook with an M2 processor and 64GB. I get the result below.
PyTorch is version 2.6.0 and every installation step was followed; all settings are kept at the defaults shown when opening the page.
I attempted the fix mentioned for schedulers.py, which is what allowed me to generate anything at all.
Also, after some tinkering, I got a result which looks cool, and my eyes try to see a horse? (I undid all changes with a git pull, assuming maybe things were fixed.)
@on-d-mand Have you tried installing "Hunyuan3D-2 mini-turbo" via the "Pinokio" installer? https://pinokio.computer/
Actually attempting that as we speak; maybe it offers a working solution! But of course the issue is then still related to this repo, which could work with the right input/commits.
I just tried it through Pinokio, and the MultiView prompt works! I don't have time today to check whether single view works, but this is what I was looking for anyway! I guess there's a difference in the calls that are catered to Pinokio's setup.
My general idea, however, was to render many 3D models from a folder as a batch, which I can't find in the premade Pinokio interface (hence wanting to use a Python script)... But I think that's out of scope here :-)
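In case it helps, here is a minimal sketch of that kind of batch script, assuming the Hunyuan3DDiTFlowMatchingPipeline API shown in the repo README; the folder names are placeholders, and on a Mac you would still need the device tweaks discussed above:

```python
import os
from pathlib import Path

# The fallback variable should be in the environment before torch is imported.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline  # API as in the repo README

INPUT_DIR = Path("input_images")    # placeholder: folder of source images
OUTPUT_DIR = Path("output_meshes")  # placeholder: folder for generated meshes
OUTPUT_DIR.mkdir(exist_ok=True)

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2")

for image_path in sorted(INPUT_DIR.glob("*.png")):
    mesh = pipeline(image=str(image_path))[0]   # the pipeline returns a list of meshes
    out_path = OUTPUT_DIR / f"{image_path.stem}.glb"
    mesh.export(out_path)                       # trimesh-style export
    print(f"Exported {out_path}")
```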
Thanks for the recommendation of Pinokio. It’s quite useful for various purposes, but unfortunately it also generates cubes for me (single view; I’m currently trying the multi-view option).
@on-d-mand Thanks for sharing. "Pinokio" is indeed very useful.
@cocktailpeanut Users are reporting a sub-issue regarding "single image to 3D model" generation, via both this original repository and forks of it.
They claim the installer generates cubes instead of the intended/expected mesh model.
Also, users report that "multi-view" works as expected.
Please advise.
@israelandrewbrown, that has been my experience on my M4 Pro Mac mini 64GB.
It generated the Petri-dish-looking thing once; after that it was only ever cubes.
Having the same issues. The mini models don't quite work for me on an M4 (they produce cubes), but Hunyuan3D-DiT-v2 and the MV models work fine.
For anyone who needs it, here is a semi-working flow:
- MPS/Apple Silicon accelerated Model Generation/Generate Mesh ✅
- Delight (Fully Broken, just outputs a black square... maybe something wrong with img2img?) ❌
- Custom rasterizer/Render Multiview ✅ (Ported cuda renderer to metal compute shader)
- Sample Multiview - Partially broken (only works with a few diffusion steps, otherwise the texture gets all grey and washed out)
Unfortunately I don't have the bandwidth to clean this up, but if anyone needs it, you can replace your hy3dgen folder in the ComfyUI extension "ComfyUI-Hunyuan3DWrapper": [hy3dgen.zip](https://github.com/user-attachments/files/19859950/hy3dgen.zip)
Hope this might be helpful to anyone working on a clean, fully working port!
Hi Cf,
I have downloaded the zip you provided, got the Hy3D Render MultiView node working, and managed to create normal maps & position maps, though Hy3D Sample MultiView kept giving a "Hy3DSampleMultiView Torch not compiled with CUDA enabled" error.
How did you get around this problem? I was only able to get normals and positions, and basically no texture maps.
Thanks so much in advance
I love your model, but I also get cubes when setting the device to "mps". It does work with "cpu", though slowly.
Setting os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" did not have any effect as far as I can tell.
I did try setting low_vram_mode=True and enable_flashvdm=True, though I'm not sure I did it the right way (I followed ChatGPT's advice, which may not have been correct).
Would it be possible for you to provide a minimal program with the necessary hacks to make it work with Metal using Python 3.12, if it is possible to make it run on the Apple GPU at the moment?
Appreciate your hard work. The model is excellent!
(Apple M2 24GB, Sequoia, both mini-turbo and 2.1)
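Not sure whether it explains the cubes, but one thing worth checking: PYTORCH_ENABLE_MPS_FALLBACK generally has to be set before torch is imported (or exported in the shell before launching Python); setting it afterwards may simply be ignored. A minimal check:

```python
import os

# Set the fallback flag before torch is imported; setting it after the import
# may have no effect, which could explain why it seemed to do nothing.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())
```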
Rewrote the demo Gradio app here. I've also added a model switcher in the UI so you can switch between 2.0 Turbo, 2.0 Normal, and 2.1 Normal without needing to restart the app.
I haven't included the mini models, as any time I used them with MPS enabled, all you got was a cube.
uv is required, as I personally use it for my Python package management rather than conda. Just brew install it.
I've removed all texture painting in this version; it's purely for model generation, which is my personal use case.
I've also split the preview render into a separate chained task: ShapeGen creates the shape, an STL is output (and downloadable from the UI) at max face resolution, then it's downscaled to GLB and displayed in the app.
With the version of torch the start_mac.sh script installs (2.8), I get turbo model generation in 10 seconds at 256 octree resolution.
It averages 27 seconds for 512 octree.
And Hunyuan3D 2.1 takes around 170 seconds at 5.5 CFG / 512 octree.
These times are on an M4 studio
I have also tested on an M4 mini, and an M1 studio
This version runs on anything (it auto-detects MPS vs CUDA vs CPU), and there are start scripts for all platforms. Also, I changed where models are actually loaded: the first time the UI is loaded, that's when Turbo is downloaded / loaded from the cache on disk ($HOME/.cache/huggingface), and then each time a different model is selected in the UI, it is downloaded as it loads.
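For anyone curious, the load-on-demand behaviour described above can be sketched roughly like this (a hypothetical illustration, not the actual app code; it reuses the pipeline API from the repo README):

```python
from functools import lru_cache

# Hypothetical sketch of load-on-demand model switching: nothing is loaded at
# startup; the first time a model id is selected in the UI, it is downloaded
# (or read from $HOME/.cache/huggingface) and then kept in memory.
@lru_cache(maxsize=None)
def get_pipeline(model_id: str):
    from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
    return Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(model_id)

def generate(model_id: str, image_path: str):
    # UI callback: switching models just changes model_id; cached ones are reused.
    mesh = get_pipeline(model_id)(image=image_path)[0]
    return mesh
```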
Hopefully my PR that turns diso into a multi-architecture library with MPS and CPU backends will be merged, and we'll all be able to benefit from accelerated support on Apple MPS :)
https://github.com/SarahWeiii/diso/pull/24