Amazon SageMaker deployment support
Is your feature request related to a problem? Please describe.
I’d like to deploy Hunyuan3D-2 as an inference endpoint on Amazon SageMaker, but there’s currently no guide or native support for this.
Describe the solution you’d like
A deployment guide or example showing how to package and run Hunyuan3D-2 on a SageMaker endpoint (e.g., using a PyTorch model or custom inference script).
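For reference, SageMaker's prebuilt PyTorch container expects model artifacts packaged as a `model.tar.gz` with weights at the archive root and serving code under `code/`. The sketch below lays out that structure; all file names (`weights.bin`, `inference.py`, the `trimesh` requirement) are placeholders, not the project's actual artifact names.

```python
# Sketch: build a model.tar.gz in the layout the SageMaker PyTorch container
# expects (weights at the root, serving code under code/), ready for S3 upload.
# File names below are placeholders standing in for real Hunyuan3D-2 artifacts.
import pathlib
import tarfile

root = pathlib.Path("model")
(root / "code").mkdir(parents=True, exist_ok=True)

# In a real run you would copy the Hunyuan3D-2 checkpoint files into `root`
# and your handler into code/inference.py; empty placeholders keep this runnable.
(root / "weights.bin").write_bytes(b"")             # placeholder for checkpoints
(root / "code" / "inference.py").write_text("")     # placeholder for the handler
(root / "code" / "requirements.txt").write_text("trimesh\n")  # extra deps installed at startup

with tarfile.open("model.tar.gz", "w:gz") as tar:
    for path in root.rglob("*"):
        # recursive=False so directory contents are not added twice
        tar.add(path, arcname=str(path.relative_to(root)), recursive=False)
```

The resulting archive is what you would pass as `model_data` when creating a `PyTorchModel` with the SageMaker Python SDK.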
Describe alternatives you’ve considered
I’ve considered using a custom container, or manually writing an inference script for a SageMaker model endpoint, but official guidance would help streamline the process.
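As a starting point for the inference-script route, SageMaker's PyTorch serving stack looks for `model_fn`/`input_fn`/`predict_fn`/`output_fn` hooks in `inference.py`. Below is a minimal sketch of such a handler; the `Hunyuan3DDiTFlowMatchingPipeline` import, the pipeline call signature, and the JSON payload shape (`image_b64`) are assumptions about the project's API, not confirmed behavior.

```python
# inference.py -- sketch of a SageMaker custom inference handler for Hunyuan3D-2.
# The pipeline import/call and the request payload shape are assumptions.
import base64
import io
import json


def model_fn(model_dir):
    # Load the pipeline from the artifacts SageMaker unpacks into model_dir.
    # Hypothetical import; adjust to the repo's actual API.
    from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
    return Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(model_dir)


def input_fn(request_body, content_type="application/json"):
    # Expect a JSON payload carrying a base64-encoded input image.
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    payload = json.loads(request_body)
    return base64.b64decode(payload["image_b64"])


def predict_fn(image_bytes, pipeline):
    # Run image-to-3D generation and export the mesh as GLB bytes.
    from PIL import Image
    image = Image.open(io.BytesIO(image_bytes))
    mesh = pipeline(image=image)[0]
    buf = io.BytesIO()
    mesh.export(buf, file_type="glb")
    return buf.getvalue()


def output_fn(mesh_bytes, accept="application/octet-stream"):
    # Return the binary mesh directly to the caller.
    return mesh_bytes
```

This would ship inside the model archive's `code/` directory so the endpoint picks it up at startup.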
Additional context
Many users in cloud environments may prefer SageMaker for managed inference. Supporting this could expand adoption.
Thanks!