[SIGGRAPH 2025] Drag-Your-Gaussian

Paper PDF | Project Page

Official implementation of the paper:
"Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting."

😊 TL;DR

DYG allows intuitive and flexible 3D scene editing by enabling users to drag 3D Gaussians while preserving fidelity and structure.

🎥 Introduction Video

https://github.com/user-attachments/assets/1e484ff9-f44c-4995-a99d-453cf0f11f95

Visit our Project Page for more examples and visualizations.

🔧 Installation

Clone the repository:

git clone https://github.com/Quyans/Drag-Your-Gaussian.git
cd Drag-Your-Gaussian
git submodule update --init --recursive 

Create a new conda environment:

conda env create --file environment.yaml
conda activate DYG
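
Optional sanity check (not part of the repository): confirm that the activated environment provides a CUDA-enabled PyTorch build, which the Gaussian rasterizer requires.

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If this prints False, revisit the CUDA and PyTorch versions pinned in environment.yaml before proceeding.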

📚 Data Preparation

Follow 3DGS for reconstruction.
We recommend setting the spherical harmonic degree to 0.
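
In the original 3DGS codebase this corresponds to the --sh_degree argument of train.py (verify the flag against the 3DGS version you use), for example:

python train.py -s <path_to_colmap_scene> --sh_degree 0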

Alternatively, you can use our prepared example data.
Example structure (e.g., face scene):

└── data
    └── face
        ├── export_1
        │   ├── drag_points.json
        │   └── gaussian_mask.pt
        ├── image
        ├── sparse
        └── point_cloud.ply
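
To sanity-check an export before training, a minimal inspection sketch such as the one below can help. It only assumes that drag_points.json is plain JSON and that gaussian_mask.pt was saved with torch.save; the exact schema is whatever the WebUI exports.

import json
import torch

export_dir = "data/face/export_1"  # example export from the tree above

# Drag points: assumed to describe the source/target pairs selected in the WebUI.
with open(f"{export_dir}/drag_points.json") as f:
    drag_points = json.load(f)
print(drag_points)

# Gaussian mask: assumed to be a per-Gaussian tensor marking the editable region.
mask = torch.load(f"{export_dir}/gaussian_mask.pt", map_location="cpu")
print(type(mask), getattr(mask, "shape", None))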

🔄 Diffusion Prior

We use LightningDrag as the diffusion prior. Follow the LightningDrag Installation Guide to download the required models.
Organize them as follows:

└── checkpoints
    ├── dreamshaper-8-inpainting
    ├── lcm-lora-sdv1-5/
    │   └── pytorch_lora_weights.safetensors
    ├── sd-vae-ft-ema/
    │   ├── config.json
    │   ├── diffusion_pytorch_model.bin
    │   └── diffusion_pytorch_model.safetensors
    ├── IP-Adapter/models/
    │   ├── image_encoder
    │   └── ip-adapter_sd15.bin
    └── lightning-drag-sd15/
        ├── appearance_encoder/
        │   ├── config.json
        │   └── diffusion_pytorch_model.safetensors
        ├── point_embedding/
        │   └── point_embedding.pt
        └── lightning-drag-sd15-attn.bin
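
A small check like the sketch below (not part of the repository) can confirm the layout before launching training; the paths simply mirror the tree above.

import os

required = [
    "checkpoints/dreamshaper-8-inpainting",
    "checkpoints/lcm-lora-sdv1-5/pytorch_lora_weights.safetensors",
    "checkpoints/sd-vae-ft-ema/config.json",
    "checkpoints/IP-Adapter/models/image_encoder",
    "checkpoints/IP-Adapter/models/ip-adapter_sd15.bin",
    "checkpoints/lightning-drag-sd15/appearance_encoder/config.json",
    "checkpoints/lightning-drag-sd15/point_embedding/point_embedding.pt",
    "checkpoints/lightning-drag-sd15/lightning-drag-sd15-attn.bin",
]
missing = [p for p in required if not os.path.exists(p)]
print("All checkpoints found." if not missing else f"Missing: {missing}")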

🚋 Training

🖥️ WebUI

Launch the WebUI:

python webui.py --colmap_dir <path_to_colmap> --gs_source <path_to_pointcloud.ply> --output_dir <save_path>

Example:

python webui.py --colmap_dir ./data/face/ --gs_source ./data/face/point_cloud.ply --output_dir result

You can train directly in the WebUI. Alternatively, after selecting drag points and masks, export the files and run:

python drag_3d.py --config configs/main.yaml \
                  --colmap_dir ./data/face/ \
                  --gs_source ./data/face/point_cloud.ply \
                  --point_dir ./data/face/export_1/drag_points.json \
                  --mask_dir ./data/face/export_1/gaussian_mask.pt \
                  --output_dir result
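
Assuming the edited scene is exported as a standard 3DGS .ply under --output_dir (check the actual filename in your result folder), you can take a quick look with any PLY reader, e.g. the plyfile package:

from plyfile import PlyData  # pip install plyfile

# Adjust the path to the .ply that drag_3d.py writes under --output_dir.
ply = PlyData.read("result/point_cloud.ply")
vertex = ply["vertex"]
print(vertex.count, "Gaussians")
print([p.name for p in vertex.properties])  # positions, SH coefficients, opacity, scales, rotations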

📖 Citation

If you find our work useful, please cite:

@article{qu2025drag,
  title={Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting},
  author={Qu, Yansong and Chen, Dian and Li, Xinyang and Li, Xiaofan and Zhang, Shengchuan and Cao, Liujuan and Ji, Rongrong},
  journal={arXiv preprint arXiv:2501.18672},
  year={2025}
}

📄 License

This project is licensed under the CC BY-NC-SA 4.0 license.
The code is intended for academic research purposes only.

📬 Contact

For any questions or collaborations, feel free to contact:
📧 [email protected]