[SIGGRAPH 2025] Drag-Your-Gaussian
Official implementation of the paper:
"Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting"
TL;DR
DYG allows intuitive and flexible 3D scene editing by enabling users to drag 3D Gaussians while preserving fidelity and structure.
Introduction Video
https://github.com/user-attachments/assets/1e484ff9-f44c-4995-a99d-453cf0f11f95
Visit our Project Page for more examples and visualizations.
Installation
Clone the repository:
git clone https://github.com/Quyans/Drag-Your-Gaussian.git
cd Drag-Your-Gaussian
git submodule update --init --recursive
Create a new conda environment:
conda env create --file environment.yaml
conda activate DYG
Data Preparation
Follow the 3DGS pipeline to reconstruct your scene.
We recommend setting the spherical harmonic degree to 0.
Alternatively, you can use our prepared example data.
Example structure (e.g., face scene):
data
└── face
    ├── export_1
    │   ├── drag_points.json
    │   └── gaussian_mask.pt
    ├── image
    ├── sparse
    └── point_cloud.ply
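The exact schema of drag_points.json is defined by the WebUI exporter; as an illustration only (the field names below are hypothetical, not taken from the repository), a drag edit can be serialized as pairs of 3D start/end handle points:

```python
import json

# Hypothetical schema for drag_points.json: each drag is a pair of 3D points,
# a source handle on the scene and a target position to drag it toward.
# The real field names are defined by the DYG WebUI exporter.
drag_points = {
    "pairs": [
        {"start": [0.12, -0.30, 1.05], "end": [0.12, -0.10, 1.05]},
        {"start": [-0.25, 0.40, 0.98], "end": [-0.20, 0.55, 0.98]},
    ]
}

serialized = json.dumps(drag_points, indent=2)
restored = json.loads(serialized)
print(len(restored["pairs"]))  # number of drag pairs
```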
Diffusion Prior
We use LightningDrag as the diffusion prior. Follow the LightningDrag installation guide to download the required models.
Organize them as follows:
checkpoints
├── dreamshaper-8-inpainting
├── lcm-lora-sdv1-5/
│   └── pytorch_lora_weights.safetensors
├── sd-vae-ft-ema/
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   └── diffusion_pytorch_model.safetensors
├── IP-Adapter/models/
│   ├── image_encoder
│   └── ip-adapter_sd15.bin
└── lightning-drag-sd15/
    ├── appearance_encoder/
    │   ├── config.json
    │   └── diffusion_pytorch_model.safetensors
    ├── point_embedding/
    │   └── point_embedding.pt
    └── lightning-drag-sd15-attn.bin
Training
WebUI
Launch the WebUI:
python webui.py --colmap_dir <path_to_colmap> --gs_source <path_to_pointcloud.ply> --output_dir <save_path>
Example:
python webui.py --colmap_dir ./data/face/ --gs_source ./data/face/point_cloud.ply --output_dir result
You can train directly in the WebUI. Alternatively, after selecting drag points and masks, export the files and run:
python drag_3d.py --config configs/main.yaml --colmap_dir ./data/face/ --gs_source ./data/face/point_cloud.ply --point_dir ./data/face/export_1/drag_points.json --mask_dir ./data/face/export_1/gaussian_mask.pt --output_dir result
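When editing several exports of the same scene, the headless invocation above can be assembled programmatically. A minimal sketch that uses only the flags shown above (the scene paths and the `build_drag_command` helper are illustrative, not part of the repository):

```python
import subprocess

def build_drag_command(scene_dir, export_name, output_dir="result"):
    """Assemble the headless drag_3d.py invocation from the flags above."""
    return [
        "python", "drag_3d.py",
        "--config", "configs/main.yaml",
        "--colmap_dir", scene_dir,
        "--gs_source", f"{scene_dir}/point_cloud.ply",
        "--point_dir", f"{scene_dir}/{export_name}/drag_points.json",
        "--mask_dir", f"{scene_dir}/{export_name}/gaussian_mask.pt",
        "--output_dir", output_dir,
    ]

cmd = build_drag_command("./data/face", "export_1")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to launch the edit
```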
Citation
If you find our work useful, please cite:
@article{qu2025drag,
title={Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting},
author={Qu, Yansong and Chen, Dian and Li, Xinyang and Li, Xiaofan and Zhang, Shengchuan and Cao, Liujuan and Ji, Rongrong},
journal={arXiv preprint arXiv:2501.18672},
year={2025}
}
License
This project is licensed under CC BY-NC-SA 4.0.
The code is intended for academic research purposes only.
Contact
For any questions or collaborations, feel free to contact:
[email protected]