ComfyUI-Dwpose-Tensorrt
Ultra fast DWPose estimation inside ComfyUI using TensorRT
  
This project provides a TensorRT implementation of DWPose for ultra fast pose estimation inside ComfyUI.
This project is licensed under CC BY-NC-SA: everyone is free to access, use, modify, and redistribute it under the same license.
For commercial purposes, please contact me directly at [email protected]
If you like the project, please give me a star! ⭐
⏱️ Performance
Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 1000 similar frames.
| Device | FPS | 
|---|---|
| L40s | 20 | 
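The FPS figure above can be reproduced with a simple timing loop like the following (a minimal sketch, not the project's actual benchmark code; `run_inference` is a placeholder for a single DWPose engine call):

```python
import time

def benchmark_fps(run_inference, n_frames=1000):
    """Time n_frames inference calls and return average frames per second."""
    start = time.perf_counter()
    for _ in range(n_frames):
        run_inference()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```

For a realistic number, warm the engine up with a few calls first and feed frames of the same resolution, since the reported benchmark used 1000 similar frames.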
🚀 Installation
Navigate to the ComfyUI `/custom_nodes` directory, then run:

```bash
git clone https://github.com/yuvraj108c/ComfyUI-Dwpose-Tensorrt
cd ./ComfyUI-Dwpose-Tensorrt
pip install -r requirements.txt
```
🛠️ Building Tensorrt Engine
- Download the following onnx models:
- Build TensorRT engines for both of these models by running:
  `python export_trt.py`
- Place the exported engines inside the ComfyUI `/models/tensorrt/dwpose` directory
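If `export_trt.py` does not work in your environment, an FP16 engine can usually also be built directly with NVIDIA's `trtexec` tool (a hedged alternative sketch; the filenames below are placeholders, not the repository's actual paths):

```bash
# Build an FP16 TensorRT engine from an ONNX model (requires a local TensorRT install)
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

Note that TensorRT engines are specific to the GPU and TensorRT version they were built with, so they must be rebuilt when either changes.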
☀️ Usage
- Insert the node via `Right Click -> tensorrt -> Dwpose Tensorrt`
🤖 Environment tested
- Ubuntu 22.04 LTS, Cuda 12.4, Tensorrt 10.2.0.post1, Python 3.10, L40s GPU
- Windows (not tested, but should work)
 
👏 Credits
- https://github.com/IDEA-Research/DWPose
- https://github.com/legraphista/dwpose-video
 
License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)