Results: 11 issues by chuck ma

I am very interested in training a new ControlNet model. After studying the kandinsky-2-2-controlnet-depth model uploaded to Hugging Face, I found that its architecture seems to differ from the ControlNet...

I am very interested in training a new ControlNet model. After studying the kandinsky-2-2-controlnet-depth model uploaded to Hugging Face, I found that its architecture differs from the ControlNet model of traditional Stable Diffusion. As I understand it,...
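For reference, a minimal sketch of loading the published depth ControlNet pipeline with diffusers, as a starting point for comparing architectures. The kandinsky-community model IDs follow the Hugging Face model cards; the random depth hint is only a placeholder (in practice it comes from a depth estimator), and the exact call signature may vary slightly across diffusers versions.

```python
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline

# The prior turns the text prompt into CLIP image embeddings.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")

# The decoder UNet additionally consumes a depth "hint" tensor.
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

prompt = "a robot sitting on a park bench"
image_emb, negative_emb = prior(prompt, negative_prompt="low quality").to_tuple()

# Placeholder depth map with shape (batch, 3, H, W); normally produced by a depth model.
hint = torch.rand(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=negative_emb,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
```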

I found that an error occurs when guidance_scale...

I am currently using onediff, and the total time for compile_pipe and load_pipe is about 30 seconds (even after the first compilation and save_pipe). This is excessively long in...
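For context, a minimal sketch of the compile/save/load cycle being timed, assuming the onediffx helpers compile_pipe, save_pipe, and load_pipe; the base model ID and the cache directory name are placeholders, and the exact keyword names may differ between onediff releases.

```python
import time
import torch
from diffusers import StableDiffusionXLPipeline
from onediffx import compile_pipe, save_pipe, load_pipe

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

t0 = time.time()
pipe = compile_pipe(pipe)            # wrap the pipeline with the OneFlow compiler backend
load_pipe(pipe, dir="cached_pipe")   # reuse graphs saved by a previous run
print(f"compile_pipe + load_pipe took {time.time() - t0:.1f}s")

# The first inference still triggers (re)compilation of anything not covered by the cache.
image = pipe("a photo of a cat", num_inference_steps=30).images[0]

# Persist the compiled graphs so a later process can call load_pipe instead of recompiling.
save_pipe(pipe, dir="cached_pipe")
```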

sig-compiler

https://github.com/guoyww/AnimateDiff/assets/74402255/e948d822-942d-4e34-b59d-db091f4594d1 I use DynaVisionXL as the base model; everything else is the default config. width: 1344, height: 768, prompt: "(symbolism art designed by Edward Okuń:1.1), [Airy:Amiga 500 Style:5] a...

"I'm using RGB SparseCtrl and AnimateDiff v3. The color of the first frame is much lighter than the subsequent frames. Consequently, if I continuously loop the last frame as the...

### Your current environment information PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OneFlow version: path: ['/root/miniconda3/lib/python3.10/site-packages/oneflow'], version: 0.9.1.dev20240731+cu118,...

Request-bug

https://github.com/pkhungurn/talking-head-anime-2-demo 阅读了上面的代码,发现其生成对应的脸部的主要方法是通过 arkit的blendshape 为驱动,生成对应的脸部动画 所以想知道,怎么基于 landmarks,生成对应的blendshape ? 网上似乎没有开源的模型或项目可以实现这一点。 https://github.com/google/mediapipe 开源了根据图片生成 blendshape 的模型,但是这个模型得到的blendshape值看起来有很多问题
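For reference, a minimal sketch of extracting the ARKit-style blendshape scores that MediaPipe's Face Landmarker task returns for a single image (the model mentioned above); the `face_landmarker.task` model file must be downloaded from the MediaPipe releases first, and the image path is a placeholder.

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Face Landmarker task with blendshape output enabled.
options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("face.jpg")
result = landmarker.detect(image)

# Each entry maps an ARKit-style blendshape name to a 0..1 activation score.
for blendshape in result.face_blendshapes[0]:
    print(f"{blendshape.category_name}: {blendshape.score:.3f}")
```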

It'd be great to have the XLabs IP-Adapter supported in diffusers. Code: https://github.com/XLabs-AI/x-flux/ Checkpoint: https://huggingface.co/XLabs-AI/flux-ip-adapter/tree/main @sayakpaul
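For context, this is the IP-Adapter loading pattern diffusers already exposes for SD/SDXL pipelines, which a Flux integration of the XLabs checkpoint would presumably mirror; the SDXL base model, adapter weight name, and reference image path here are placeholders, not part of the request.

```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Existing diffusers IP-Adapter API (here with the h94/IP-Adapter SDXL weights).
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)

reference = load_image("reference.png")  # placeholder style/content reference image
image = pipe(
    prompt="a cat wearing a spacesuit",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
```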

contributions-welcome
IPAdapter