AnimateDiff
[INFORMATION REQUEST] Action words that the v3 motion module can effectively respond to?
Does anyone have a list of the motion training tags that v3 (or any of the earlier motion models) was trained on and can effectively respond to? I find it hit-and-miss when running prompts. Some things, like breakdancing, come out bizarre, while others, like skydiving, seem to register well.
I had more success using ControlNets. Find a video of someone breakdancing and use that as an input (maybe OpenPose, Canny, or depth).
Do you know if the motion loras for V2 work for V3 as well?
Yes, you could use ControlNet with vid2vid and stylize someone breakdancing. But then why have a motion model at all? What if I want three pandas dancing on top of a bus? I would then have to find a video of three dancers on a platform. The power of prompt-to-image or prompt-to-video is the creative flexibility. It's the same reason I personally advocate against motion LoRAs, which force you to separate parameters out of your prompt text. Why not train the motion model on panning and zooming so you could include those directly in your prompt?
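Since no official tag list has been published, the practical workaround is empirical testing: render the same prompt with different motion words and judge the results by eye. Below is a minimal sketch of that loop, assuming the Hugging Face diffusers AnimateDiff API; the base-model ID, adapter ID, and output filename here are illustrative choices, not anything the thread specifies.

```python
# Sketch: probing which motion words the AnimateDiff v3 motion module
# responds to, using the Hugging Face diffusers library.
# Model IDs below are illustrative assumptions.

def make_animatediff_v3_pipeline(base_model="emilianJR/epiCRealism"):
    """Build an AnimateDiff pipeline with the v3 motion adapter."""
    import torch
    from diffusers import AnimateDiffPipeline, MotionAdapter

    adapter = MotionAdapter.from_pretrained(
        "guoyww/animatediff-motion-adapter-v1-5-3"  # v3 motion module
    )
    pipe = AnimateDiffPipeline.from_pretrained(
        base_model, motion_adapter=adapter, torch_dtype=torch.float16
    )
    return pipe


def try_motion_prompt(pipe, prompt, num_frames=16):
    """Render a short clip so a motion word can be judged by eye."""
    result = pipe(
        prompt=prompt,
        num_frames=num_frames,
        guidance_scale=7.5,
        num_inference_steps=25,
    )
    return result.frames[0]  # a list of PIL images


if __name__ == "__main__":
    from diffusers.utils import export_to_gif

    pipe = make_animatediff_v3_pipeline()
    frames = try_motion_prompt(pipe, "three pandas dancing on top of a bus")
    export_to_gif(frames, "pandas_dancing.gif")
```

Keeping the seed and base model fixed while swapping only the motion word makes it easier to attribute differences to the motion module rather than the image model.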
Back to the original question: What was the motion model trained on and what does it understand?
Do you know if the motion loras for V2 work for V3 as well?
I don't think they do, based on my tests.
Yes, the author said the motion LoRAs only work with V2.