dreamyou070
Hi, I want to follow the AnimateDiff training. In the tutorial (https://github.com/guoyww/AnimateDiff/blob/main/__assets__/docs/animatediff.md), the first step is fine-tuning with the configs/training/v1/image_finetune.yaml config. However, in the paper there is no UNet fine-tuning process, only LoRA training for...
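(Not the repo's code, just to make the distinction I'm asking about concrete: full image fine-tuning updates all UNet weights, while LoRA-only training freezes the UNet and optimizes only small injected adapters. The module below is a toy stand-in for the SD UNet.)

```python
import torch.nn as nn

# Toy stand-in for the SD UNet, only to illustrate the two training regimes.
unet = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))

# (a) image_finetune.yaml stage in the tutorial: every UNet parameter is trainable.
finetune_params = [p for p in unet.parameters() if p.requires_grad]

# (b) LoRA-only training as described in the paper: freeze the UNet and train
#     only an injected low-rank adapter (a plain Linear here as a stand-in).
for p in unet.parameters():
    p.requires_grad_(False)
lora_adapter = nn.Linear(8, 8, bias=False)
lora_params = list(lora_adapter.parameters())

print(len(finetune_params), len(lora_params))  # all UNet params vs. adapter-only params
```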
In the paper, you found the unimportant SD blocks/layers. In that case, you may not have to retrain the model (because if you erase an unimportant block/layer, the performance is almost...
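(To show what I mean by "erase": my assumption is that an unimportant block can simply be replaced by an identity mapping at inference time, without retraining. A toy sketch, not the paper's code:)

```python
import torch
import torch.nn as nn

# Toy model; model[2] plays the role of a block found to be unimportant.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.Linear(32, 16))

x = torch.randn(4, 16)
before = model(x)

model[2] = nn.Identity()   # "erase" the unimportant block instead of retraining
after = model(x)

print((before - after).abs().mean())  # if the block really is unimportant, this stays small
```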
In the paper, you build the image consistency model with LoRA; is that the lcm_lora?
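(For what I mean by lcm_lora: a distilled image consistency LoRA is usually used like the snippet below with diffusers; whether your spatial LoRA is this same artifact is exactly my question. The model IDs here are just the standard public ones, not necessarily what you used.)

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# LCM-LoRA usage: swap in the LCM scheduler and load the distilled consistency LoRA.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# Few-step sampling with low guidance is the usual LCM-LoRA setting.
image = pipe("an astronaut riding a horse", num_inference_steps=4, guidance_scale=1.0).images[0]
```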
When you train LCM_svd, you set the svd_solver like this: `svd_solver = SVDSolver(args.N, noise_scheduler.config.sigma_min, noise_scheduler.config.sigma_max, 7, 0.7, 1.6)`. Why do you change the training timesteps to args.N (for example 40)? Can I get advice about...
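(My understanding, which may be wrong, is that args.N is the number of ODE-trajectory discretization points used for consistency distillation, so the "training timesteps" collapse from the full schedule to N boundary points. A minimal sketch of a Karras-style sigma discretization under that assumption; `karras_sigmas`, `n_steps`, and `rho` are my own names, not the SVDSolver API:)

```python
import torch

def karras_sigmas(n_steps: int, sigma_min: float, sigma_max: float, rho: float = 7.0) -> torch.Tensor:
    """Karras-style schedule: n_steps noise levels from sigma_max down to sigma_min."""
    ramp = torch.linspace(0, 1, n_steps)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

# Example with N = 40 as in the question; sigma_min/sigma_max values are illustrative.
sigmas = karras_sigmas(40, sigma_min=0.002, sigma_max=700.0)
print(sigmas.shape, sigmas[0].item(), sigmas[-1].item())
```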
In the paper, the T2V or I2V LCM model is distilled from the T2I LCM model. But I think there is no training code or model... Can you check once?
In the paper, you project the motion magnitude onto the position embedding. I am a little confused... Does motion magnitude mean how movable the region (from the GT dataset video) is? Is that...
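(My current guess, not the paper's code: the scalar motion magnitude, e.g. the mean optical-flow strength of the GT clip, is encoded like a timestep via a sinusoidal embedding plus a small MLP, and then added to the temporal position embedding. A sketch under that assumption:)

```python
import math
import torch
import torch.nn as nn

def sinusoidal_embedding(x: torch.Tensor, dim: int) -> torch.Tensor:
    """Standard sinusoidal embedding of one scalar per sample, shape (batch, dim)."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = x.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

class MotionMagnitudeEmbedding(nn.Module):
    """Projects a scalar motion magnitude into the position-embedding channel dim."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, magnitude: torch.Tensor, pos_emb: torch.Tensor) -> torch.Tensor:
        # magnitude: (batch,), pos_emb: (batch, frames, dim)
        emb = self.proj(sinusoidal_embedding(magnitude, pos_emb.shape[-1]))
        return pos_emb + emb[:, None, :]

# Example: 2 clips, 16 frames, 320-dim temporal embedding, motion magnitudes 3.0 and 12.0.
pos_emb = torch.zeros(2, 16, 320)
out = MotionMagnitudeEmbedding(320)(torch.tensor([3.0, 12.0]), pos_emb)
print(out.shape)  # torch.Size([2, 16, 320])
```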
When running AnimateLCM I2V inference, the recommended size is (768, 512). However, it is not possible to run inference on an A100 GPU. In the paper, you trained on an A800 GPU... Is there any...
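(In case it helps others hitting OOM at this resolution: these are the standard diffusers memory-reduction switches I am trying in order to fit (768, 512) on an A100; the checkpoint path below is a placeholder for whatever AnimateLCM I2V pipeline is being loaded.)

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder ID: load whichever AnimateLCM I2V pipeline/checkpoint you are running.
pipe = DiffusionPipeline.from_pretrained("path/to/animatelcm-i2v-checkpoint", torch_dtype=torch.float16)

# Standard diffusers memory-reduction options:
pipe.enable_model_cpu_offload()   # keep submodules on CPU, move each to GPU only when used
pipe.enable_vae_slicing()         # decode latents slice by slice instead of all frames at once
pipe.enable_attention_slicing()   # compute attention in chunks to lower peak memory

# Then run inference at the recommended size, e.g.:
# video = pipe(image=..., height=512, width=768, num_inference_steps=4)
```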