Why is the latent duplicated during inference? Can this be removed?
During inference: why is https://github.com/hpcaitech/Open-Sora/blob/a37a189482a4cd1c7892aa06881e539cbf8078ce/opensora/schedulers/iddpm/init.py#L66 done first, followed by https://github.com/hpcaitech/Open-Sora/blob/a37a189482a4cd1c7892aa06881e539cbf8078ce/opensora/schedulers/iddpm/init.py#L83? Are there any drawbacks to manually removing this?
I noticed the introduction of y_null:
https://github.com/hpcaitech/Open-Sora/blob/a37a189482a4cd1c7892aa06881e539cbf8078ce/opensora/schedulers/iddpm/init.py#L69
But why does this concatenation at the sample level affect the output samples?
This involves classifier-free guidance (CFG). You can refer to the forward_with_cfg function in the linked file; it performs the fusion of the outputs across the paired samples.
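To make the fusion concrete, here is a minimal PyTorch sketch of the CFG pattern that a forward_with_cfg-style function implements. It is not the exact Open-Sora code: the function name `sample_with_cfg`, the parameter `cfg_scale`, and the assumption that `model(z, t, y)` directly predicts noise are illustrative; the real scheduler duplicates the latent at the lines linked above and does the split/fusion inside forward_with_cfg.

```python
import torch

def sample_with_cfg(model, z, t, y, y_null, cfg_scale):
    """One denoising step with classifier-free guidance (illustrative sketch).

    z:       latent, shape [B, C, ...]
    t:       timesteps, shape [B]
    y:       conditional (text) embedding, shape [B, ...]
    y_null:  learned "null"/unconditional embedding, broadcast to [B, ...]
    """
    # 1. Duplicate the latent so the conditional and unconditional branches
    #    see exactly the same noisy input (the torch.cat([z, z]) step).
    z_in = torch.cat([z, z], dim=0)             # [2B, C, ...]
    t_in = torch.cat([t, t], dim=0)             # [2B]
    y_in = torch.cat([y, y_null], dim=0)        # first half: cond, second half: uncond

    # 2. A single batched forward pass computes both predictions. Samples do
    #    not mix inside the network (attention/conv act per sample); the
    #    concatenation only lets both branches share one forward pass.
    eps = model(z_in, t_in, y_in)               # [2B, C, ...]
    eps_cond, eps_uncond = eps.chunk(2, dim=0)  # split the two halves back apart

    # 3. Fuse the two predictions. This is where the "sample-level" pairing
    #    actually changes the output: each guided prediction combines its
    #    conditional and unconditional counterparts.
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)
```

On the drawback of removing the duplication: without the second (y_null) half there is no eps_uncond, so the combination above degenerates to plain conditional prediction. Sampling still runs, but you lose the guidance-scale control and typically get weaker adherence to the prompt.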