Jack
> @gel-crabs Yeah, most of the U-Net forward hijacking functions won't work with this, since it assumes the nearby steps' effects are similar. > > Some more academic stuff: > >...
> Please remove the LCM sampler for the time being. I am collaborating with the original author and have implemented an LCM sampler in my [AnimateDiff extension](https://github.com/continue-revolution/sd-webui-animatediff/blob/master/scripts/animatediff_lcm.py); the original author will debug it line-by-line...
> > "batch_size": 1, > > "n_iter": 4, > > Try setting batch_size to 4 and leaving n_iter out. You don't need _all_ the parameters, just the ones you're using....
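A minimal sketch of the suggestion above, assuming the standard webui `/sdapi/v1/txt2img` JSON endpoint; the `build_payload` helper and the local URL are illustrative, not part of the webui API:

```python
import json

def build_payload(prompt, batch_size=4):
    # Illustrative helper: send only the parameters you actually use.
    # batch_size=4 generates four images in one batched forward pass,
    # so n_iter (which instead reruns the sampler sequentially) is omitted.
    return {"prompt": prompt, "batch_size": batch_size}

payload = build_payload("a photo of a cat", batch_size=4)
body = json.dumps(payload)
# POST `body` to e.g. http://127.0.0.1:7860/sdapi/v1/txt2img
```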
> Don't add it! You will kill AMD. Or add an option to support AIT? Besides, isn't AMD supported on the normal stable diffusion webui?
> Hi there! Great work! > > Is it possible to run batched inference? > > Thanks! Same question.
Does async_stream_infer maybe need a package_input?
> This issue is probably the input shape not matching. Also, the folder contains only script files, no model. My model's interpreter.getSessionInput(session) has a dynamic shape (1, 3, -1, -1); at inference time the actual input is (1, 3, 800, 800), yet it still reports "Can't run session because not resized". What is going on?
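With a dynamic input shape like (1, 3, -1, -1), the session must be explicitly resized to the concrete shape before running inference (in MNN, via `resizeTensor` followed by `resizeSession`). A minimal pure-Python sketch of the shape-resolution step; `resolve_dynamic_shape` is an illustrative helper, and the MNN calls appear only as comments:

```python
def resolve_dynamic_shape(declared, actual):
    # Illustrative helper: match a declared shape with -1 wildcards
    # (e.g. (1, 3, -1, -1)) against the concrete input shape, returning
    # the shape the session must be resized to before inference.
    if len(declared) != len(actual):
        raise ValueError(f"rank mismatch: {declared} vs {actual}")
    for d, a in zip(declared, actual):
        if d != -1 and d != a:
            raise ValueError(f"shape mismatch: {declared} vs {actual}")
    return tuple(actual)

concrete = resolve_dynamic_shape((1, 3, -1, -1), (1, 3, 800, 800))
# The concrete shape is then applied before running the session, e.g. in MNN:
#   interpreter.resizeTensor(input_tensor, concrete)
#   interpreter.resizeSession(session)
```

Skipping the resize step while the -1 dimensions are still unresolved is what triggers the "not resized" error.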
> Syncing my progress on this: > > # Initial symptom > On the cloud service, after initializing once globally and running inference multiple times, I hit the error: RuntimeError: could not execute a primitive > > # Follow-up attempts > 1. On a local server with the same docker image and the same code, there was no problem at all; I tried both Intel and AMD CPUs, all recent consumer models. > 2. Finally, on the cloud service, **re-initializing the model before every forward pass** solved the problem. Doesn't that add latency? Have you measured how large the impact is?
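The overhead question above can be answered empirically by timing the two strategies side by side. A minimal sketch with a stand-in model; the `DummyModel` class and its costs are illustrative, not the real workload:

```python
import time

class DummyModel:
    # Stand-in for the real model: __init__ represents the (expensive)
    # initialization and forward() represents one inference step.
    def __init__(self):
        self.weights = [0.0] * 10_000  # placeholder "weight loading"

    def forward(self, x):
        return x * 2

def mean_seconds(fn, runs=100):
    # Average wall-clock time per call over `runs` repetitions.
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

model = DummyModel()
reuse = mean_seconds(lambda: model.forward(1.0))          # init once, reuse
reinit = mean_seconds(lambda: DummyModel().forward(1.0))  # re-init every call
overhead = reinit - reuse  # extra seconds per forward from re-initializing
```

Whether the per-forward re-initialization is acceptable depends on how `overhead` compares to the forward pass itself; for a real model, loading weights usually dominates, so measuring before adopting the workaround is worthwhile.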
> Btw, hi guys! I am a newbie to stable diffusion webui. I don't know whether AITemplate is available in stable diffusion webui. Any plan to support it?
> Hey @antoche, the reason that SD 2.1 is slower is that it has to upcast the attention during inference, as otherwise the model will generate `NaN`s, see: >...
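Why half precision breaks here can be seen in a naive softmax (the core of the attention computation): `exp` overflows the fp16 range to infinity, and inf/inf yields NaN, whereas upcasting to full precision stays finite. A minimal sketch; the fp16 overflow is crudely emulated with a range clamp (rounding and subnormals ignored):

```python
import math

FP16_MAX = 65504.0  # largest finite float16 value

def to_fp16(x):
    # Crude fp16 range emulation: values beyond the representable
    # range overflow to infinity, as real float16 arithmetic does.
    if x > FP16_MAX:
        return math.inf
    if x < -FP16_MAX:
        return -math.inf
    return x

def naive_softmax(logits, cast=lambda x: x):
    # Naive softmax without the max-subtraction stability trick,
    # mirroring what an unstabilized half-precision kernel computes.
    exps = [cast(math.exp(v)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

half = naive_softmax([12.0, 11.0], cast=to_fp16)  # exp(12) ≈ 162755 > FP16_MAX -> inf -> NaN
full = naive_softmax([12.0, 11.0])                # full precision stays finite
```

Upcasting the attention avoids the NaNs but adds the fp32 compute cost, which is why SD 2.1 runs slower than models that can keep attention in fp16.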