ValueError: PhotoMaker currently does not support multiple trigger words in a single prompt. Trigger word: img, Prompt: anime artwork illustrating The car on the road, near the forest . created...
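This error fires when the trigger word occurs more than once in the prompt. A minimal sketch of that kind of validation (hypothetical helper names, not PhotoMaker's actual implementation):

```python
def count_trigger_words(prompt: str, trigger_word: str = "img") -> int:
    """Count standalone occurrences of the trigger word in a prompt."""
    return sum(
        1
        for token in prompt.split()
        if token.strip(".,!?") == trigger_word
    )


def validate_prompt(prompt: str, trigger_word: str = "img") -> None:
    """Reject prompts that contain the trigger word more than once."""
    if count_trigger_words(prompt, trigger_word) > 1:
        raise ValueError(
            f"PhotoMaker currently does not support multiple trigger words "
            f"in a single prompt. Trigger word: {trigger_word}, Prompt: {prompt}"
        )


validate_prompt("a photo of a man img on the beach")  # single trigger word: OK
```

Under this reading, the fix on the user side is to keep exactly one `img` token in the prompt.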
Besides lowering the resolution, the number of steps, and similar parameter settings, are there any other acceleration methods? For example, support for LCM or AnimateDiff-Lightning?
The ControlNet models have been released; can you add support for them? I really need this, please.
canny: https://huggingface.co/XLabs-AI/flux-controlnet-canny-v3
depth: https://huggingface.co/XLabs-AI/flux-controlnet-depth-v3
ip-adapter: https://huggingface.co/XLabs-AI/flux-ip-adapter
union: https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union
Thank you for your outstanding contribution! Compared with other four-view models, this four-view model is already very good, but compared with some six-view models, the predictions for certain faces are noticeably worse. When will an updated model be released?
I see that the code in app.py imposes a limit: videos produced through app.py can be at most 10 s long. Can this be modified manually, or is there another way around it?
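A cap like this is typically enforced by clamping the requested duration to a constant in the serving script. A minimal sketch of that pattern (constant and function names are hypothetical, not the actual app.py code):

```python
MAX_SECONDS = 10  # hypothetical constant mirroring the 10 s limit seen in app.py


def clamp_duration(requested_seconds: float,
                   max_seconds: float = MAX_SECONDS) -> float:
    """Clamp the requested video length to the configured maximum."""
    return min(requested_seconds, max_seconds)


# Raising MAX_SECONDS (or the UI slider's upper bound, if the app uses one)
# would lift the cap, at the cost of more memory and compute per generation.
clamp_duration(25)  # clamped down to 10
```

If the limit lives in a UI widget rather than a constant, the equivalent change is to raise that widget's maximum value.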
I ask because I saw that AI-Toolkit has a flag in its config that can switch training to Schnell instead of Dev. If this is supported here, how should I set it? Below are my training parameter settings:
model_train_type = "flux-lora"
pretrained_model_name_or_path = "/lora-scripts/sd-models/flux1-schnell.safetensors"
ae = "/lora-scripts/sd-models/ae.safetensors"
clip_l = "/lora-scripts/sd-models/clip_l.safetensors"
t5xxl = "/lora-scripts/sd-models/t5xxl_fp16.safetensors"
timestep_sampling = "sigmoid"
sigmoid_scale = 1
model_prediction_type =...
### Start Date

_No response_

### Implementation PR

I installed vLLM following the version given in the cookbook (https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_5_vllm_zh.md), but got the error ValueError: Currently, MiniCPMV only supports versions 2.0, 2.5, 2.6, 4.0. Got version: (4, 5). Is it that the vLLM version simply hasn't been updated yet?

### Related Issues...
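The error message suggests a hard allow-list of model versions, so version (4, 5) is rejected by a simple membership test. A sketch of that kind of gate (hypothetical, not vLLM's actual code):

```python
# Versions taken verbatim from the error message in the report above.
SUPPORTED_VERSIONS = {(2, 0), (2, 5), (2, 6), (4, 0)}


def check_minicpmv_version(version: tuple) -> None:
    """Raise if the requested MiniCPM-V version is not on the allow-list."""
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(
            f"Currently, MiniCPMV only supports versions "
            f"{sorted(SUPPORTED_VERSIONS)}. Got version: {version}"
        )


check_minicpmv_version((4, 0))  # on the allow-list: passes
# check_minicpmv_version((4, 5)) would raise ValueError until 4.5 is added
```

If this reading is right, the fix is on the vLLM side (adding 4.5 to its supported versions), not in the user's deployment config.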