MagicSource

Results: 1289 comments by MagicSource

Is it currently single-person only? For the same image, how large is the gap compared with the original?

It doesn't work. ![image](https://github.com/InstantID/InstantID/assets/21303438/81592689-d12f-490b-a821-d0d73eeab14d)

Really bad quality. ![image](https://github.com/InstantID/InstantID/assets/21303438/85c766cf-fe51-46c7-b6b6-f8c5f6609e07)

@bananaguys Dude, your images are not uploading correctly..

Is MoE-LLaVA-Qwen available?

AWQ lets you save the model once it has been quantized, so users only need to download the int4 weights. By saying "no need to reserve a high-end GPU" I mean that normally we can't load...
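For reference, a minimal sketch of the quantize-then-save flow with the AutoAWQ library; the model and output paths are placeholders, not the actual repo's paths:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "path/to/fp16-model"  # placeholder: original fp16 checkpoint
quant_path = "path/to/awq-int4"    # placeholder: where the int4 weights get saved
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the fp16 model, quantize to int4, then persist only the quantized weights,
# so downstream users download just the int4 checkpoint.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```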

Am using the latest one. Also, why do we have to use deepspeed? Is it necessary for a single GPU? I ran into a lot of trouble running this deepspeed inference:

```
return torch.distributed.all_to_all_single(output=output,...
```

Please avoid using deepspeed at the moment; there is a recently reported bug related to deepspeed and nccl: https://github.com/NVIDIA/nccl/issues/1051 And unfortunately, it might be related to torch 2.1 as well. So, if...
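As a workaround on a single GPU, plain transformers inference avoids deepspeed and torch.distributed (and hence nccl) entirely. A minimal sketch, assuming a standard HF causal LM; the checkpoint path is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/checkpoint"  # placeholder: swap in the actual model repo or local path

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="cuda:0",  # single GPU, no torch.distributed / nccl involved
    trust_remote_code=True,
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```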

It has been merged into the latest transformers now. Using HF's MoE implementation could avoid many weird problems and make the code cleaner.
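Once a MoE architecture is merged into transformers, it loads through the standard auto classes and exposes the MoE hyperparameters on the config. A sketch using Mixtral as a stand-in MoE checkpoint; the model id is illustrative, not the model discussed in this thread:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Illustrative MoE checkpoint natively supported in transformers.
model_id = "mistralai/Mixtral-8x7B-v0.1"

config = AutoConfig.from_pretrained(model_id)
# Mixtral's MoE hyperparameters: total experts per layer, experts routed per token.
print(config.num_local_experts, config.num_experts_per_tok)

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
```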