Long (Tony) Lian


Thanks for the suggestions! I'm working on that.

I just added an SDXL integration. You can pull the repo and pass `--sdxl` when you call `generate.py`. Could you check whether it works on your end? @alleniver @lossfaller

> > I just added an SDXL integration. You can pull the repo and pass `--sdxl` when you call `generate.py`. Could you check whether it works on your end?...

Currently, SDXL is supported through the refiner (`--sdxl`). SDXL base model support is possible (I have implemented that in another codebase before) but not implemented in this public codebase, as...
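
(For anyone curious what a refiner pass amounts to, here is a minimal, hedged sketch using diffusers' `StableDiffusionXLImg2ImgPipeline`. It is not the code behind `--sdxl` in this repo; the model id, file names, prompt, and `strength` value are placeholders.)

```python
# Minimal sketch of an SDXL refiner pass over a first-stage output.
# NOT this repo's actual implementation; the model id, paths, prompt, and
# strength below are placeholders.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

base_image = Image.open("lmd_output.png").convert("RGB")  # first-stage (SD) result

# A low strength keeps the layout and object identities from the first stage
# and only refines details such as textures and faces.
refined = refiner(
    prompt="a realistic photo of a man walking a dog in a park",
    image=base_image,
    strength=0.3,
).images[0]
refined.save("lmd_output_refined.png")
```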

Well, this repo uses code from other repos, so those snippets adhere to their licenses (with references and license notices in comments). For the code that is not from other...

Yes. I tried Vicuna (https://github.com/lm-sys/FastChat) and it also works. However, proprietary models (e.g., gpt-3.5-turbo) still give better results.

I tried LLaMA-2-7b and it seems to work: I used the non-chat version to perform text completion.
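
(For reference, a minimal sketch of what I mean by text completion with the non-chat model, using the `transformers` library. The model id and the prompt template below are placeholders, not the exact prompt this repo sends to the LLM.)

```python
# Sketch: plain text completion with the non-chat LLaMA-2-7b via transformers.
# Model id and prompt template are placeholders, not this repo's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # base (non-chat) variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The layout prompt ends exactly where the model should continue,
# so completion (rather than chat) is all that is needed.
prompt = "Caption: a man walking a dog in a park\nObjects: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
completion = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```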

Some initial attempts (you can improve by trying more options and seeds):

![image](https://github.com/TonyLianLong/LLM-groundedDiffusion/assets/1451234/6731d11b-2cca-4df2-9408-c03dd373ea1e)
![image](https://github.com/TonyLianLong/LLM-groundedDiffusion/assets/1451234/991c92a6-8b96-4ccc-b933-9f7d7cdd88e7)

You may wonder why the man's face looks weird. This is [a known artifact of stable...

Good question! This is why the space allows specifying a prompt for overall generation. Without it, you use a default prompt and don't get object interaction (SD will try to...

You can visualize the first stage of generation (i.e., individual box generation) to see if the appearance of the objects stays consistent. If they stay consistent, then having higher frozen...
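
(If it helps, here is a tiny hypothetical helper, not part of the repo, for tiling the per-box images from the first stage so you can compare object appearance side by side; the file paths are placeholders for wherever your run saves them.)

```python
# Hypothetical helper (not part of the repo): tile the first-stage per-box
# images side by side to check whether object appearance stays consistent.
# The paths below are placeholders.
from PIL import Image

paths = ["box_0.png", "box_1.png", "box_2.png"]
images = [Image.open(p).convert("RGB") for p in paths]

canvas = Image.new(
    "RGB",
    (sum(im.width for im in images), max(im.height for im in images)),
    "white",
)
x = 0
for im in images:
    canvas.paste(im, (x, 0))
    x += im.width
canvas.save("first_stage_comparison.png")
```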