CreamyLong

Results: 37 comments by CreamyLong

> Here's the code that I'm running
>
> ```
> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", cache_dir=cache_dir)
> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", cache_dir=cache_dir)
> system_message = "You are a helpful, respectful and honest...
> ```

I recommend you have a look at Milvus.
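For context, Milvus is a vector database: you store embeddings and query for the nearest neighbors of a query vector. The core idea can be sketched without the Milvus dependency as a brute-force cosine-similarity search (the collection name, ids, and vectors below are made up for illustration; a real Milvus deployment would use `pymilvus` and an ANN index instead):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, vectors, k=2):
    """Return the ids of the k stored vectors most similar to `query`."""
    ranked = sorted(vectors.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [vec_id for vec_id, _ in ranked[:k]]

# Toy "collection": id -> embedding (values invented for this sketch).
collection = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], collection, k=2))  # ['doc_a', 'doc_b']
```

A vector database replaces the linear scan above with an approximate index (e.g. HNSW or IVF) so the search stays fast at millions of vectors.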

I have the same problem: I could not get clean pictures using DDIM in guided-diffusion. What is the reason that DDIM (or DPM-Solver) sampling does not work here?
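For reference, a single deterministic DDIM step (eta = 0) first predicts the clean sample x0 from the current noisy sample and the model's noise estimate, then re-noises that prediction to the previous timestep. A minimal numpy sketch of that standard update (this is the textbook DDIM formula, not guided-diffusion's actual code, and the alpha values are made-up placeholders rather than a real noise schedule):

```python
import numpy as np

def ddim_step(x_t, eps, alpha_t, alpha_prev):
    """One deterministic DDIM update (eta = 0).

    x_t        -- current noisy sample
    eps        -- model's noise prediction at the current timestep
    alpha_t    -- cumulative alpha-bar at the current timestep
    alpha_prev -- cumulative alpha-bar at the previous (less noisy) timestep
    """
    # Predict the clean sample x0 from x_t and the noise estimate.
    pred_x0 = (x_t - np.sqrt(1.0 - alpha_t) * eps) / np.sqrt(alpha_t)
    # Re-noise the prediction toward the previous timestep (no stochastic term).
    return np.sqrt(alpha_prev) * pred_x0 + np.sqrt(1.0 - alpha_prev) * eps

# Sanity check with made-up values: if eps is the exact noise used to build x_t,
# the step lands exactly on the closed-form noisy sample at alpha_prev.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)
eps = rng.standard_normal(4)
alpha_t, alpha_prev = 0.5, 0.8
x_t = np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * eps
x_prev = ddim_step(x_t, eps, alpha_t, alpha_prev)
expected = np.sqrt(alpha_prev) * x0 + np.sqrt(1.0 - alpha_prev) * eps
print(np.allclose(x_prev, expected))
```

If a DDIM port produces noisy images, a mismatch between the alpha-bar schedule the sampler uses and the one the model was trained with is a common culprit to check.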

> Has anybody trained a model for the layout2image task yet? I'm not quite sure what my bounding-box input is supposed to look like. And what a proper configuration...

> Also waiting for the release of the pretrained layout-to-image model trained from scratch on COCO and the dataset code. Thanks!! I found it trained on openimages256 wget -O...

This may help you: https://github.com/CreamyLong/stable-diffusion/blob/master/scripts/layout2img.py