Results: 11 comments by Arian

I'm getting the same error on macOS.

@TheLastBen I've tried 3000 steps and it worked, but I got the same results with 1800 steps at lr=3e-6. I'm using only 10 instance images. The quality has been better...

@TheLastBen every image now looks almost like the original ones ![5129991595](https://user-images.githubusercontent.com/6691197/209471831-df04828a-ca1c-476e-a657-a676da7fe8a3.png)

@TheLastBen 200 steps on the text encoder, 650 steps on the UNet, 850 steps total

@TheLastBen the face doesn't match the trained images, it's completely different. UNet: steps 400, lr 2e-5. Text encoder: steps 350, lr 1e-6.

Me too, I can't match the results! @TheLastBen how can I clone the October repo?

@TheLastBen do we have to do prompt engineering, or just give it simple prompts?

@TheLastBen I've trained the text encoder (`learning_rate=1e-6, max_train_steps=350, lr_scheduler="polynomial"`) and the UNet (`learning_rate=2e-5, max_train_steps=650, lr_scheduler="polynomial"`), using 10 images. I've tested on Kylie Jenner and this is the best result...
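For reference, a minimal sketch of the two-phase run described above, assuming the quoted argument names (`learning_rate`, `max_train_steps`, `lr_scheduler`) map onto a diffusers-style DreamBooth training script; the dict names here are just for illustration:

```python
# Hyperparameters from the comment above: two-phase DreamBooth training
# on 10 instance images. Phase 1 trains the text encoder, phase 2 the UNet.
text_encoder_args = {
    "learning_rate": 1e-6,        # lower LR for the text encoder
    "max_train_steps": 350,
    "lr_scheduler": "polynomial",
}
unet_args = {
    "learning_rate": 2e-5,        # higher LR for the UNet phase
    "max_train_steps": 650,
    "lr_scheduler": "polynomial",
}

# Combined step count across both phases.
total_steps = text_encoder_args["max_train_steps"] + unet_args["max_train_steps"]
print(total_steps)  # 1000
```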

> @ariandemnika don't add "person" in the prompt, you are reducing the weight of the trained subject

@TheLastBen this just made it worse. Do you mean the `instance_prompt` argument for UNet training...

@TheLastBen still not getting anything better.