dream-creator
Training a model without resulting in animal faces.
I'm using https://github.com/ProGamerGov/neural-dream/tree/dream-creator-support to augment and warp images I have, creating new and interesting images.
What I want to do is convey a mood in a photo. E.g. I have 1000 smiling faces (only one class); I want to train a model with just these faces and then use dream-creator to apply that model to an image, so the mood starts to "twist" into the photo.
Issue: I keep getting residual animals coming through in the twisted images I apply my dream-creator model to. How can I train a model on ONLY my images, without pulling in any pre-trained data?
@Bird-NZ That's a very tricky issue, I think, as you are exploring largely uncharted territory. Are you only training with one class? And are you visualizing specific channels?
I have a couple of ideas:

- When training models, you could try unfreezing some of the lower frozen layers, but that may ruin the model's ability to generate coherent visualizations.
- You could also try to simply block the channels that are responsible for the animal faces, but that could be tedious and potentially difficult to do.
- You could find some way for the training images to teach the model that animal faces are not what you want, like using multiple classes.
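The second idea above, blocking unwanted channels, can be sketched with a PyTorch forward hook that zeroes out the offending feature maps. The layer and channel indices below are hypothetical placeholders (the tiny stand-in model is not InceptionV1); finding the real culprit channels would require visualizing them first.

```python
import torch
import torch.nn as nn

# Hypothetical indices of channels blamed for animal features.
BLOCKED_CHANNELS = [3, 7]

def make_ablation_hook(channels):
    # Returning a tensor from a forward hook replaces the layer's output.
    def hook(module, inputs, output):
        output = output.clone()
        output[:, channels] = 0.0  # silence the unwanted feature maps
        return output
    return hook

# Tiny stand-in model; in practice this would be the InceptionV1 net
# used by dream-creator, hooked at the layer being visualized.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
handle = model[0].register_forward_hook(make_ablation_hook(BLOCKED_CHANNELS))

x = torch.randn(1, 3, 32, 32)
out = model(x)  # blocked channels are now all zeros
```

Remember to call `handle.remove()` once you're done, so the hook doesn't linger on the module.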
Thanks for your quick feedback on this. It's a super interesting challenge, but I think it may be possible to make some interesting art from it. Question: so for option 1, when training models, should I use the "-freeze_to" option to build the model frozen only up to a lower level like "conv1" (or would "none" be better to use)?
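For reference, the invocation in question might look like the following (a sketch only: the script name is as I recall from the dream-creator repo, the dataset path is a placeholder, and the flag values are taken from elsewhere in this thread, so double-check against the README):

```shell
# Unfreeze everything so no layer keeps its pre-trained weights fixed.
python train_googlenet.py -data_path my_smiles_dataset \
    -freeze_to none -batch_size 64 -num_epochs 300 -optimizer sgd
```

Swapping `-freeze_to none` for `-freeze_to conv1` would keep only the first conv block frozen.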
@ProGamerGov perhaps it would be better to build a model from scratch myself, albeit with limited classes? If so, could you point me to a framework or process to create this?
@Bird-NZ I'm not sure what framework you would use to make it easier. PyTorch Lightning might be more user-friendly, but I don't know how easy it would be to set up the InceptionV1 / Inception5h model with it.
You could also try experimenting with creating your own visualization code.
@Bird-NZ I could get human face components after editing the images in my dataset using code from https://github.com/vitoralbiero/img2pose. Basically, this code can detect, align, and crop faces, so the focus of the dataset is on the face components and you get rid of the extra information around the face. By the way, I used the CelebA dataset and trained from conv1 using SGD (-batch_size 64 -num_epochs 1000 -train_workers 2 -val_workers 2 -save_epoch 60 -optimizer sgd -freeze_to conv1).
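The cropping step described above can be sketched with Pillow: given a face bounding box from a detector (img2pose in the comment above; the box coordinates here are hypothetical), crop with a small margin so the dataset focuses on the face.

```python
from PIL import Image

def crop_face(img, box, margin=0.2):
    """Crop `box` = (left, top, right, bottom), padded by `margin`
    of the box size on each side and clamped to the image bounds."""
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    pad_w, pad_h = int(w * margin), int(h * margin)
    left = max(0, left - pad_w)
    top = max(0, top - pad_h)
    right = min(img.width, right + pad_w)
    bottom = min(img.height, bottom + pad_h)
    return img.crop((left, top, right, bottom))

# Placeholder image and box standing in for a detector's output.
img = Image.new("RGB", (256, 256))
face = crop_face(img, (80, 80, 176, 176))
```

In a real pipeline you would loop this over every image, feeding each detected box through `crop_face` before saving the result into the training folder.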