awesome-pretrained-stylegan
not an issue - question on project_images
Hi Justin,
Maybe you can help me - my goal is to take a picture of myself and toonify it.
I've condensed it into 3 steps:
- align the images
- project images
- blend networks
This line is step 2:
python project_images.py --num-steps 500 aligned generated
It goes through the aligned photos (after they've been processed from the raw images) and seems to have some training component. I'm not clear on why, or what it's actually doing - only that the generated folder shows a bunch of blurred images.
WHY do we need the "Adding the images to tfrecords ..." step???? I want to take any image and find the associated .npy file. Do they all need to be added to tfrecords, and what does this step actually do?
Step 3 just takes an array of .npy files (the vectors found in latent space for each photo) and does the final step of making the toon image: latent_dir.glob("*.npy")
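For illustration, a minimal sketch of that glob-and-load step - the directory name, file name, and the (18, 512) latent shape here are assumptions for demo purposes (the real shape depends on the network), and the demo file is faked so the loop has something to read:

```python
from pathlib import Path
import numpy as np

latent_dir = Path("generated")   # hypothetical output dir from step 2
latent_dir.mkdir(exist_ok=True)

# demo stand-in: fake one projected latent so the loop below has input
np.save(latent_dir / "demo.npy", np.zeros((18, 512), dtype=np.float32))

# step 3 then just iterates the saved latents and feeds each to the
# blended network to render the toon image
for npy_path in sorted(latent_dir.glob("*.npy")):
    w = np.load(npy_path)        # one projected latent per aligned photo
    print(npy_path.name, w.shape)
```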
https://gist.github.com/johndpope/55d4b8c33c3cac2c794373c4af7202ad
Is there a faster / better way to just get the .npy from an existing image?
It seems like the logic for step 2 could be collapsed down to just spitting out the npz/npy file: https://github.com/johndpope/stylegan2-ada/tree/digressions
python projector.py --outdir=out --target=targetimg.png \
--network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ffhq.pkl
This creates 3 files in the out directory: targetimg.png, the found latent image, and the associated latent vector .npy file.
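A sketch of reading that latent back afterwards - note the archive name and key ("dlatents.npz" / "dlatents") and the (1, 18, 512) shape are assumptions that vary between forks, so check your projector.py for what it actually writes; the file is faked here so the snippet is self-contained:

```python
from pathlib import Path
import numpy as np

out = Path("out")
out.mkdir(exist_ok=True)

# demo stand-in: fake the archive the projector would have written;
# the real filename/key may differ in your fork
np.savez(out / "dlatents.npz", dlatents=np.zeros((1, 18, 512), np.float32))

data = np.load(out / "dlatents.npz")
w = data["dlatents"]
print(w.shape)  # one image, 18 style layers, 512 dims per layer
```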
This projector code is the old one from 2019: https://github.com/justinpinkney/stylegan2/blob/master/projector.py
You should abandon stylegan2 and use the stylegan2-ada repo. I got it working with TensorFlow 2: https://github.com/johndpope/stylegan2-ada/blob/digressions/projector.py
UPDATE - I tested the updated code; it also does 1000 steps / ticks to find the corresponding .npy file. When the Toonify API does its processing, presumably it isn't doing as many steps?