Demo on remote server

VSehwag opened this issue 4 years ago · 5 comments

Is it possible to support the functionality where we can run the interactive demo on a remote server (similar to TensorBoard)? I have GPUs available only on a headless server, which might be the case for many others.

Thanks.

VSehwag avatar Apr 26 '20 00:04 VSehwag

I'm currently looking into the possibility of making a remote version of bimpy, but that requires some work. The interactive demo is meant to run on a local machine with a desktop environment. You can run it on any machine with a decent gaming GPU; in https://github.com/podgorskiy/ALAE/issues/26 it seems to run even on a GTX 970.

On the other hand, there are a number of scripts (in the make_figures folder) for making various figures that can be run on a remote server.

podgorskiy avatar May 02 '20 15:05 podgorskiy

No worries. I managed to distill interactive_demo.py into a Jupyter notebook without the dependency on bimpy. It's not interactive, but it easily serves the same purpose with some loops and other hacks.
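
Roughly, the distilled version boils down to the helper below (a minimal sketch; `model.encode` and `model.decode` are assumed method names standing in for the actual model calls I lifted and simplified from interactive_demo.py):

```python
import numpy as np
import torch
from PIL import Image

def reconstruct(model, path, size=1024):
    """Encode an image to a latent and decode it back.

    `model.encode` / `model.decode` are assumed method names; the real
    equivalents come from the repo's model setup in interactive_demo.py.
    """
    img = np.asarray(Image.open(path).convert("RGB").resize((size, size)),
                     dtype=np.float32)
    # HWC uint8 -> NCHW float in [-1, 1], the range the generator works in.
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) / 127.5 - 1.0

    with torch.no_grad():
        w = model.encode(x)      # image -> 512-element latent code
        x_rec = model.decode(w)  # latent -> reconstructed image

    # Back to a displayable HWC uint8 array.
    out = ((x_rec[0].permute(1, 2, 0) + 1.0) * 127.5).clamp(0, 255)
    return out.to(torch.uint8).cpu().numpy()
```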

Just one question on reconstructions: should we expect the reconstruction to be highly similar to the original image if the image comes from the FFHQ training set? I know that for real-world images, which we use in the demo, that's not the case. But even for FFHQ images, I found the reconstruction to be quite different from the original image (though of course high quality). In particular, for my project, I am trying to find a set of images where the reconstruction is very close (in identity) to the original image.

VSehwag avatar May 02 '20 16:05 VSehwag

Could you please give a link to the Jupyter notebook? Or you can post a link here: https://github.com/podgorskiy/ALAE/issues/13 . It could be useful for others.

Reconstructions are expected to be similar. Keep in mind, though, that a 1024×1024 image is compressed down to a 512-element vector. The model tries to make a reconstruction that is semantically as close as possible, but it knows nothing about which features of a human face are important for preserving identity. Even a very slight change in the face may result in an unrecognizable person. So yes, people indeed look like different persons, but the overall picture is very similar.
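
To put that compression in perspective, a quick back-of-the-envelope calculation (assuming 3 color channels):

```python
pixels = 1024 * 1024 * 3  # raw values in a 1024x1024 RGB image
latent = 512              # elements in the latent code
print(pixels / latent)    # -> 6144.0, i.e. ~6000x fewer numbers
```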

Enforcing higher priority for important facial features is definitely possible, but it is out of the scope of this work.
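
For example, one could refine the latent per image with an extra identity term. A hypothetical sketch, not part of this repo: `face_embed` stands in for any pretrained face-recognition network, and `model.decode` for the generator call:

```python
import torch
import torch.nn.functional as F

def refine_latent(model, face_embed, x, w_init, steps=200, lam=0.1):
    """Optimize a latent so the reconstruction stays close in identity.

    Hypothetical sketch: `model.decode` maps a latent to an image and
    `face_embed` maps an image to an identity embedding. Neither is
    part of this repo.
    """
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=0.01)
    target_id = face_embed(x).detach()

    for _ in range(steps):
        x_rec = model.decode(w)
        # Pixel term keeps the overall picture; identity term keeps the person.
        loss = F.mse_loss(x_rec, x) + lam * (
            1 - F.cosine_similarity(face_embed(x_rec), target_id).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```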

podgorskiy avatar May 02 '20 16:05 podgorskiy

The notebook without bimpy dependency is available at https://github.com/VSehwag/ALAE/blob/master/replicate_results.ipynb.

So far, I am quite intrigued by the visualizations obtained across a diverse set of images. However, as I mentioned earlier in issue #16, it's still a bit unclear to me how the principal direction vectors for attributes are obtained. In particular, given the unsupervised nature of the training data, how are we able to find directions for attributes like smile, sunglasses, etc.? Would it be possible to have a short discussion on it offline?
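
My current guess is the usual recipe from the StyleGAN literature (purely an assumption on my side, not necessarily what this repo does): sample latents, label the decoded images with an off-the-shelf attribute classifier, then fit a linear separator in latent space and take its normal as the direction:

```python
import numpy as np
from sklearn.svm import LinearSVC

def attribute_direction(latents, labels):
    """Fit a linear boundary in latent space; its normal is the direction.

    `latents`: (N, 512) array of latent codes; `labels`: 0/1 per code,
    e.g. smiling / not smiling from an off-the-shelf classifier.
    (Assumed recipe, not confirmed to be what the ALAE authors did.)
    """
    clf = LinearSVC(C=1.0).fit(latents, labels)
    direction = clf.coef_[0]
    return direction / np.linalg.norm(direction)  # unit vector

# Editing then amounts to: w_edited = w + alpha * direction
```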

VSehwag avatar May 03 '20 02:05 VSehwag

@VSehwag Thank you for posting this. Super useful.

daxiongshu avatar Jan 09 '21 18:01 daxiongshu