
PrettyTensor compatibility

Open · leotam opened this issue on Mar 24 '17 · 6 comments

Really nice repo, but it appears PrettyTensor support is somewhat broken due to the fast-changing TF API.

Some of the issues are similar to https://github.com/google/prettytensor/issues/46

Also, the data is a bit harder to locate now due to dead links. The attribute labels can at least be found here: https://s3.amazonaws.com/cadl/celeb-align/list_attr_celeba.txt
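For example, to grab that file with Python 3's standard library (the local filename is arbitrary):

import urllib.request

# Mirror of the CelebA attribute labels linked above; the output
# filename is just an example.
url = "https://s3.amazonaws.com/cadl/celeb-align/list_attr_celeba.txt"
urllib.request.urlretrieve(url, "list_attr_celeba.txt")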

leotam avatar Mar 24 '17 23:03 leotam

Great, thanks! I've since rewritten a bunch of this to do a number of things, like getting rid of PrettyTensor and implementing DeepMind's up-convolution tweaks, but I need to clean up the code and reapply it to the faces dataset. I will try to update the repo as soon as I get a little bit of time!

timsainb avatar Mar 24 '17 23:03 timsainb

Is there any chance of the updated code being uploaded soon? Also, I would add links to the original CelebA site, since the download URLs change quickly.

alexrakowski avatar May 16 '17 11:05 alexrakowski

Hi all, sorry I've been a bit swamped with research the past few months. I don't quite have time to rebuild and debug, but here are some quick updates you can implement. The examples here are from a different network with a slightly different architecture, so a few changes would be needed.

To get rid of PrettyTensor in layer creation, replace the pt.wrap layers with tf.contrib.layers layers (imported as layers below). For example:

def encoder(X):
    # Assumes TF 1.x with:
    #   import tensorflow as tf
    #   from tensorflow.contrib import layers
    # batch_size, dim1, dim2, dim3, and hidden_size are defined elsewhere.
    net = tf.reshape(X, [batch_size, dim1, dim2, dim3])
    net = layers.conv2d(net, 32, 5, stride=2)
    net = layers.conv2d(net, 64, 5, stride=2)
    net = layers.conv2d(net, 128, 5, stride=2)
    net = layers.conv2d(net, 256, 5, stride=2)
    net = layers.flatten(net)
    net = layers.fully_connected(net, 4000)
    net = layers.fully_connected(net, 4000)
    net = layers.fully_connected(net, hidden_size, activation_fn=None)
    return net
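Since tf.contrib.layers defaults activation_fn to tf.nn.relu (and conv2d pads with SAME), only the final fully_connected layer needs its activation overridden here.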

To use the resize deconvolutions discussed here, replace the deconv layers with something like this:

def generator(Z):
    # Same imports and globals as in encoder; upsamples 4x4 -> 32x32 with
    # nearest-neighbor resizes followed by stride-1 convolutions.
    net = layers.fully_connected(Z, 4000)
    net = layers.fully_connected(net, 4000)
    net = tf.reshape(layers.fully_connected(net, 4 * 4 * 256), [batch_size, 4, 4, 256])
    net = tf.image.resize_nearest_neighbor(net, (8, 8))
    net = layers.conv2d(net, 256, 5, stride=1)
    net = tf.image.resize_nearest_neighbor(net, (16, 16))
    net = layers.conv2d(net, 128, 5, stride=1)
    net = tf.image.resize_nearest_neighbor(net, (32, 32))
    net = layers.conv2d(net, 32, 5, stride=1)
    net = layers.conv2d(net, dim3, 1, stride=1, activation_fn=tf.sigmoid)
    net = layers.flatten(net)
    return net
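As a quick sanity check, something like this should wire the two together (untested; it just chains them as a plain autoencoder to verify shapes, and assumes dim1 = dim2 = 32 so the generator's output resolution matches, plus the same globals as above):

import tensorflow as tf
from tensorflow.contrib import layers

# Hypothetical shape check; batch_size, dim1, dim2, dim3, and hidden_size
# are assumed to be defined as in the snippets above.
X = tf.placeholder(tf.float32, [batch_size, dim1, dim2, dim3])
Z = encoder(X)        # -> [batch_size, hidden_size]
X_rec = generator(Z)  # -> [batch_size, 32 * 32 * dim3] after the flatten
print(Z.get_shape(), X_rec.get_shape())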

In inference, instead of:

with pt.defaults_scope(activation_fn=tf.nn.elu,
                       batch_normalize=True,
                       learned_moments_update_rate=0.0003,
                       variance_epsilon=0.001,
                       scale_after_normalization=True):

You can use arg_scope (it's a context manager, available as tf.contrib.framework.arg_scope):

with arg_scope([layers.fully_connected, layers.conv2d], activation_fn=tf.nn.relu):
    ...
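To also carry over the batch-normalization defaults from defaults_scope, something like this should be close (the parameter mapping is my assumption, not an exact translation; in particular I'm reading decay as 1 - learned_moments_update_rate):

import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.contrib.framework import arg_scope

# Rough equivalent of the pt.defaults_scope above; the names differ:
#   batch_normalize=True               -> normalizer_fn=layers.batch_norm
#   variance_epsilon=0.001             -> epsilon=0.001
#   scale_after_normalization=True     -> scale=True
#   learned_moments_update_rate=0.0003 is assumed to map to decay=0.9997
with arg_scope([layers.fully_connected, layers.conv2d],
               activation_fn=tf.nn.elu,
               normalizer_fn=layers.batch_norm,
               normalizer_params={'decay': 0.9997,
                                  'epsilon': 0.001,
                                  'scale': True}):
    net = encoder(X)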

If anyone implements this stuff, we can pull in the new version! Sorry again for not updating sooner!

timsainb avatar May 18 '17 19:05 timsainb

The issues I was having were related to the zeros_initializer constructor having been updated. I solved it by simply replacing it with None, since it was only used for biases. I assume it's correct, since I am able to train the network :)
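For reference, the TF 1.0 change was that tf.zeros_initializer went from being usable bare to being a class that has to be instantiated, so the one-line fix is usually (variable name and shape here are just placeholders):

# Pre-1.0 code like this:
#   b = tf.get_variable('bias', [64], initializer=tf.zeros_initializer)
# becomes, from TF 1.0 on:
b = tf.get_variable('bias', [64], initializer=tf.zeros_initializer())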

Thanks for your response!

alexrakowski avatar May 18 '17 21:05 alexrakowski


Has anyone implemented this arg_scope and is willing to share?

GloryyrolG avatar Jun 30 '21 02:06 GloryyrolG


Sorry, have you solved this problem?

shine0318 avatar Nov 29 '22 08:11 shine0318