Results 41 comments of kk

Your response also helps me.

I see that the code uses `rescale = tf.get_variable("rescale", [], initializer=tf.constant_initializer(1.))`, `scale_shift = tf.get_variable("scale_shift", [], initializer=tf.constant_initializer(0.))`, and `logsd = tf.tanh(logsd) * rescale + scale_shift` for the mean and logstd. Is that necessary?
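A minimal sketch (an assumption about intent, not the repo's exact code) of what the quoted lines do: `tanh` bounds the raw log-std to (-1, 1), and the two learned scalars `rescale` (initialized to 1.0) and `scale_shift` (initialized to 0.0) stretch and recenter that range. Keeping `logsd` bounded early in training is the usual motivation for this trick.

```python
import numpy as np

def squash_logsd(logsd_raw, rescale=1.0, scale_shift=0.0):
    # tanh bounds the raw value to (-1, 1); the learned scalars
    # rescale and scale_shift then stretch and recenter that range.
    return np.tanh(logsd_raw) * rescale + scale_shift

raw = np.array([-50.0, 0.0, 50.0])
out = squash_logsd(raw)
print(out)  # even extreme raw values stay inside [-1, 1] at initialization
```

At initialization (`rescale=1.0`, `scale_shift=0.0`) the std `exp(logsd)` is confined to roughly (1/e, e), so a badly scaled raw output cannot blow it up.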

Yes, I ran into this problem too. Thank you.

> @Abdelpakey ok, of course you can. This is my training acc and loss curve:
> ![densenet_acc](https://user-images.githubusercontent.com/32693588/39976985-7270757a-5769-11e8-8f3e-2c27bfa97712.png)
> ![densenet_loss](https://user-images.githubusercontent.com/32693588/39976885-cf770d5c-5768-11e8-858e-4ed488a102f6.png)

How many densenet blocks do you use? How about the...

> @yuffon It's all the same as the original code; the only part I changed is `x = conv_layer(x, filter=in_channel*0.5, kernel=[1,1], layer_name=scope+'_conv1')`

Why can't I? I use tensorflow 1.10, cuda 9.0, ...
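One plausible cause worth checking (a guess, not confirmed by the truncated thread): if `in_channel` is a Python int, `in_channel * 0.5` is a float, and a filter/channel count must be an integer, so many conv wrappers reject it. Flooring restores an int:

```python
# Hypothetical illustration: multiplying an int channel count by 0.5
# yields a float, which cannot be used as a filter count as-is.
in_channel = 48               # hypothetical channel count
bad = in_channel * 0.5        # float; a conv layer would reject this
good = int(in_channel * 0.5)  # or in_channel // 2 to stay integral
print(type(bad).__name__, good)  # prints "float 24"
```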
