OK, thanks.
Your response also helps me.
I see that the code uses

```python
rescale = tf.get_variable("rescale", [], initializer=tf.constant_initializer(1.))
scale_shift = tf.get_variable("scale_shift", [], initializer=tf.constant_initializer(0.))
logsd = tf.tanh(logsd) * rescale + scale_shift
```

for the mean and log-std. Is that necessary?
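For what it's worth, a minimal NumPy sketch (not the original TF code; `bounded_logsd` is a hypothetical helper) of what that `tanh` rescaling does: it squashes an unbounded raw log-std into a bounded interval, so `exp(logsd)` cannot explode or collapse to zero early in training, while `rescale` and `scale_shift` remain learnable and can widen or move that interval later.

```python
import numpy as np

def bounded_logsd(raw, rescale=1.0, scale_shift=0.0):
    # tanh maps any real input into (-1, 1); the learnable rescale/shift
    # then set the final range: (-rescale + shift, rescale + shift).
    return np.tanh(raw) * rescale + scale_shift

# Even extreme raw predictions stay bounded after the squash.
raw = np.array([-100.0, 0.0, 100.0])
logsd = bounded_logsd(raw)
std = np.exp(logsd)  # standard deviation stays within [e^-1, e^1] at init
```

So the lines are not strictly required for correctness, but they act as a stabilizer for the predicted standard deviation.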
Yes, I ran into this problem too. Thank you.
> @Abdelpakey ok, of course you can. This is my training acc and loss curve: [training accuracy and loss curves omitted]

How many DenseNet blocks do you use? How about the...
> @yuffon It's all the same as the original code; the only part I changed is "x = conv_layer(x, filter=in_channel*0.5, kernel=[1,1], layer_name=scope+'_conv1')"

Why can't I? I use TensorFlow 1.10, CUDA 9.0,...