finetune_alexnet_with_tensorflow

How to improve accuracy

Open · Julius-ZCJ opened this issue 5 years ago · 1 comment

    2019-05-28 10:45:45.833285 Validation Accuracy = 0.2188
    2019-05-28 10:45:45.833390 Saving checkpoint of model...
    2019-05-28 10:45:47.135384 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:45:47.464044 Start validation
    2019-05-28 10:45:50.295000 Validation Accuracy = 0.2188
    2019-05-28 10:45:50.295116 Saving checkpoint of model...
    2019-05-28 10:45:51.710815 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:45:52.035434 Start validation
    2019-05-28 10:45:54.884074 Validation Accuracy = 0.2188
    2019-05-28 10:45:54.884180 Saving checkpoint of model...
    2019-05-28 10:45:56.231970 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:45:56.560833 Start validation
    2019-05-28 10:45:59.407273 Validation Accuracy = 0.2188
    2019-05-28 10:45:59.407380 Saving checkpoint of model...
    2019-05-28 10:46:01.239952 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:46:01.566239 Start validation
    2019-05-28 10:46:04.391418 Validation Accuracy = 0.2188
    2019-05-28 10:46:04.391538 Saving checkpoint of model...
    2019-05-28 10:46:05.695854 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:46:06.340697 Start validation
    2019-05-28 10:46:09.281800 Validation Accuracy = 0.2188
    2019-05-28 10:46:09.281909 Saving checkpoint of model...
    2019-05-28 10:46:10.555316 Model checkpoint saved at checkpoints/model_epoch1.ckpt
    2019-05-28 10:46:10.904200 Start validation
    2019-05-28 10:46:13.926193 Validation Accuracy = 0.2188
    2019-05-28 10:46:13.926291 Saving checkpoint of model...

As you can see, the validation accuracy stays at 0.2188 and never changes. What can I do to fix this?
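One quick diagnostic (not from the original script, and the batch size of 32 below is an assumption): an accuracy frozen at exactly 0.2188 ≈ 7/32 often means the network emits nearly constant logits, so argmax collapses onto a single class. Counting how the predicted classes are distributed over a batch makes that visible:

```python
import numpy as np

def predicted_class_counts(logits):
    """Count how often each class is predicted over a batch of logits."""
    preds = np.argmax(logits, axis=1)
    return np.bincount(preds, minlength=logits.shape[1])

# Simulated "stuck" logits: one column dominates every row, so every
# sample in the batch is predicted as the same class.
stuck = np.random.randn(32, 4) * 0.001
stuck[:, 2] += 1.0
print(predicted_class_counts(stuck))  # all 32 predictions land on class 2
```

If the real validation logits show the same collapse, the usual suspects are the learning rate, the weight initialization, or missing pretrained weights rather than the data pipeline.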

Julius-ZCJ · May 28 '19 03:05

I discarded the load_initial_weights function, because I think the weights and biases in the net are already initialized, as in this code:

def conv(self, x, filter_height, filter_width, num_filters, stride_y, stride_x,
         name, padding='SAME', groups=1):

    # Get number of input channels
    input_channels = int(x.get_shape()[-1])

    # Create lambda function for the convolution
    convolve = lambda i, k: tf.nn.conv2d(i, k,
                                         strides=[1, stride_y, stride_x, 1],
                                         padding=padding)

    with tf.variable_scope(name) as scope:
        # Create tf variables for the weights and biases of the conv layer
        w = tf.random_normal_initializer(mean=0.0, stddev=0.001, seed=None,
                                         dtype=tf.float32)
        # b = tf.constant_initializer(value)
        weights = tf.get_variable('weights',
                                  shape=[filter_height,
                                         filter_width,
                                         input_channels // groups,  # shape entries must be ints
                                         num_filters],
                                  initializer=w,
                                  trainable=True)
        biases = tf.get_variable('biases', shape=[num_filters],
                                 initializer=tf.ones_initializer(),
                                 trainable=True)

    if groups == 1:
        conv = convolve(x, weights)
    else:
        # Split input and weights and convolve them separately
        input_groups = tf.split(axis=3, num_or_size_splits=groups, value=x)
        weight_groups = tf.split(axis=3, num_or_size_splits=groups,
                                 value=weights)
        output_groups = [convolve(i, k)
                         for i, k in zip(input_groups, weight_groups)]

        # Concatenate the convolved outputs together again
        conv = tf.concat(axis=3, values=output_groups)

    # Add biases
    bias = tf.reshape(tf.nn.bias_add(conv, biases), tf.shape(conv))

    # Apply relu function
    relu = tf.nn.relu(bias, name=scope.name)

    return relu
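One thing to double-check in that init: a fixed stddev of 0.001 is very small for large conv filters, and biases of 1.0 push every pre-activation up uniformly, which can leave training stuck. A common alternative is He initialization, which scales the stddev to the filter's fan-in. A numpy-only sketch (the function name and shapes here are illustrative, not from the repo):

```python
import numpy as np

def he_init(filter_height, filter_width, in_channels, num_filters, seed=0):
    """He initialization for a conv filter of shape [fh, fw, in_ch, out_ch]:
    stddev = sqrt(2 / fan_in), which roughly preserves activation variance
    through ReLU layers (unlike a fixed stddev such as 0.001)."""
    rng = np.random.default_rng(seed)
    fan_in = filter_height * filter_width * in_channels
    stddev = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, stddev,
                      size=(filter_height, filter_width, in_channels, num_filters))

# First AlexNet conv layer: 11x11 filters, 3 input channels, 96 filters.
w = he_init(11, 11, 3, 96)
print(w.shape, float(w.std()))
```

In TF 1.x this effect can be had directly with `tf.variance_scaling_initializer()` (or `tf.truncated_normal_initializer` with a fan-in-scaled stddev), and biases are usually started at zero.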

If I delete the load_initial_weights function, does it have any influence on the net?
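It does: without load_initial_weights the net starts from the random init above instead of pretrained AlexNet weights, i.e. it trains from scratch, which is a plausible cause of the flat accuracy. In the repo that function loads bvlc_alexnet.npy, a dict mapping layer names to [weights, biases] pairs, and assigns each array to the matching variable while skipping the layers being retrained. A minimal sketch of the selection step, using a fabricated weights dict (the layer names and shapes are illustrative, not the real file):

```python
import numpy as np

def select_pretrained(weights_dict, skip_layers):
    """Keep only the layers whose pretrained weights should be restored;
    layers in skip_layers stay at their random initialization."""
    return {name: params for name, params in weights_dict.items()
            if name not in skip_layers}

# Fabricated stand-in for the contents of bvlc_alexnet.npy.
fake_npy = {
    'conv1': [np.zeros((11, 11, 3, 96)), np.zeros(96)],
    'fc8':   [np.zeros((4096, 1000)), np.zeros(1000)],
}
restored = select_pretrained(fake_npy, skip_layers=['fc8'])
print(sorted(restored))  # ['conv1']
```

In the actual script each restored array is then pushed into the graph, e.g. `session.run(var.assign(data))` inside the matching variable scope.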

Julius-ZCJ · May 28 '19 03:05