
Concept of y_pred[:, 2:, :] and downsample_factor?

Open PythonImageDeveloper opened this issue 5 years ago • 0 comments

Hi. Q1 - I don't understand why we don't use all of y_pred[:, :, :]. Why use y_pred[:, 2:, :]? Why are the first two time steps (indices 0 and 1) dropped?

from keras import backend as K

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    # the 2 is critical here since the first couple outputs of the RNN
    # tend to be garbage:
    y_pred = y_pred[:, 2:, :]  # drop the first two time steps along axis 1
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
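
To make the slicing concrete, here is a small NumPy sketch (the shapes are purely illustrative, not taken from the tutorial): the index 2: is applied to the time-step axis (axis 1), not to the batch or class axes.

import numpy as np

# Toy y_pred with shape (batch, time_steps, num_classes) = (1, 8, 5)
y_pred = np.arange(1 * 8 * 5, dtype='float32').reshape(1, 8, 5)
trimmed = y_pred[:, 2:, :]
print(y_pred.shape, trimmed.shape)  # (1, 8, 5) (1, 6, 5)
# Only the first two *time steps* are removed; the batch and class
# dimensions are untouched. The RNN's earliest outputs are dropped
# because they are produced before enough image context has been seen.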

Q2 - What does the downsample_factor parameter mean? If we change the input size, do we need to change its value as well? What is the principle behind it? I also don't understand why this line multiplies np.ones((self.batch_size, 1)) by (self.img_w // self.downsample_factor - 2). What is the advantage of doing this? If we use only np.ones((self.batch_size, 1)), does that cause a problem? And why subtract 2? Why 2? (See the sketch below for how I understand the computation.)
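
For context, a minimal sketch of how the input length passed to CTC is typically derived in this kind of Keras OCR setup. The names img_w, downsample_factor, and batch_size follow the question; the concrete values and the factor-of-4 pooling are assumptions for illustration only.

import numpy as np

# Assumed values for illustration only
img_w = 128              # input image width in pixels
downsample_factor = 4    # product of pooling strides along the width (e.g. two 2x2 max-pools)
batch_size = 32

# The CNN reduces the width by downsample_factor, so the RNN produces
# img_w // downsample_factor time steps. Two more are subtracted because
# ctc_lambda_func drops the first two RNN outputs (y_pred[:, 2:, :]), and
# input_length must match the number of time steps actually fed to CTC.
input_length = np.ones((batch_size, 1)) * (img_w // downsample_factor - 2)
print(input_length.shape, input_length[0])  # (32, 1) [30.]

Using only np.ones((self.batch_size, 1)) would tell ctc_batch_cost that each sample has a single time step, which would not match the real sequence length and would break the loss.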

PythonImageDeveloper · May 06 '19 08:05