Question about the predicted value p at train_task_classifyapp.py line 417
p = model.predict_gen(generator=gen_test)[0]
On line 417, p is taken to be the first element of the return value of model.predict_gen().
As I understand it, model.predict_gen() should return the full list of prediction results (call it P_1), and the code indeed does so:
def predict_gen(self, generator: EmbeddingSequence) -> np.array:
...
return [i + 1 for i in indices]
Shouldn't we then use P_1 together with y_test to calculate the accuracy?
Why does this procedure use p instead of P_1?
accuracy = p == y_test
return accuracy
classifyapp_accuracy = evaluate(NCC_classifyapp(), embeddings, folder_data, train_samples, folder_results, dense_layer_size, print_summary, num_epochs, batch_size)
print('\nTest accuracy:', sum(classifyapp_accuracy)*100/len(classifyapp_accuracy), '%')
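To illustrate why the [0] looks suspicious: indexing the prediction list before comparing to y_test reduces it to a single scalar, and NumPy then broadcasts that scalar against the whole label vector instead of comparing sample-by-sample. A minimal sketch, using made-up predictions and labels (the values are assumptions for illustration, not actual model output):

```python
import numpy as np

# Hypothetical predictions for 5 test samples; classifyapp labels are
# integers in 1..104, matching the `i + 1` shift in predict_gen.
predictions = np.array([3, 7, 7, 42, 104])  # stand-in for model.predict_gen(...)
y_test = np.array([3, 7, 9, 42, 104])       # stand-in for the true labels

# With `[0]`, as on line 417, only the first prediction survives, and the
# scalar is broadcast against all of y_test -- not a per-sample comparison:
p_first = predictions[0]
broadcast_match = (p_first == y_test)  # array of bools, mostly spurious

# With the full list P_1, the comparison is element-wise per sample,
# which is what the final accuracy formula expects:
correct = (predictions == y_test)
accuracy = correct.sum() * 100 / len(correct)
print(accuracy)  # 80.0 for this toy data: 4 of 5 samples match
```

Under this reading, the `sum(...)*100/len(...)` line only gives a meaningful test accuracy when it is fed the element-wise comparison of the full prediction list, which is why the `[0]` looks like a bug.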
Thanks for reporting this! This may be an important bug that was introduced in refactoring.
Hello! @tbennun @wsl071134
I have exactly the same question. p = model.predict_gen(generator=gen_test)
seems to contain the prediction results across all 104 labels, while p = model.predict_gen(generator=gen_test)[0]
only contains the prediction for the first label. Then why measure only the first one? Thank you!