deep-learning-with-python-notebooks
Fix department label inconsistency (categorical vs sparse)
Title: Fix department label inconsistency in custom model example
Description: In one of the custom model examples, the department classification head is defined as:
self.department_classifier = layers.Dense(num_departments, activation="softmax")
with the model compiled using:
loss=["mean_squared_error", "categorical_crossentropy"]
This setup assumes one-hot encoded labels. However, the notebook currently generates labels using:
department_data = np.random.randint(0, num_departments, size=(num_samples, 1))
which produces integer class indices, which is inconsistent with categorical_crossentropy (that loss expects one-hot targets).
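For reference, a minimal sketch of the shape difference (the small sizes and the tensorflow.keras import path are illustrative assumptions, not taken from the notebook):

import numpy as np
from tensorflow import keras

num_samples, num_departments = 4, 3  # illustrative values, not the notebook's

# Integer class indices: shape (num_samples, 1)
int_labels = np.random.randint(0, num_departments, size=(num_samples, 1))
print(int_labels.shape)  # (4, 1)

# One-hot vectors: shape (num_samples, num_departments), matching the softmax output width
onehot_labels = keras.utils.to_categorical(int_labels[:, 0], num_classes=num_departments)
print(onehot_labels.shape)  # (4, 3)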
Fix implemented in this PR:
- Replaced integer label generation with proper one-hot encoding using keras.utils.to_categorical.
- Added a comment showing that if integer labels are preferred, the correct loss is sparse_categorical_crossentropy.
Why this matters:
- Keeps the model output, labels, and loss function consistent.
- Prevents shape errors or misleading results.
- Improves clarity for readers following the example.
Updated code:
# Option 1: one-hot encoded labels (for categorical_crossentropy)
department_data = keras.utils.to_categorical(
    np.random.randint(0, num_departments, size=(num_samples,)),
    num_classes=num_departments
)
# Option 2 (alternative): if you prefer integer labels,
# then switch the model compile loss to "sparse_categorical_crossentropy"
# department_data = np.random.randint(0, num_departments, size=(num_samples, 1))
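For completeness, a self-contained sketch showing both options end to end. The tiny functional model below is only a stand-in for the notebook's subclassed custom model, and names such as num_features, the priority head, and the rmsprop optimizer are illustrative assumptions:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Small stand-in for the notebook's two-head model (shapes are illustrative)
num_samples, num_departments, num_features = 16, 4, 8
inputs = keras.Input(shape=(num_features,))
x = layers.Dense(32, activation="relu")(inputs)
priority = layers.Dense(1, activation="sigmoid", name="priority")(x)
department = layers.Dense(num_departments, activation="softmax", name="department")(x)
model = keras.Model(inputs, [priority, department])

features = np.random.random((num_samples, num_features))
priority_data = np.random.random((num_samples, 1))

# Option 1: one-hot labels paired with categorical_crossentropy
department_data = keras.utils.to_categorical(
    np.random.randint(0, num_departments, size=(num_samples,)),
    num_classes=num_departments
)
model.compile(optimizer="rmsprop",
              loss=["mean_squared_error", "categorical_crossentropy"])
model.fit(features, [priority_data, department_data], epochs=1, verbose=0)

# Option 2: integer labels paired with sparse_categorical_crossentropy
# department_data = np.random.randint(0, num_departments, size=(num_samples, 1))
# model.compile(optimizer="rmsprop",
#               loss=["mean_squared_error", "sparse_categorical_crossentropy"])

Either pairing trains without shape errors; the point is simply that the label encoding and the loss must change together.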