
Renaming Custom Layer breaks TFMA Evaluator

Open abbyDC opened this issue 2 years ago • 6 comments

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow Model Analysis): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): GCP AI Vertex Workbench Debian 10
  • TensorFlow Model Analysis installed from (source or binary): PyPI binary
  • TensorFlow Model Analysis version (use command below): 0.26.0
  • Python version: 3.7
  • Jupyter Notebook version: n/a
  • Exact command to reproduce:

You can obtain the TensorFlow Model Analysis version with

python -c "import tensorflow_model_analysis as tfma; print(tfma.version.VERSION)"

Describe the problem

I have a custom layer named MultiHeadAttention. When I run the TFX pipeline, it shows a warning that the name conflicts with the default MultiHeadAttention layer and that I should rename the layer to something else. When I rename it to CustomMultiHeadAttention, the pipeline suddenly breaks, specifically in the Evaluator component. When I change nothing else in the code and revert the name back to "MultiHeadAttention", the Evaluator component runs okay, but I then have problems exporting, saving, and loading the model. What is the cause of this, or is it a bug in tfma/tfx?
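
For reference, a minimal sketch (not my exact code; the import path and model path below are placeholders) of how a renamed custom layer can be supplied at load time via custom_objects, so Keras does not have to resolve the name against its built-in layers:

import tensorflow as tf

# Hypothetical import path; in the real project the layer lives in a different module.
from aped.mlops.pipeline.layers import CustomMultiHeadAttention

# Passing custom_objects tells Keras how to deserialize the renamed layer instead of
# looking the name up, which is where the clash with the built-in
# tf.keras.layers.MultiHeadAttention comes from.
loaded_model = tf.keras.models.load_model(
    "path/to/saved_model",  # placeholder path
    custom_objects={"CustomMultiHeadAttention": CustomMultiHeadAttention},
)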

Source code / logs

Error when changing the custom layer name from MultiHeadAttention to CustomMultiHeadAttention (screenshot attached: 2022-04-07 10-48-31):

eval_config.py

import tensorflow_model_analysis as tfma

def set_eval_config() -> tfma.EvalConfig:

    eval_config = tfma.EvalConfig(
        model_specs=[
            tfma.ModelSpec(
                name="accent_model",
                signature_name="serving_evaluator",
                label_key="accent",
                prediction_key="accent_prediction",
            ),
            tfma.ModelSpec(
                name="phones_model",
                signature_name="serving_evaluator",
                label_key="target_phones",
                prediction_key="phone_predictions",
            ),
        ],
        metrics_specs=[
            tfma.MetricsSpec(
                output_names=["accent_prediction"],
                model_names=["accent_model"],
                metrics=[
                    tfma.MetricConfig(
                        class_name="AccentAccuracy",
                        module="aped.mlops.pipeline.metrics",
                    ),
                ],
            ),
            tfma.MetricsSpec(
                output_names=["phone_predictions"],
                model_names=["phones_model"],
                metrics=[
                    tfma.MetricConfig(
                        class_name="PhoneASRAccuracy",
                        module="aped.mlops.pipeline.metrics",
                        threshold=tfma.MetricThreshold(
                            value_threshold=tfma.GenericValueThreshold(lower_bound={"value": 0.01}),
                            change_threshold=tfma.GenericChangeThreshold(
                                direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                absolute={"value": -1e-10},
                            ),
                        ),
                    ),
                    tfma.MetricConfig(
                        class_name="PhoneErrorRate",
                        module="aped.mlops.pipeline.metrics",
                    ),
                    tfma.MetricConfig(
                        class_name="PhonesPrecision",
                        module="aped.mlops.pipeline.metrics",
                    ),
                    tfma.MetricConfig(
                        class_name="PhonesRecall",
                        module="aped.mlops.pipeline.metrics",
                    ),
                    tfma.MetricConfig(
                        class_name="PhonesF1Score",
                        module="aped.mlops.pipeline.metrics",
                    ),
                    tfma.MetricConfig(class_name="ExampleCount"),
                    tfma.MetricConfig(class_name="SparseCategoricalCrossentropy"),
                ],
            ),
        ],
        slicing_specs=[
            tfma.SlicingSpec(),
            tfma.SlicingSpec(feature_keys=["accent"]),
            tfma.SlicingSpec(feature_keys=["recording_length"]),
            tfma.SlicingSpec(feature_keys=["age"]),
            tfma.SlicingSpec(feature_keys=["gender"]),
            tfma.SlicingSpec(feature_keys=["bg_noise_type"]),
            tfma.SlicingSpec(feature_keys=["bg_noise_level"]),
            tfma.SlicingSpec(feature_keys=["english_level"]),
        ],
    )

    return eval_config

Code snippet for the Evaluator component in the TFX pipeline

evaluator = tfx.components.Evaluator(
    examples=transform.outputs["transformed_examples"],
    model=trainer.outputs["model"],
    # baseline_model=model_resolver.outputs['model'],
    eval_config=eval_config,
    example_splits=["eval"],
)
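
The GenericChangeThreshold in the eval config compares the candidate model against a baseline, so the commented-out baseline_model input would normally be fed by a latest-blessed-model resolver. A sketch of the standard TFX wiring (the node id is illustrative; this is not part of my current pipeline):

# Standard TFX resolver pattern that would feed baseline_model above.
model_resolver = tfx.dsl.Resolver(
    strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
    model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
    model_blessing=tfx.dsl.Channel(type=tfx.types.standard_artifacts.ModelBlessing),
).with_id("latest_blessed_model_resolver")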

MultiHeadAttention layer declaration snippet

import numpy as np
import tensorflow as tf


class MultiHeadAttention(tf.keras.layers.Layer):
    """MultiHeadAttention Custom Layer"""

    def __init__(self, d_model: int, num_heads: int, dropout_rate: float, mixed_precision: bool = False) -> None:
        """Initialise the MultiHeadAttention Layer

        Args:
            d_model (int): Attention modelling dimension
            num_heads (int): Number of attention heads
            dropout_rate (float): Dropout rate used by the layer
            mixed_precision (bool, optional): True if the layer needs to handle mixed
                precision with float16. Defaults to False.
        """
        super().__init__()
        self.num_heads = num_heads
        self.d_model = d_model
        self.dropout_rate = dropout_rate
        self.mixed_precision = mixed_precision

        assert d_model % self.num_heads == 0

        self.depth = d_model // self.num_heads

        init = tf.keras.initializers.RandomNormal(mean=0, stddev=np.sqrt(2.0 / (d_model + self.depth)))

        self.wq = tf.keras.layers.Dense(d_model, kernel_initializer=init)
        self.wk = tf.keras.layers.Dense(d_model, kernel_initializer=init)
        self.wv = tf.keras.layers.Dense(d_model, kernel_initializer=init)

        self.dense = tf.keras.layers.Dense(d_model, kernel_initializer="glorot_normal")
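
Since the same model has to be saved and reloaded by the pipeline, the renamed layer presumably also needs to round-trip through serialization. A minimal sketch of how I understand that is usually done, keeping only the config-related parts (the package name "aped" is an assumption):

import tensorflow as tf


# Sketch: registering under a package namespace gives the layer a serialized identifier
# like "aped>CustomMultiHeadAttention", which cannot collide with the built-in
# tf.keras.layers.MultiHeadAttention.
@tf.keras.utils.register_keras_serializable(package="aped")
class CustomMultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model: int, num_heads: int, dropout_rate: float,
                 mixed_precision: bool = False, **kwargs) -> None:
        super().__init__(**kwargs)
        self.d_model = d_model
        self.num_heads = num_heads
        self.dropout_rate = dropout_rate
        self.mixed_precision = mixed_precision
        # ... same sublayers as in the snippet above ...

    def get_config(self):
        # get_config lets Keras rebuild the layer when the saved model config is reloaded.
        config = super().get_config()
        config.update({
            "d_model": self.d_model,
            "num_heads": self.num_heads,
            "dropout_rate": self.dropout_rate,
            "mixed_precision": self.mixed_precision,
        })
        return config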

abbyDC, Apr 10, 2022

Hi @abbyDC, can you take a look at the workaround proposed in this link and see if it helps resolve your issue? You can also refer to the TFMA Evaluator documentation. Hope this helps. Thanks!

pindinagesh, Apr 13, 2022

Hi @pindinagesh! The link you attached doesn't show anything on my end when I click on it. May I ask for a working link so I can take a look at it? Thanks! :) (screenshot attached: 2022-04-18 11-44-11)

abbyDC, Apr 18, 2022

Sorry for the inconvenience; I have updated it again. Could you please check it?

pindinagesh, Apr 18, 2022

Hi, yup, the link works now. I'll take a look at the post first to check which of the workarounds I have already tried.

abbyDC, Apr 19, 2022

Hi @abbyDC

Could you please tell us the status of this issue?

pindinagesh, Apr 27, 2022

Hello! Upon further investigation and experimentation, the problem still looks the same to me. Here are several things I've tried, similar to the issue above:

  1. Adding "serving_raw" to the output signatures: this has already been implemented in my code as "serving_evaluator" with the lines below, but I still get the same error (a trimmed sketch of how these signatures are attached at save time follows this list):
def _get_tf_examples_serving_signature(model, tf_transform_output):
    """Returns a serving signature that accepts `tensorflow.Example`."""

    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string, name="examples")])
    def serve_tf_examples_fn(serialized_tf_example):
        """Returns the output to be used in the serving signature."""

        transformed_specs = tf_transform_output.transformed_feature_spec()
        transformed_features = tf.io.parse_example(serialized_tf_example, transformed_specs)
        transformed_features["audio"] = tf.sparse.to_dense(transformed_features["audio"])
        transformed_features["target_phones"] = tf.sparse.to_dense(transformed_features["target_phones"])

        audio = transformed_features["audio"]
        labels = transformed_features["target_phones"]
        outputs = model((audio, labels))
        return outputs

    return serve_tf_examples_fn

signatures = {
    "serving_default": default_signature,
    "serving_evaluator": _get_tf_examples_serving_signature(model, tf_transform_output),
}
  2. I have tried using both "examples=example_gen.outputs['examples']" and "examples=transform.outputs['transformed_examples']" as input to the Evaluator, but there is no difference when I run the pipeline.
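
For reference, a trimmed sketch of how the signatures from item 1 are attached when the trainer exports the model (fn_args.serving_model_dir is the standard TFX FnArgs attribute; `signatures` is the dict shown above):

# Export step in run_fn.
model.save(fn_args.serving_model_dir, save_format="tf", signatures=signatures)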

abbyDC, Apr 28, 2022