
Reduce running time of unit tests

Open · oliverholworthy opened this issue 3 years ago · 1 comment

Goals :soccer:

  • Improve iteration speed by reducing the running time of unit tests

Implementation Details :construction:

Testing Details :mag:

oliverholworthy · Oct 24 '22 13:10

Click to view CI Results
GitHub pull request #820 of commit daa1f67cc21b51616976ae6162df2d2ba2ab05a5, no merge conflicts.
Running as SYSTEM
Setting status of daa1f67cc21b51616976ae6162df2d2ba2ab05a5 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1566/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/820/*:refs/remotes/origin/pr/820/* # timeout=10
 > git rev-parse daa1f67cc21b51616976ae6162df2d2ba2ab05a5^{commit} # timeout=10
Checking out Revision daa1f67cc21b51616976ae6162df2d2ba2ab05a5 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f daa1f67cc21b51616976ae6162df2d2ba2ab05a5 # timeout=10
Commit message: "Reduce size of int domains in testing dataset schema"
 > git rev-list --no-walk 81963b87cf7771d776d3dab9bf5613264440a57b # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins12829560077345759512.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 775 items

tests/unit/config/test_schema.py .... [  0%]
tests/unit/datasets/test_advertising.py .s [  0%]
tests/unit/datasets/test_ecommerce.py ..sss [  1%]
tests/unit/datasets/test_entertainment.py ....sss. [  2%]
tests/unit/datasets/test_social.py . [  2%]
tests/unit/datasets/test_synthetic.py ...... [  3%]
tests/unit/implicit/test_implicit.py . [  3%]
tests/unit/lightfm/test_lightfm.py . [  3%]
tests/unit/tf/test_core.py ...... [  4%]
tests/unit/tf/test_loader.py ................ [  6%]
tests/unit/tf/test_public_api.py . [  6%]
tests/unit/tf/blocks/test_cross.py ........... [  8%]
tests/unit/tf/blocks/test_dlrm.py .......... [  9%]
tests/unit/tf/blocks/test_interactions.py ... [  9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 13%]
tests/unit/tf/blocks/test_optimizer.py s..................................................... [ 20%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 21%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 23%]
tests/unit/tf/core/test_aggregation.py ......... [ 24%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 27%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py F.. [ 27%]
tests/unit/tf/core/test_prediction.py .. [ 28%]
tests/unit/tf/core/test_tabular.py ...... [ 28%]
tests/unit/tf/examples/test_01_getting_started.py . [ 29%]
tests/unit/tf/examples/test_02_dataschema.py . [ 29%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 30%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 30%]
tests/unit/tf/inputs/test_continuous.py ..... [ 30%]
tests/unit/tf/inputs/test_embedding.py ...............................FFFF.F... [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%]
tests/unit/tf/models/test_base.py s.................F..... [ 49%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ..........F.F.FF.FF..........FF. [ 58%]
tests/unit/tf/outputs/test_base.py ...... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py .............. [ 62%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py .................... [ 70%]
tests/unit/tf/transformers/test_transforms.py ...... [ 70%]
tests/unit/tf/transforms/test_bias.py .. [ 71%]
tests/unit/tf/transforms/test_features.py s.........................................................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 82%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 84%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 85%]
tests/unit/torch/test_public_api.py . [ 85%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 86%]
tests/unit/torch/features/test_embedding.py ......F....... [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 90%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 92%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_______________________________ test_topk_index ________________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f2221021dc0>

def test_topk_index(ecommerce_data: Dataset):
    import tensorflow as tf

    from merlin.models.tf.metrics.evaluation import ItemCoverageAt, PopularityBiasAt

    model: mm.RetrievalModel = mm.TwoTowerModel(
        ecommerce_data.schema, query_tower=mm.MLPBlock([64, 128])
    )
    model.compile(run_eagerly=False, optimizer="adam")
    model.fit(ecommerce_data, epochs=1, batch_size=50)

    item_features = ecommerce_data.schema.select_by_tag(Tags.ITEM).column_names
    item_dataset = ecommerce_data.to_ddf()[item_features].drop_duplicates().compute()
    item_dataset = Dataset(item_dataset)
    recommender = model.to_top_k_recommender(item_dataset, k=20)
    NUM_ITEMS = 1001
    item_frequency = tf.sort(
        tf.random.uniform((NUM_ITEMS,), minval=0, maxval=NUM_ITEMS, dtype=tf.int32)
    )
    eval_metrics = [
        PopularityBiasAt(item_freq_probs=item_frequency, is_prob_distribution=False, k=10),
        ItemCoverageAt(num_unique_items=NUM_ITEMS, k=10),
    ]
    batch = mm.sample_batch(ecommerce_data, batch_size=10, include_targets=False)
>       _, top_indices = recommender(batch)

tests/unit/tf/core/test_index.py:48:


merlin/models/config/schema.py:58: in __call__
    return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
    return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
    outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
    raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
    return fn(*args, **kwargs)
merlin/models/tf/models/base.py:88: in call
    outputs = call_layer(self.block, inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:433: in call_layer
    return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in __call__
    return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
    outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
    raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
    return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:269: in call
    return call_sequentially(self.layers, inputs, training=training, **kwargs)
merlin/models/tf/core/combinators.py:819: in call_sequentially
    outputs = call_layer(layer, outputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:433: in call_layer
    return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in __call__
    return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
    outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
    raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
    return fn(*args, **kwargs)
merlin/models/tf/core/index.py:229: in call
    top_scores, top_indices = tf.math.top_k(scores, k=k)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
    return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/nn_ops.py:5759: in top_k
    return gen_nn_ops.top_kv2(input, k=k, sorted=sorted, name=name)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py:11504: in top_kv2
    _ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
>     raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer "top_k_index_block" (type TopKIndexBlock).
E
E   input must have at least k columns. Had 9, needed 20 [Op:TopKV2]
E
E   Call arguments received by layer "top_k_index_block" (type TopKIndexBlock):
E     • inputs=tf.Tensor(shape=(10, 128), dtype=float32)
E     • k=None
E     • kwargs={'training': 'False', 'features': {'user_categories': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_shops': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_brands': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_intentions': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_profile': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_group': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_gender': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_age': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_consumption_1': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_consumption_2': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_is_occupied': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_geography': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'item_shop': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'item_intention': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'item_brand': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'item_id': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_item_categories': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_item_shops': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_item_brands': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'user_item_intentions': 'tf.Tensor(shape=(10, 1), dtype=int64)', 'position': 'tf.Tensor(shape=(10, 1), dtype=int64)'}}
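The failure above happens because `tf.math.top_k` requires the score tensor's last dimension to be at least `k`: the deduplicated candidate set has only 9 items, but the recommender was built with `k=20`. A minimal NumPy sketch of the defensive pattern (a hypothetical `top_k_clamped` helper, not Merlin's API, that clamps `k` to the number of available candidates):

```python
import numpy as np

def top_k_clamped(scores, k):
    """Return the top-k scores and indices per row, clamping k to the
    number of candidate columns so small catalogs don't raise."""
    num_candidates = scores.shape[-1]
    k = min(k, num_candidates)  # avoids "input must have at least k columns"
    # argsort on negated scores gives a descending order; keep the first k
    order = np.argsort(-scores, axis=-1)[..., :k]
    top_scores = np.take_along_axis(scores, order, axis=-1)
    return top_scores, order

scores = np.array([[0.1, 0.9, 0.5]])
top_scores, top_idx = top_k_clamped(scores, k=20)  # k silently clamped to 3
```

The alternative fix (the one this PR's test data change implies) is to keep `k` below the candidate count in the test itself.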

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: InvalidArgumentError
----------------------------- Captured stdout call -----------------------------

1/2 [==============>...............] - ETA: 3s - loss: 2.5887 - recall_at_10: 0.9000 - mrr_at_10: 0.2343 - ndcg_at_10: 0.3916 - map_at_10: 0.2343 - precision_at_10: 0.0900 - regularization_loss: 0.0000e+00
2/2 [==============================] - 4s 18ms/step - loss: 2.6354 - recall_at_10: 0.8697 - mrr_at_10: 0.4057 - ndcg_at_10: 0.5154 - map_at_10: 0.4057 - precision_at_10: 0.0870 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_1_layer_call_fn while saving (showing 5 of 58). These functions will not be directly callable after loading.
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
_____________________ test_embeddings_with_regularization ______________________

testing_data = <merlin.io.dataset.Dataset object at 0x7f2211de03d0>

def test_embeddings_with_regularization(testing_data: Dataset):
    schema = testing_data.schema.select_by_tag(Tags.ITEM_ID)
    dim = 16
    embeddings_wo_reg = mm.Embeddings(schema, dim=dim)
    embeddings_batch_reg = mm.Embeddings(schema, dim=dim, l2_batch_regularization_factor=0.2)
    embeddings_table_reg = mm.Embeddings(
        schema, dim=dim, embeddings_regularizer=tf.keras.regularizers.L2(0.2)
    )

    inputs = mm.sample_batch(testing_data, batch_size=100, include_targets=False)
    _ = embeddings_wo_reg(inputs)
    _ = embeddings_batch_reg(inputs)
    _ = embeddings_table_reg(inputs)

    assert not embeddings_wo_reg.losses
    assert embeddings_batch_reg.losses[0] > 0
>       tf.debugging.assert_greater(embeddings_table_reg.losses[0], embeddings_batch_reg.losses[0])

tests/unit/tf/inputs/test_embedding.py:474:


/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
    return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/check_ops.py:1037: in assert_greater_v2
    return assert_greater(x=x, y=y, summarize=summarize, message=message,
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
    return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/check_ops.py:1045: in assert_greater
    return _binary_assert('>', 'assert_greater', math_ops.greater, np.greater, x,


sym = '>', opname = 'assert_greater'
op_func = <function greater at 0x7f24e406c310>, static_func = <ufunc 'greater'>
x = <tf.Tensor: shape=(), dtype=float32, numpy=0.13445877>
y = <tf.Tensor: shape=(), dtype=float32, numpy=0.23662412>
data = ['Condition x > y did not hold.', 'First 1 elements of x:', array([0.13445877], dtype=float32), 'First 1 elements of y:', array([0.23662412], dtype=float32)]
summarize = 3, message = None, name = None

def _binary_assert(sym, opname, op_func, static_func, x, y, data, summarize,
                   message, name):
  """Generic binary elementwise assertion.

  Implements the behavior described in _binary_assert_doc() above.
  Args:
    sym: Mathematical symbol for the test to apply to pairs of tensor elements,
      i.e. "=="
    opname: Name of the assert op in the public API, i.e. "assert_equal"
    op_func: Function that, if passed the two Tensor inputs to the assertion (x
      and y), will return the test to be passed to reduce_all() i.e.
    static_func: Function that, if passed numpy ndarray versions of the two
      inputs to the assertion, will return a Boolean ndarray with containing
      True in all positions where the assertion PASSES.
      i.e. np.equal for assert_equal()
    x:  Numeric `Tensor`.
    y:  Numeric `Tensor`, same dtype as and broadcastable to `x`.
    data:  The tensors to print out if the condition is False.  Defaults to
      error message and first few entries of `x`, `y`.
    summarize: Print this many entries of each tensor.
    message: A string to prefix to the default message.
    name: A name for this operation (optional).  Defaults to the value of
      `opname`.

  Returns:
    See docstring template in _binary_assert_doc().
  """
  with ops.name_scope(name, opname, [x, y, data]):
    x = ops.convert_to_tensor(x, name='x')
    y = ops.convert_to_tensor(y, name='y')

    if context.executing_eagerly():
      test_op = op_func(x, y)
      condition = math_ops.reduce_all(test_op)
      if condition:
        return

      # If we get here, the assertion has failed.
      # Default to printing 3 elements like control_flow_ops.Assert (used
      # by graph mode) does. Also treat negative values as "print
      # everything" for consistency with Tensor::SummarizeValue().
      if summarize is None:
        summarize = 3
      elif summarize < 0:
        summarize = 1e9  # Code below will find exact size of x and y.

      if data is None:
        data = _make_assert_msg_data(sym, x, y, summarize, test_op)

      if message is not None:
        data = [message] + list(data)
>     raise errors.InvalidArgumentError(
          node_def=None,
          op=None,
          message=('\n'.join(_pretty_print(d, summarize) for d in data)))

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Condition x > y did not hold.
E   First 1 elements of x:
E   [0.13445877]
E   First 1 elements of y:
E   [0.23662412]
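This assertion compares two different L2 penalties: `l2_batch_regularization_factor` penalizes only the embedding rows looked up in the current batch, while `embeddings_regularizer` penalizes the entire table. Which value comes out larger depends on how each penalty is normalized and on how much of the table the batch touches, so the expectation is data-dependent (here it failed with 0.1344 vs 0.2366). A rough NumPy illustration of the two quantities, with hypothetical shapes and factor, not the Merlin internals:

```python
import numpy as np

rng = np.random.default_rng(0)
table = rng.normal(size=(1000, 16))                   # full embedding table
batch_rows = table[rng.integers(0, 1000, size=100)]   # rows hit by one batch

factor = 0.2
batch_penalty = factor * np.sum(batch_rows ** 2)  # batch-level L2: only looked-up rows
table_penalty = factor * np.sum(table ** 2)       # table-level L2: every row

# Here the table penalty sums over all 1000 rows while the batch penalty
# sums over at most 100, so table_penalty dominates; with other
# normalizations (e.g. averaging) the ordering can flip.
```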

/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/check_ops.py:407: InvalidArgumentError
___________ test_embedding_features_yoochoose_infer_embedding_sizes ____________

testing_data = <merlin.io.dataset.Dataset object at 0x7f221224e880>

def test_embedding_features_yoochoose_infer_embedding_sizes(testing_data: Dataset):
    schema = testing_data.schema.select_by_tag(Tags.CATEGORICAL)

    emb_module = mm.EmbeddingFeatures.from_schema(
        schema,
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True, infer_embedding_sizes_multiplier=3.0
        ),
    )

    embeddings = emb_module(mm.sample_batch(testing_data, batch_size=100, include_targets=False))
>       assert (
        emb_module.embedding_tables["user_id"].embeddings.shape[1]
        == embeddings["user_id"].shape[1]
        == 20
    )

E assert 7 == 20
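The three `infer_embedding_sizes` failures follow directly from the commit under test ("Reduce size of int domains in testing dataset schema"): the inferred dimension is a function of each column's cardinality, so shrinking the domains shrinks the inferred sizes (7 instead of 20 here) and the hard-coded expectations break. A hedged sketch of the common fourth-root heuristic; `infer_dim` is a hypothetical stand-in, and Merlin's actual formula and rounding may differ:

```python
import math

def infer_dim(cardinality, multiplier=3.0, multiple_of_8=False):
    """Infer an embedding dimension from a categorical cardinality using
    the widely used multiplier * cardinality**0.25 heuristic."""
    dim = int(round(multiplier * cardinality ** 0.25))
    if multiple_of_8:
        dim = max(8, 8 * math.ceil(dim / 8))  # round up to a multiple of 8
    return dim

# A smaller int domain yields a smaller inferred dimension:
small = infer_dim(30)      # small cardinality -> small dim
large = infer_dim(52000)   # large cardinality -> larger dim
```

Under this heuristic, updating the asserted sizes (or pinning explicit `embedding_dims`) is the expected follow-up when the testing schema's domains change.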

tests/unit/tf/inputs/test_embedding.py:489: AssertionError
______ test_embedding_features_yoochoose_infer_embedding_sizes_multiple_8 ______

testing_data = <merlin.io.dataset.Dataset object at 0x7f2212d5e640>

def test_embedding_features_yoochoose_infer_embedding_sizes_multiple_8(testing_data: Dataset):
    schema = testing_data.schema.select_by_tag(Tags.CATEGORICAL)

    emb_module = mm.EmbeddingFeatures.from_schema(
        schema,
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
        ),
    )

    embeddings = emb_module(mm.sample_batch(testing_data, batch_size=100, include_targets=False))
>       assert (
        emb_module.embedding_tables["user_id"].embeddings.shape[1]
        == embeddings["user_id"].shape[1]
        == 24
    )

E assert 8 == 24

tests/unit/tf/inputs/test_embedding.py:525: AssertionError
______ test_embedding_features_yoochoose_partially_infer_embedding_sizes _______

testing_data = <merlin.io.dataset.Dataset object at 0x7f221af735e0>

def test_embedding_features_yoochoose_partially_infer_embedding_sizes(testing_data: Dataset):
    schema = testing_data.schema.select_by_tag(Tags.CATEGORICAL)

    emb_module = mm.EmbeddingFeatures.from_schema(
        schema,
        embedding_options=mm.EmbeddingOptions(
            embedding_dims={"user_id": 50, "user_country": 100},
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
        ),
    )

    embeddings = emb_module(mm.sample_batch(testing_data, batch_size=100, include_targets=False))

    assert (
        emb_module.embedding_tables["user_id"].embeddings.shape[1]
        == embeddings["user_id"].shape[1]
        == 50
    )
    assert (
        emb_module.embedding_tables["user_country"].embeddings.shape[1]
        == embeddings["user_country"].shape[1]
        == 100
    )
>       assert (
        emb_module.embedding_tables["item_id"].embeddings.shape[1]
        == embeddings["item_id"].shape[1]
        == 46
    )

E assert 9 == 46

tests/unit/tf/inputs/test_embedding.py:571: AssertionError
___________ test_embedding_features_yoochoose_pretrained_initializer ___________

testing_data = <merlin.io.dataset.Dataset object at 0x7f22211d85b0>

def test_embedding_features_yoochoose_pretrained_initializer(testing_data: Dataset):
    schema = testing_data.schema.select_by_tag(Tags.CATEGORICAL)

    pretrained_emb_item_ids = np.random.random((51997, 64))
    pretrained_emb_categories = np.random.random((332, 64))

    emb_module = mm.EmbeddingFeatures.from_schema(
        schema,
        embedding_options=mm.EmbeddingOptions(
            embeddings_initializers={
                "item_id": mm.TensorInitializer(pretrained_emb_item_ids),
                "categories": mm.TensorInitializer(pretrained_emb_categories),
            },
        ),
    )

    # Calling the first batch, so that embedding tables are build
>       _ = emb_module(mm.sample_batch(testing_data, batch_size=10, include_targets=False))

tests/unit/tf/inputs/test_embedding.py:641:


merlin/models/tf/core/tabular.py:490: in _tabular_call
    outputs = self.super().call(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
    return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in __call__
    self._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
    self.build(input_shapes)  # pylint:disable=not-callable
merlin/models/tf/inputs/embedding.py:854: in build
    embedding_table.build(())
/usr/local/lib/python3.8/dist-packages/keras/utils/tf_utils.py:338: in wrapper
    output_shape = fn(instance, input_shape)
/usr/local/lib/python3.8/dist-packages/keras/layers/core/embedding.py:157: in build
    self.embeddings = self.add_weight(
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:665: in add_weight
    variable = self._add_variable_with_custom_getter(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py:873: in _add_variable_with_custom_getter
    new_variable = getter(
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer_utils.py:126: in make_variable
    return tf.compat.v1.Variable(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py:264: in __call__
    return cls._variable_v1_call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py:209: in _variable_v1_call
    return previous_getter(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py:202: in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py:2705: in default_variable_creator
    return resource_variable_ops.ResourceVariable(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py:268: in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1630: in __init__
    self._init_from_args(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1783: in _init_from_args
    initial_value = initial_value()
merlin/models/tf/utils/tf_utils.py:396: in __call__
    tf.assert_equal(shape, self._weights.shape)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
    return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/check_ops.py:713: in assert_equal_v2
    return assert_equal(x=x, y=y, summarize=summarize, message=message, name=name)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler
    return dispatch_target(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/check_ops.py:724: in assert_equal
    return _binary_assert('==', 'assert_equal', math_ops.equal, np.equal, x, y,


sym = '==', opname = 'assert_equal'
op_func = <function equal at 0x7f24e1b15ca0>, static_func = <ufunc 'equal'>
x = <tf.Tensor: shape=(2,), dtype=int32, numpy=array([52, 64], dtype=int32)>
y = <tf.Tensor: shape=(2,), dtype=int32, numpy=array([51997, 64], dtype=int32)>
data = ['Condition x == y did not hold.', 'Indices of first 1 different values:', array([[0]]), 'Corresponding x values:', array([52], dtype=int32), 'Corresponding y values:', ...]
summarize = 3, message = None, name = None

def _binary_assert(sym, opname, op_func, static_func, x, y, data, summarize,
                   message, name):
  """Generic binary elementwise assertion.

  Implements the behavior described in _binary_assert_doc() above.
  Args:
    sym: Mathematical symbol for the test to apply to pairs of tensor elements,
      i.e. "=="
    opname: Name of the assert op in the public API, i.e. "assert_equal"
    op_func: Function that, if passed the two Tensor inputs to the assertion (x
      and y), will return the test to be passed to reduce_all() i.e.
    static_func: Function that, if passed numpy ndarray versions of the two
      inputs to the assertion, will return a Boolean ndarray with containing
      True in all positions where the assertion PASSES.
      i.e. np.equal for assert_equal()
    x:  Numeric `Tensor`.
    y:  Numeric `Tensor`, same dtype as and broadcastable to `x`.
    data:  The tensors to print out if the condition is False.  Defaults to
      error message and first few entries of `x`, `y`.
    summarize: Print this many entries of each tensor.
    message: A string to prefix to the default message.
    name: A name for this operation (optional).  Defaults to the value of
      `opname`.

  Returns:
    See docstring template in _binary_assert_doc().
  """
  with ops.name_scope(name, opname, [x, y, data]):
    x = ops.convert_to_tensor(x, name='x')
    y = ops.convert_to_tensor(y, name='y')

    if context.executing_eagerly():
      test_op = op_func(x, y)
      condition = math_ops.reduce_all(test_op)
      if condition:
        return

      # If we get here, the assertion has failed.
      # Default to printing 3 elements like control_flow_ops.Assert (used
      # by graph mode) does. Also treat negative values as "print
      # everything" for consistency with Tensor::SummarizeValue().
      if summarize is None:
        summarize = 3
      elif summarize < 0:
        summarize = 1e9  # Code below will find exact size of x and y.

      if data is None:
        data = _make_assert_msg_data(sym, x, y, summarize, test_op)

      if message is not None:
        data = [message] + list(data)
>     raise errors.InvalidArgumentError(
          node_def=None,
          op=None,
          message=('\n'.join(_pretty_print(d, summarize) for d in data)))

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Condition x == y did not hold.
E   Indices of first 1 different values:
E   [[0]]
E   Corresponding x values:
E   [52]
E   Corresponding y values:
E   [51997]
E   First 2 elements of x:
E   [52 64]
E   First 2 elements of y:
E   [51997 64]
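The failure above is TF's `_binary_assert` (source shown in this traceback) rejecting a variable rebuilt with shape `[52, 64]` against stored weights of shape `[51997, 64]`. As an editorial aid, here is a minimal numpy sketch of that elementwise check; `binary_assert_equal` is a hypothetical name, not TensorFlow or Merlin API:

```python
# Hypothetical sketch of the elementwise check _binary_assert performs above;
# not TensorFlow or Merlin code. The values come from the log: shape [52, 64]
# was requested, but the stored weights have shape [51997, 64].
import numpy as np


def binary_assert_equal(x, y, summarize=3):
    """Mimic tf.assert_equal's eager path: pass if all elements match, else raise."""
    x, y = np.asarray(x), np.asarray(y)
    test = np.equal(x, y)
    if test.all():
        return
    bad = np.argwhere(~test)
    raise ValueError(
        "Condition x == y did not hold. "
        f"Indices of first {min(len(bad), summarize)} different values: {bad[:summarize].tolist()}, "
        f"corresponding x values: {x[~test][:summarize].tolist()}, "
        f"y values: {y[~test][:summarize].tolist()}"
    )


binary_assert_equal([52, 64], [52, 64])  # matching shapes pass silently
try:
    binary_assert_equal([52, 64], [51997, 64])
except ValueError as err:
    print(err)  # reports the mismatch at index [0], like the log above
```

The real op additionally summarizes the first few elements of each tensor, as seen in the `E` block above.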

/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/check_ops.py:407: InvalidArgumentError
__________________________ test_retrieval_model_query __________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f2230298bb0>
run_eagerly = True

def test_retrieval_model_query(ecommerce_data: Dataset, run_eagerly=True):
    query = ecommerce_data.schema.select_by_tag(Tags.USER_ID)
    candidate = ecommerce_data.schema.select_by_tag(Tags.ITEM_ID)

    loader = mm.Loader(
        ecommerce_data, batch_size=50, transform=mm.ToTarget(ecommerce_data.schema, Tags.ITEM_ID)
    )

    model = mm.RetrievalModelV2(
        query=mm.EmbeddingEncoder(query, dim=8),
        output=mm.ContrastiveOutput(candidate, "in-batch"),
    )

    model, _ = testing_utils.model_test(model, loader, reload_model=True, run_eagerly=run_eagerly)

    assert isinstance(model.query_encoder, mm.EmbeddingEncoder)
    assert isinstance(model.last, mm.ContrastiveOutput)

    queries = model.query_embeddings().compute()
>       _check_embeddings(queries, 1001)

tests/unit/tf/models/test_base.py:694:


embeddings =           0         1         2  ...        5         6         7
0  -0.018745  0.020816  0.017914  ...  0.037717  0...0.022089  0.043691  0.015450
10 -0.024256  0.047961 -0.031460  ...  0.001377  0.042455  0.028802

[11 rows x 8 columns]
extected_len = 1001, index_name = None

def _check_embeddings(embeddings, extected_len, index_name=None):
    if not isinstance(embeddings, pd.DataFrame):
        embeddings = embeddings.to_pandas()

    assert isinstance(embeddings, pd.DataFrame)
    assert list(embeddings.columns) == [str(i) for i in range(8)]
>       assert len(embeddings.index) == extected_len

E       assert 11 == 1001
E        +  where 11 = len(RangeIndex(start=0, stop=11, step=1))
E        +    where RangeIndex(start=0, stop=11, step=1) = 0 1 2 ... 5 6 7\n0 -0.018745 0.020816 0.017914 ... 0.037717 0...0.022089 0.043691 0.015450\n10 -0.024256 0.047961 -0.031460 ... 0.001377 0.042455 0.028802\n\n[11 rows x 8 columns].index

tests/unit/tf/models/test_base.py:729: AssertionError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 2.8820 - recall_at_10: 0.8600 - mrr_at_10: 0.1934 - ndcg_at_10: 0.3365 - map_at_10: 0.1934 - precision_at_10: 0.0860 - regularization_loss: 0.0000e+00
1/1 [==============================] - 1s 867ms/step - loss: 2.8820 - recall_at_10: 0.8600 - mrr_at_10: 0.1934 - ndcg_at_10: 0.3365 - map_at_10: 0.1934 - precision_at_10: 0.0860 - regularization_loss: 0.0000e+00
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:Gradients do not exist for variables ['retrieval_model_v2/bias:0'] when minimizing the loss. If you're using `model.compile()`, did you forget to provide a `loss` argument?
WARNING:tensorflow:Gradients do not exist for variables ['retrieval_model_v2/bias:0'] when minimizing the loss. If you're using `model.compile()`, did you forget to provide a `loss` argument?
------------------------------ Captured log call -------------------------------
WARNING  tensorflow:utils.py:76 Gradients do not exist for variables ['retrieval_model_v2/bias:0'] when minimizing the loss. If you're using `model.compile()`, did you forget to provide a `loss` argument?
WARNING  absl:save.py:233 Found untraced functions such as train_compute_metrics, model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, categorical_target_layer_call_fn, categorical_target_layer_call_and_return_conditional_losses while saving (showing 5 of 9). These functions will not be directly callable after loading.
WARNING  tensorflow:utils.py:76 Gradients do not exist for variables ['retrieval_model_v2/bias:0'] when minimizing the loss. If you're using `model.compile()`, did you forget to provide a `loss` argument?
_ test_two_tower_model_with_custom_options[categorical_crossentropy-False-False] _

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f22131d2220>
run_eagerly = False, logits_pop_logq_correction = False
loss = 'categorical_crossentropy'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)

    losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)
    assert len(losses.epoch) == 1
    assert all(measure >= 0 for metric in losses.history for measure in losses.history[metric])
>       metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)

tests/unit/tf/models/test_retrieval.py:183:


merlin/models/tf/models/base.py:1395: in evaluate
    return super().evaluate(
merlin/models/tf/models/base.py:876: in evaluate
    out = super().evaluate(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate
    tmp_logs = self.test_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:986: in _call
    return self._concrete_stateful_fn._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:497: in call
    outputs = execute.execute(


op_name = '__inference_test_function_700105', num_outputs = 7
inputs = [<tf.Tensor: shape=(), dtype=resource, value=<ResourceHandle(name="Resource-12275-at-0x66690250", device="/job:localho...ce:GPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [8,2] ]")>>, ...]
attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x022\x11*\x030,1J\n\n\x06\n\x04\x00`\xcbD\n\x008\x01\x82\x01\x00')
ctx = <tensorflow.python.eager.context.Context object at 0x7f2438122cd0>
name = None

def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
  """Execute a TensorFlow operation.

  Args:
    op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
      execute.
    num_outputs: The number of outputs of the operation to fetch. (Explicitly
      provided instead of being inferred for performance reasons).
    inputs: A list of inputs to the operation. Each entry should be a Tensor, or
      a value which can be passed to the Tensor constructor to create one.
    attrs: A tuple with alternating string attr names and attr values for this
      operation.
    ctx: The value of context.context().
    name: Customized name for the operation.

  Returns:
    List of output Tensor objects. The list is empty if there are no outputs

  Raises:
    An exception on error.
  """
  device_name = ctx.device_name
  # pylint: disable=protected-access
  try:
    ctx.ensure_initialized()
>   tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                        inputs, attrs, num_outputs)

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
E
E   Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last):
E     File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E       return _run_code(code, main_globals, None,
E     File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E       exec(code, run_globals)
E     File "/usr/local/lib/python3.8/dist-packages/pytest/__main__.py", line 5, in <module>
E       raise SystemExit(pytest.console_main())
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 187, in console_main
E       code = main()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 164, in main
E       ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main
E       return wrap_session(config, _main)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session
E       session.exitstatus = doit(config, session) or 0
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main
E       config.hook.pytest_runtestloop(session=session)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop
E       item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol
E       runtestprotocol(item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol
E       reports.append(call_and_report(item, "call", log))
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report
E       call = call_runtest_hook(item, when, **kwds)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook
E       return CallInfo.from_call(
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call
E       result: Optional[TResult] = func()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in <lambda>
E       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call
E       item.runtest()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest
E       self.ihook.pytest_pyfunc_call(pyfuncitem=self)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
E       result = testfunction(**testargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 183, in test_two_tower_model_with_custom_options
E       metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate
E       return super().evaluate(
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate
E       out = super().evaluate(
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate
E       tmp_logs = self.test_function(iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function
E       return step_function(self, iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function
E       outputs = model.distribute_strategy.run(run_step, args=(data,))
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step
E       outputs = model.test_step(data)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step
E       outputs = self.pre_eval_topk.call_outputs(outputs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs
E       pred_top_scores, top_ids = self(queries, k=self._k)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in __call__
E       return super().__call__(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
E       outputs = call_fn(inputs, *args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E       return fn(*args, **kwargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call
E       top_scores, top_indices = tf.math.top_k(scores, k=k)
E   Node: 'top_k_index_block/TopKV2'
E   2 root error(s) found.
E     (0) INVALID_ARGUMENT: input must have at least k columns. Had 8, needed 10
E   	 [[{{node top_k_index_block/TopKV2}}]]
E   	 [[assert_greater_equal/Assert/AssertGuard/pivot_f/_22/_29]]
E     (1) INVALID_ARGUMENT: input must have at least k columns. Had 8, needed 10
E   	 [[{{node top_k_index_block/TopKV2}}]]
E   0 successful operations.
E   0 derived errors ignored. [Op:__inference_test_function_700105]
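The root error above ("input must have at least k columns. Had 8, needed 10") appears to be `tf.math.top_k` being asked for the top k=10 scores when only 8 candidate columns were available. A numpy sketch of top-k with the same precondition; `top_k` here is a hypothetical helper for illustration, not the Merlin `TopKIndexBlock` implementation:

```python
# Hypothetical numpy sketch of a top-k with the precondition that TopKV2
# enforces in the log above; not the Merlin TopKIndexBlock implementation.
import numpy as np


def top_k(scores, k):
    """Return the k largest scores per row and their column indices."""
    scores = np.asarray(scores)
    if scores.shape[-1] < k:
        # Same condition TopKV2 rejects above: Had 8, needed 10.
        raise ValueError(
            f"input must have at least k columns. Had {scores.shape[-1]}, needed {k}"
        )
    # argsort ascending, reverse for descending, keep the first k columns
    order = np.argsort(scores, axis=-1)[..., ::-1][..., :k]
    return np.take_along_axis(scores, order, axis=-1), order


values, indices = top_k([[0.1, 0.9, 0.4]], k=2)  # indices → [[1, 2]]
```

Under this reading, the fix is either to evaluate against a corpus with at least k candidates or to cap k at the number of available columns.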

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54: InvalidArgumentError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 2.9609 - auc: 0.8033 - recall_at_5: 0.1000 - recall_at_10: 0.8000 - mrr_at_10: 0.1867 - ndcg_at_10: 0.3266 - regularization_loss: 7.5796e-05
1/1 [==============================] - 2s 2s/step - loss: 2.9609 - auc: 0.8033 - recall_at_5: 0.1000 - recall_at_10: 0.8000 - mrr_at_10: 0.1867 - ndcg_at_10: 0.3266 - regularization_loss: 7.5796e-05
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
------------------------------ Captured log call -------------------------------
WARNING  merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
WARNING  absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_2_layer_call_fn while saving (showing 5 of 22). These functions will not be directly callable after loading.
WARNING  tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING  merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
_________ test_two_tower_model_with_custom_options[bpr-max-True-False] _________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f221b13f760>
run_eagerly = False, logits_pop_logq_correction = True, loss = 'bpr-max'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)

    losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)
    assert len(losses.epoch) == 1
    assert all(measure >= 0 for metric in losses.history for measure in losses.history[metric])
>       metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)

tests/unit/tf/models/test_retrieval.py:183:


merlin/models/tf/models/base.py:1395: in evaluate
    return super().evaluate(
merlin/models/tf/models/base.py:876: in evaluate
    out = super().evaluate(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate
    tmp_logs = self.test_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:986: in _call
    return self._concrete_stateful_fn._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:497: in call
    outputs = execute.execute(


op_name = '__inference_test_function_710253', num_outputs = 7
inputs = [<tf.Tensor: shape=(), dtype=resource, value=<ResourceHandle(name="Resource-12403-at-0x6fb54a20", device="/job:localho...ce:GPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [8,2] ]")>>, ...]
attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x022\x11*\x030,1J\n\n\x06\n\x04\x00`\xcbD\n\x008\x01\x82\x01\x00')
ctx = <tensorflow.python.eager.context.Context object at 0x7f2438122cd0>
name = None

def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
  """Execute a TensorFlow operation.

  Args:
    op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
      execute.
    num_outputs: The number of outputs of the operation to fetch. (Explicitly
      provided instead of being inferred for performance reasons).
    inputs: A list of inputs to the operation. Each entry should be a Tensor, or
      a value which can be passed to the Tensor constructor to create one.
    attrs: A tuple with alternating string attr names and attr values for this
      operation.
    ctx: The value of context.context().
    name: Customized name for the operation.

  Returns:
    List of output Tensor objects. The list is empty if there are no outputs

  Raises:
    An exception on error.
  """
  device_name = ctx.device_name
  # pylint: disable=protected-access
  try:
    ctx.ensure_initialized()
>   tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                        inputs, attrs, num_outputs)

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
E
E   Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last):
E     File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E       return _run_code(code, main_globals, None,
E     File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E       exec(code, run_globals)
E     File "/usr/local/lib/python3.8/dist-packages/pytest/__main__.py", line 5, in <module>
E       raise SystemExit(pytest.console_main())
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 187, in console_main
E       code = main()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 164, in main
E       ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main
E       return wrap_session(config, _main)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session
E       session.exitstatus = doit(config, session) or 0
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main
E       config.hook.pytest_runtestloop(session=session)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop
E       item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol
E       runtestprotocol(item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol
E       reports.append(call_and_report(item, "call", log))
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report
E       call = call_runtest_hook(item, when, **kwds)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook
E       return CallInfo.from_call(
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call
E       result: Optional[TResult] = func()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in <lambda>
E       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call
E       item.runtest()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest
E       self.ihook.pytest_pyfunc_call(pyfuncitem=self)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
E       result = testfunction(**testargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 183, in test_two_tower_model_with_custom_options
E       metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate
E       return super().evaluate(
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate
E       out = super().evaluate(
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate
E       tmp_logs = self.test_function(iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function
E       return step_function(self, iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function
E       outputs = model.distribute_strategy.run(run_step, args=(data,))
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step
E       outputs = model.test_step(data)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step
E       outputs = self.pre_eval_topk.call_outputs(outputs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs
E       pred_top_scores, top_ids = self(queries, k=self._k)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in __call__
E       return super().__call__(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
E       outputs = call_fn(inputs, *args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E       return fn(*args, **kwargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call
E       top_scores, top_indices = tf.math.top_k(scores, k=k)
E   Node: 'top_k_index_block/TopKV2'
E   2 root error(s) found.
E     (0) INVALID_ARGUMENT:  input must have at least k columns. Had 8, needed 10
E        [[{{node top_k_index_block/TopKV2}}]]
E        [[BPRmaxLoss/assert_less_equal/Assert/AssertGuard/pivot_f/_22/_61]]
E     (1) INVALID_ARGUMENT:  input must have at least k columns. Had 8, needed 10
E        [[{{node top_k_index_block/TopKV2}}]]
E   0 successful operations.
E   0 derived errors ignored. [Op:__inference_test_function_710253]

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54: InvalidArgumentError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 61.7987 - auc: 0.9428 - recall_at_5: 1.0000 - recall_at_10: 1.0000 - mrr_at_10: 0.2817 - ndcg_at_10: 0.4548 - regularization_loss: 8.0154e-05
1/1 [==============================] - 3s 3s/step - loss: 61.7987 - auc: 0.9428 - recall_at_5: 1.0000 - recall_at_10: 1.0000 - mrr_at_10: 0.2817 - ndcg_at_10: 0.4548 - regularization_loss: 8.0154e-05
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
------------------------------ Captured log call -------------------------------
WARNING  merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
WARNING  absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_3_layer_call_fn while saving (showing 5 of 22). These functions will not be directly callable after loading.
WARNING  tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING  merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
________ test_two_tower_model_with_custom_options[bpr-max-False-False] _________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f22101b5d00>
run_eagerly = False, logits_pop_logq_correction = False, loss = 'bpr-max'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)

    losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)
    assert len(losses.epoch) == 1
    assert all(measure >= 0 for metric in losses.history for measure in losses.history[metric])
>   metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)

tests/unit/tf/models/test_retrieval.py:183:


merlin/models/tf/models/base.py:1395: in evaluate
    return super().evaluate(
merlin/models/tf/models/base.py:876: in evaluate
    out = super().evaluate(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate
    tmp_logs = self.test_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:986: in _call
    return self._concrete_stateful_fn._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:497: in call
    outputs = execute.execute(


op_name = '__inference_test_function_720192', num_outputs = 7
inputs = [<tf.Tensor: shape=(), dtype=resource, value=<ResourceHandle(name="Resource-12529-at-0x6c713e70", device="/job:localho...ce:GPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [8,2] ]")>>, ...]
attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x022\x11*\x030,1J\n\n\x06\n\x04\x00`\xcbD\n\x008\x01\x82\x01\x00')
ctx = <tensorflow.python.eager.context.Context object at 0x7f2438122cd0>
name = None

def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
  """Execute a TensorFlow operation.

  Args:
    op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
      execute.
    num_outputs: The number of outputs of the operation to fetch. (Explicitly
      provided instead of being inferred for performance reasons).
    inputs: A list of inputs to the operation. Each entry should be a Tensor, or
      a value which can be passed to the Tensor constructor to create one.
    attrs: A tuple with alternating string attr names and attr values for this
      operation.
    ctx: The value of context.context().
    name: Customized name for the operation.

  Returns:
    List of output Tensor objects. The list is empty if there are no outputs

  Raises:
    An exception on error.
  """
  device_name = ctx.device_name
  # pylint: disable=protected-access
  try:
    ctx.ensure_initialized()
>   tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                        inputs, attrs, num_outputs)

E tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error: E
E Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/pytest/main.py", line 5, in E raise SystemExit(pytest.console_main()) E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 187, in console_main E code = main() E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 164, in main E ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main E return wrap_session(config, _main) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session E session.exitstatus = doit(config, session) or 0 E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main E config.hook.pytest_runtestloop(session=session) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File 
"/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop E item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol E runtestprotocol(item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol E reports.append(call_and_report(item, "call", log)) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report E call = call_runtest_hook(item, when, **kwds) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook E return CallInfo.from_call( E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call E result: Optional[TResult] = func() E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in E lambda: ihook(item=item, **kwds), when=when, reraise=reraise E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call E 
item.runtest() E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest E self.ihook.pytest_pyfunc_call(pyfuncitem=self) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call E result = testfunction(**testargs) E File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 183, in test_two_tower_model_with_custom_options E metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate E return super().evaluate( E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate E out = super().evaluate( E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate E tmp_logs = self.test_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step E outputs = model.test_step(data) E File 
"/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step E outputs = self.pre_eval_topk.call_outputs(outputs) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs E pred_top_scores, top_ids = self(queries, k=self._k) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call E return super().call(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call E outputs = call_fn(inputs, *args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler E return fn(*args, **kwargs) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call E top_scores, top_indices = tf.math.top_k(scores, k=k) E Node: 'top_k_index_block/TopKV2' E Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/pytest/main.py", line 5, in E raise SystemExit(pytest.console_main()) E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 187, in console_main E code = main() E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 164, in main E ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in 
_hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main E return wrap_session(config, _main) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session E session.exitstatus = doit(config, session) or 0 E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main E config.hook.pytest_runtestloop(session=session) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop E item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol E runtestprotocol(item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol E reports.append(call_and_report(item, "call", log)) E File 
"/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report E call = call_runtest_hook(item, when, **kwds) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook E return CallInfo.from_call( E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call E result: Optional[TResult] = func() E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in E lambda: ihook(item=item, **kwds), when=when, reraise=reraise E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call E item.runtest() E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest E self.ihook.pytest_pyfunc_call(pyfuncitem=self) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call E result = testfunction(**testargs) E File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 183, in test_two_tower_model_with_custom_options E metrics = 
model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate E return super().evaluate( E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate E out = super().evaluate( E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate E tmp_logs = self.test_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step E outputs = model.test_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step E outputs = self.pre_eval_topk.call_outputs(outputs) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs E pred_top_scores, top_ids = self(queries, k=self._k) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call E return super().call(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call E outputs = call_fn(inputs, *args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler E return fn(*args, **kwargs) E File 
"/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call E top_scores, top_indices = tf.math.top_k(scores, k=k) E Node: 'top_k_index_block/TopKV2' E 2 root error(s) found. E (0) INVALID_ARGUMENT: input must have at least k columns. Had 7, needed 10 E [[{{node top_k_index_block/TopKV2}}]] E [[assert_less_equal/Assert/AssertGuard/pivot_f/_42/_105]] E (1) INVALID_ARGUMENT: input must have at least k columns. Had 7, needed 10 E [[{{node top_k_index_block/TopKV2}}]] E 0 successful operations. E 0 derived errors ignored. [Op:__inference_test_function_720192]
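All of the failures above share one root cause: after shrinking the int domains in the testing dataset schema, the indexed item corpus has fewer candidates than the `k=10` requested by `RecallAt(10)`/`MRRAt(10)`/`NDCGAt(10)`, and `tf.math.top_k` requires its input's last dimension to be at least `k`. A minimal stdlib sketch of that contract (the `top_k` helper here is a stand-in for `tf.math.top_k`, and clamping `k` to the corpus size is a hypothetical mitigation, not existing merlin-models behaviour):

```python
def top_k(scores, k):
    """Stand-in for tf.math.top_k on a single row of scores.

    Mirrors TF's contract: the input must have at least k entries,
    otherwise TF raises InvalidArgumentError
    ("input must have at least k columns").
    """
    if len(scores) < k:
        raise ValueError(
            f"input must have at least k columns. Had {len(scores)}, needed {k}"
        )
    # Indices of the k largest scores, highest first.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return [scores[i] for i in order], order


scores = [0.2, 0.9, 0.1, 0.5, 0.4, 0.8, 0.3]  # only 7 candidate items in the corpus

# Requesting k=10 (as the @10 metrics do) fails, just like the CI run:
try:
    top_k(scores, 10)
except ValueError as e:
    print(e)

# Hypothetical guard: clamp k to the corpus size before the top-k call.
k = min(10, len(scores))
top_scores, top_ids = top_k(scores, k)
```

The alternative fix, of course, is to keep the test schema's item cardinality above the largest `k` used by the metrics.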

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54: InvalidArgumentError ----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 34.2799 - auc: 0.8080 - recall_at_5: 0.1400 - recall_at_10: 0.8400 - mrr_at_10: 0.2417 - ndcg_at_10: 0.3780 - regularization_loss: 9.3127e-05 1/1 [==============================] - 3s 3s/step - loss: 34.2799 - auc: 0.8080 - recall_at_5: 0.1400 - recall_at_10: 0.8400 - mrr_at_10: 0.2417 - ndcg_at_10: 0.3780 - regularization_loss: 9.3127e-05 ----------------------------- Captured stderr call ----------------------------- WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually. ------------------------------ Captured log call ------------------------------- WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch. WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_2_layer_call_fn while saving (showing 5 of 22). These functions will not be directly callable after loading. WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually. WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch. ___ test_two_tower_model_with_custom_options[binary_crossentropy-True-True] ____

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f220a3090d0> run_eagerly = True, logits_pop_logq_correction = True loss = 'binary_crossentropy'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)

    losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)
    assert len(losses.epoch) == 1
    assert all(measure >= 0 for metric in losses.history for measure in losses.history[metric])
  metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)

tests/unit/tf/models/test_retrieval.py:183:


merlin/models/tf/models/base.py:1395: in evaluate return super().evaluate( merlin/models/tf/models/base.py:876: in evaluate out = super().evaluate( /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate tmp_logs = self.test_function(iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1557: in test_function return step_function(self, iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1546: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper return func(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1535: in run_step outputs = model.test_step(data) merlin/models/tf/models/base.py:684: in test_step outputs = self.pre_eval_topk.call_outputs(outputs) merlin/models/tf/core/index.py:254: in call_outputs pred_top_scores, top_ids = self(queries, k=self._k) merlin/models/config/schema.py:58: in call return super().call(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call outputs = call_fn(inputs, *args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in 
error_handler raise new_e.with_traceback(e.traceback) from None /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler return fn(*args, **kwargs) merlin/models/tf/core/index.py:229: in call top_scores, top_indices = tf.math.top_k(scores, k=k) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler return dispatch_target(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/nn_ops.py:5759: in top_k return gen_nn_ops.top_kv2(input, k=k, sorted=sorted, name=name) /usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py:11504: in top_kv2 _ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
  raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer "top_k_index_block" (type TopKIndexBlock). E
E input must have at least k columns. Had 9, needed 10 [Op:TopKV2] E
E Call arguments received by layer "top_k_index_block" (type TopKIndexBlock): E • inputs=<tf.Variable 'retrieval_model/query:0' shape=(None, 2) dtype=float32, numpy= E array([[0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189], E [0.01791797, 0.02127189]], dtype=float32)> E • k=tf.Tensor(shape=(), dtype=int32) E • kwargs={'training': 'None'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: InvalidArgumentError ----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 3.8725 - auc: 0.8712 - recall_at_5: 0.1000 - recall_at_10: 1.0000 - mrr_at_10: 0.1594 - ndcg_at_10: 0.3441 - regularization_loss: 9.1021e-05 1/1 [==============================] - 1s 938ms/step - loss: 3.8725 - auc: 0.8712 - recall_at_5: 0.1000 - recall_at_10: 1.0000 - mrr_at_10: 0.1594 - ndcg_at_10: 0.3441 - regularization_loss: 9.1021e-05 ----------------------------- Captured stderr call ----------------------------- WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually. ------------------------------ Captured log call ------------------------------- WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch. WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_3_layer_call_fn while saving (showing 5 of 22). These functions will not be directly callable after loading. WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually. WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch. ___ test_two_tower_model_with_custom_options[binary_crossentropy-False-True] ___

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f2209efd5e0> run_eagerly = True, logits_pop_logq_correction = False loss = 'binary_crossentropy'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)

    losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)
    assert len(losses.epoch) == 1
    assert all(measure >= 0 for metric in losses.history for measure in losses.history[metric])
  metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)

tests/unit/tf/models/test_retrieval.py:183:


merlin/models/tf/models/base.py:1395: in evaluate return super().evaluate( merlin/models/tf/models/base.py:876: in evaluate out = super().evaluate( /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate tmp_logs = self.test_function(iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1557: in test_function return step_function(self, iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1546: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper return func(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1535: in run_step outputs = model.test_step(data) merlin/models/tf/models/base.py:684: in test_step outputs = self.pre_eval_topk.call_outputs(outputs) merlin/models/tf/core/index.py:254: in call_outputs pred_top_scores, top_ids = self(queries, k=self._k) merlin/models/config/schema.py:58: in call return super().call(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call outputs = call_fn(inputs, *args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in 
error_handler raise new_e.with_traceback(e.traceback) from None /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler return fn(*args, **kwargs) merlin/models/tf/core/index.py:229: in call top_scores, top_indices = tf.math.top_k(scores, k=k) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: in op_dispatch_handler return dispatch_target(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/nn_ops.py:5759: in top_k return gen_nn_ops.top_kv2(input, k=k, sorted=sorted, name=name) /usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py:11504: in top_kv2 _ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
  raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer "top_k_index_block" (type TopKIndexBlock). E
E input must have at least k columns. Had 9, needed 10 [Op:TopKV2] E
E Call arguments received by layer "top_k_index_block" (type TopKIndexBlock): E • inputs=<tf.Variable 'retrieval_model/query:0' shape=(None, 2) dtype=float32, numpy= E array([[-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.01575631, -0.07651653], E [-0.00480842, 0.05096143]], dtype=float32)> E • k=tf.Tensor(shape=(), dtype=int32) E • kwargs={'training': 'None'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: InvalidArgumentError ----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 0.2670 - auc: 0.7406 - recall_at_5: 0.1400 - recall_at_10: 0.8400 - mrr_at_10: 0.1705 - ndcg_at_10: 0.3146 - regularization_loss: 7.1548e-05 1/1 [==============================] - 2s 2s/step - loss: 0.2670 - auc: 0.7406 - recall_at_5: 0.1400 - recall_at_10: 0.8400 - mrr_at_10: 0.1705 - ndcg_at_10: 0.3146 - regularization_loss: 7.1548e-05 ----------------------------- Captured stderr call ----------------------------- WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually. ------------------------------ Captured log call ------------------------------- WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch. WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_2_layer_call_fn while saving (showing 5 of 22). These functions will not be directly callable after loading. WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually. WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch. __ test_two_tower_model_with_custom_options[binary_crossentropy-False-False] ___

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f2213049d90> run_eagerly = False, logits_pop_logq_correction = False loss = 'binary_crossentropy'

@pytest.mark.parametrize("run_eagerly", [True, False])
@pytest.mark.parametrize("logits_pop_logq_correction", [True, False])
@pytest.mark.parametrize("loss", ["categorical_crossentropy", "bpr-max", "binary_crossentropy"])
def test_two_tower_model_with_custom_options(
    ecommerce_data: Dataset,
    run_eagerly,
    logits_pop_logq_correction,
    loss,
):
    from tensorflow.keras import regularizers

    from merlin.models.tf.transforms.bias import PopularityLogitsCorrection
    from merlin.models.utils import schema_utils

    data = ecommerce_data
    data.schema = data.schema.select_by_name(["user_categories", "item_id"])

    metrics = [
        tf.keras.metrics.AUC(from_logits=True, name="auc"),
        mm.RecallAt(5),
        mm.RecallAt(10),
        mm.MRRAt(10),
        mm.NDCGAt(10),
    ]

    post_logits = None
    if logits_pop_logq_correction:
        cardinalities = schema_utils.categorical_cardinalities(data.schema)
        item_id_cardinalities = cardinalities[
            data.schema.select_by_tag(Tags.ITEM_ID).column_names[0]
        ]
        items_frequencies = tf.sort(
            tf.random.uniform((item_id_cardinalities,), minval=0, maxval=1000, dtype=tf.int32)
        )
        post_logits = PopularityLogitsCorrection(
            items_frequencies,
            schema=data.schema,
        )

    retrieval_task = mm.ItemRetrievalTask(
        samplers=[mm.InBatchSampler()],
        schema=data.schema,
        logits_temperature=0.1,
        post_logits=post_logits,
        store_negative_ids=True,
    )

    model = mm.TwoTowerModel(
        data.schema,
        query_tower=mm.MLPBlock(
            [2],
            activation="relu",
            no_activation_last_layer=True,
            dropout=0.1,
            kernel_regularizer=regularizers.l2(1e-5),
            bias_regularizer=regularizers.l2(1e-6),
        ),
        embedding_options=mm.EmbeddingOptions(
            infer_embedding_sizes=True,
            infer_embedding_sizes_multiplier=3.0,
            infer_embeddings_ensure_dim_multiple_of_8=True,
            embeddings_l2_reg=1e-5,
        ),
        prediction_tasks=retrieval_task,
    )

    model.compile(optimizer="adam", run_eagerly=run_eagerly, loss=loss, metrics=metrics)

    losses = model.fit(data, batch_size=50, epochs=1, steps_per_epoch=1)
    assert len(losses.epoch) == 1
    assert all(measure >= 0 for metric in losses.history for measure in losses.history[metric])
  metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1)

tests/unit/tf/models/test_retrieval.py:183:


merlin/models/tf/models/base.py:1395: in evaluate return super().evaluate( merlin/models/tf/models/base.py:876: in evaluate out = super().evaluate( /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate tmp_logs = self.test_function(iterator) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:986: in _call return self._concrete_stateful_fn._call_flat( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat return self._build_call_outputs(self._inference_function.call( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:497: in call outputs = execute.execute(


op_name = '__inference_test_function_736237', num_outputs = 7 inputs = [<tf.Tensor: shape=(), dtype=resource, value=<ResourceHandle(name="Resource-12783-at-0x65cc0240", device="/job:localho...ce:GPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [8,2] ]")>>, ...] attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x022\x11*\x030,1J\n\n\x06\n\x04\x00`\xcbD\n\x008\x01\x82\x01\x00') ctx = <tensorflow.python.eager.context.Context object at 0x7f2438122cd0> name = None

def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
  """Execute a TensorFlow operation.

  Args:
    op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
      execute.
    num_outputs: The number of outputs of the operation to fetch. (Explicitly
      provided instead of being inferred for performance reasons).
    inputs: A list of inputs to the operation. Each entry should be a Tensor, or
      a value which can be passed to the Tensor constructor to create one.
    attrs: A tuple with alternating string attr names and attr values for this
      operation.
    ctx: The value of context.context().
    name: Customized name for the operation.

  Returns:
    List of output Tensor objects. The list is empty if there are no outputs

  Raises:
    An exception on error.
  """
  device_name = ctx.device_name
  # pylint: disable=protected-access
  try:
    ctx.ensure_initialized()
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                        inputs, attrs, num_outputs)

E tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error: E
E Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/pytest/main.py", line 5, in E raise SystemExit(pytest.console_main()) E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 187, in console_main E code = main() E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 164, in main E ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main E return wrap_session(config, _main) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session E session.exitstatus = doit(config, session) or 0 E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main E config.hook.pytest_runtestloop(session=session) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File 
"/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop E item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol E runtestprotocol(item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol E reports.append(call_and_report(item, "call", log)) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report E call = call_runtest_hook(item, when, **kwds) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook E return CallInfo.from_call( E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call E result: Optional[TResult] = func() E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in E lambda: ihook(item=item, **kwds), when=when, reraise=reraise E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call E 
item.runtest() E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest E self.ihook.pytest_pyfunc_call(pyfuncitem=self) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call E result = testfunction(**testargs) E File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 183, in test_two_tower_model_with_custom_options E metrics = model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate E return super().evaluate( E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate E out = super().evaluate( E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate E tmp_logs = self.test_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step E outputs = model.test_step(data) E File 
"/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step E outputs = self.pre_eval_topk.call_outputs(outputs) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs E pred_top_scores, top_ids = self(queries, k=self._k) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call E return super().call(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call E outputs = call_fn(inputs, *args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler E return fn(*args, **kwargs) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call E top_scores, top_indices = tf.math.top_k(scores, k=k) E Node: 'top_k_index_block/TopKV2' E Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/pytest/main.py", line 5, in E raise SystemExit(pytest.console_main()) E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 187, in console_main E code = main() E File "/usr/local/lib/python3.8/dist-packages/_pytest/config/init.py", line 164, in main E ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in 
_hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main E return wrap_session(config, _main) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session E session.exitstatus = doit(config, session) or 0 E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main E config.hook.pytest_runtestloop(session=session) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop E item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol E runtestprotocol(item, nextitem=nextitem) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol E reports.append(call_and_report(item, "call", log)) E File 
"/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report E call = call_runtest_hook(item, when, **kwds) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook E return CallInfo.from_call( E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call E result: Optional[TResult] = func() E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in E lambda: ihook(item=item, **kwds), when=when, reraise=reraise E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call E item.runtest() E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest E self.ihook.pytest_pyfunc_call(pyfuncitem=self) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in call E return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec E return self._inner_hookexec(hook_name, methods, kwargs, firstresult) E File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall E res = hook_impl.function(*args) E File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call E result = testfunction(**testargs) E File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 183, in test_two_tower_model_with_custom_options E metrics = 
model.evaluate(data, batch_size=10, item_corpus=data, return_dict=True, steps=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate E return super().evaluate( E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate E out = super().evaluate( E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate E tmp_logs = self.test_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step E outputs = model.test_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step E outputs = self.pre_eval_topk.call_outputs(outputs) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs E pred_top_scores, top_ids = self(queries, k=self._k) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in call E return super().call(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call E outputs = call_fn(inputs, *args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler E return fn(*args, **kwargs) E File 
"/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call E top_scores, top_indices = tf.math.top_k(scores, k=k) E Node: 'top_k_index_block/TopKV2' E 2 root error(s) found. E (0) INVALID_ARGUMENT: input must have at least k columns. Had 9, needed 10 E [[{{node top_k_index_block/TopKV2}}]] E [[assert_greater_equal/Assert/AssertGuard/pivot_f/_22/_29]] E (1) INVALID_ARGUMENT: input must have at least k columns. Had 9, needed 10 E [[{{node top_k_index_block/TopKV2}}]] E 0 successful operations. E 0 derived errors ignored. [Op:__inference_test_function_736237]

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54: InvalidArgumentError
----------------------------- Captured stdout call -----------------------------

1/1 [==============================] - ETA: 0s - loss: 0.1169 - auc: 0.8293 - recall_at_5: 0.8000 - recall_at_10: 0.8000 - mrr_at_10: 0.3500 - ndcg_at_10: 0.4626 - regularization_loss: 7.5446e-05
1/1 [==============================] - 2s 2s/step - loss: 0.1169 - auc: 0.8293 - recall_at_5: 0.8000 - recall_at_10: 0.8000 - mrr_at_10: 0.3500 - ndcg_at_10: 0.4626 - regularization_loss: 7.5446e-05
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_2_layer_call_fn while saving (showing 5 of 22). These functions will not be directly callable after loading.
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
_______________________ test_two_tower_advanced_options ________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f220b0443d0>

def test_two_tower_advanced_options(ecommerce_data):
    train_ds, eval_ds = ecommerce_data, ecommerce_data
>   metrics = retrieval_tests_common.train_eval_two_tower_for_lastfm(
        train_ds,
        eval_ds,
        train_epochs=1,
        train_steps_per_epoch=None,
        eval_steps=None,
        train_batch_size=16,
        eval_batch_size=16,
        topk_metrics_cutoffs="10",
        log_to_wandb=False,
    )

tests/unit/tf/models/test_retrieval.py:280:


tests/common/tf/retrieval/retrieval_tests_common.py:174: in train_eval_two_tower_for_lastfm
    return train_eval_two_tower(
tests/common/tf/retrieval/retrieval_tests_common.py:118: in train_eval_two_tower
    metrics = runner.run(hparams)
tests/common/tf/retrieval/retrieval_utils.py:534: in run
    train_metrics = self.model.evaluate(
merlin/models/tf/models/base.py:1395: in evaluate
    return super().evaluate(
merlin/models/tf/models/base.py:876: in evaluate
    out = super().evaluate(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate
    tmp_logs = self.test_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:986: in _call
    return self._concrete_stateful_fn._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:497: in call
    outputs = execute.execute(


op_name = '__inference_test_function_793612', num_outputs = 5
inputs = [<tf.Tensor: shape=(), dtype=resource, value=<ResourceHandle(name="Resource-13534-at-0x76aaacd0", device="/job:localho...ce:GPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [3,8] ]")>>, ...]
attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x022\x11*\x030,1J\n\n\x06\n\x04\x00`\xcbD\n\x008\x01\x82\x01\x00')
ctx = <tensorflow.python.eager.context.Context object at 0x7f2438122cd0>
name = None

def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
  """Execute a TensorFlow operation.

  Args:
    op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
      execute.
    num_outputs: The number of outputs of the operation to fetch. (Explicitly
      provided instead of being inferred for performance reasons).
    inputs: A list of inputs to the operation. Each entry should be a Tensor, or
      a value which can be passed to the Tensor constructor to create one.
    attrs: A tuple with alternating string attr names and attr values for this
      operation.
    ctx: The value of context.context().
    name: Customized name for the operation.

  Returns:
    List of output Tensor objects. The list is empty if there are no outputs

  Raises:
    An exception on error.
  """
  device_name = ctx.device_name
  # pylint: disable=protected-access
  try:
    ctx.ensure_initialized()
>   tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                        inputs, attrs, num_outputs)

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
E
E   Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last):
E     File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E       return _run_code(code, main_globals, None,
E     File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E       exec(code, run_globals)
E     File "/usr/local/lib/python3.8/dist-packages/pytest/__main__.py", line 5, in <module>
E       raise SystemExit(pytest.console_main())
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 187, in console_main
E       code = main()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 164, in main
E       ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main
E       return wrap_session(config, _main)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session
E       session.exitstatus = doit(config, session) or 0
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main
E       config.hook.pytest_runtestloop(session=session)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop
E       item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol
E       runtestprotocol(item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol
E       reports.append(call_and_report(item, "call", log))
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report
E       call = call_runtest_hook(item, when, **kwds)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook
E       return CallInfo.from_call(
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call
E       result: Optional[TResult] = func()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in <lambda>
E       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call
E       item.runtest()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest
E       self.ihook.pytest_pyfunc_call(pyfuncitem=self)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
E       result = testfunction(**testargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 280, in test_two_tower_advanced_options
E       metrics = retrieval_tests_common.train_eval_two_tower_for_lastfm(
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/common/tf/retrieval/retrieval_tests_common.py", line 174, in train_eval_two_tower_for_lastfm
E       return train_eval_two_tower(
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/common/tf/retrieval/retrieval_tests_common.py", line 118, in train_eval_two_tower
E       metrics = runner.run(hparams)
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/common/tf/retrieval/retrieval_utils.py", line 534, in run
E       train_metrics = self.model.evaluate(
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate
E       return super().evaluate(
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate
E       out = super().evaluate(
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate
E       tmp_logs = self.test_function(iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function
E       return step_function(self, iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function
E       outputs = model.distribute_strategy.run(run_step, args=(data,))
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step
E       outputs = model.test_step(data)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step
E       outputs = self.pre_eval_topk.call_outputs(outputs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs
E       pred_top_scores, top_ids = self(queries, k=self._k)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in __call__
E       return super().__call__(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
E       outputs = call_fn(inputs, *args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E       return fn(*args, **kwargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call
E       top_scores, top_indices = tf.math.top_k(scores, k=k)
E   Node: 'top_k_index_block/TopKV2'
E   input must have at least k columns. Had 9, needed 10
E      [[{{node top_k_index_block/TopKV2}}]] [Op:__inference_test_function_793612]

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54: InvalidArgumentError
----------------------------- Captured stdout call -----------------------------

1/6 [====>.........................] - ETA: 19s - loss: 184.6657 - recall_at_10: 1.0000 - mrr_at_10: 0.2854 - ndcg_at_10: 0.4546 - regularization_loss: 6.0310e-04
4/6 [===================>..........] - ETA: 0s - loss: 160.5181 - recall_at_10: 1.0000 - mrr_at_10: 0.2854 - ndcg_at_10: 0.4546 - regularization_loss: 6.4865e-04
6/6 [==============================] - 4s 19ms/step - loss: 172.3312 - recall_at_10: 1.0000 - mrr_at_10: 0.2854 - ndcg_at_10: 0.4546 - regularization_loss: 6.7064e-04
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
------------------------------ Captured log call -------------------------------
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
WARNING absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, parallel_block_layer_call_fn, parallel_block_layer_call_and_return_conditional_losses, sequential_block_3_layer_call_fn while saving (showing 5 of 54). These functions will not be directly callable after loading.
WARNING tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
___________________________ test_mf_advanced_options ___________________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f2202c0e1c0>

def test_mf_advanced_options(ecommerce_data):
    train_ds, eval_ds = ecommerce_data, ecommerce_data
>   metrics = retrieval_tests_common.train_eval_mf_for_lastfm(
        train_ds,
        eval_ds,
        train_epochs=1,
        train_steps_per_epoch=None,
        eval_steps=None,
        train_batch_size=16,
        eval_batch_size=16,
        topk_metrics_cutoffs="10",
        log_to_wandb=False,
    )

tests/unit/tf/models/test_retrieval.py:299:


tests/common/tf/retrieval/retrieval_tests_common.py:201: in train_eval_mf_for_lastfm
    return train_eval_mf(
tests/common/tf/retrieval/retrieval_tests_common.py:158: in train_eval_mf
    metrics = runner.run(hparams)
tests/common/tf/retrieval/retrieval_utils.py:534: in run
    train_metrics = self.model.evaluate(
merlin/models/tf/models/base.py:1395: in evaluate
    return super().evaluate(
merlin/models/tf/models/base.py:876: in evaluate
    out = super().evaluate(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1756: in evaluate
    tmp_logs = self.test_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:986: in _call
    return self._concrete_stateful_fn._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:497: in call
    outputs = execute.execute(


op_name = '__inference_test_function_797416', num_outputs = 5
inputs = [<tf.Tensor: shape=(), dtype=resource, value=<ResourceHandle(name="Resource-13577-at-0x6c7ffb10", device="/job:localho...low::Var", dtype and shapes : "[ DType enum: 1, Shape: [?,64] ]")>>, <tf.Tensor: shape=(), dtype=int32, numpy=10>, ...]
attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x022\x11*\x030,1J\n\n\x06\n\x04\x00`\xcbD\n\x008\x01\x82\x01\x00')
ctx = <tensorflow.python.eager.context.Context object at 0x7f2438122cd0>
name = None

def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
  """Execute a TensorFlow operation.

  Args:
    op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
      execute.
    num_outputs: The number of outputs of the operation to fetch. (Explicitly
      provided instead of being inferred for performance reasons).
    inputs: A list of inputs to the operation. Each entry should be a Tensor, or
      a value which can be passed to the Tensor constructor to create one.
    attrs: A tuple with alternating string attr names and attr values for this
      operation.
    ctx: The value of context.context().
    name: Customized name for the operation.

  Returns:
    List of output Tensor objects. The list is empty if there are no outputs

  Raises:
    An exception on error.
  """
  device_name = ctx.device_name
  # pylint: disable=protected-access
  try:
    ctx.ensure_initialized()
>   tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                        inputs, attrs, num_outputs)

E   tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
E
E   Detected at node 'top_k_index_block/TopKV2' defined at (most recent call last):
E     File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E       return _run_code(code, main_globals, None,
E     File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E       exec(code, run_globals)
E     File "/usr/local/lib/python3.8/dist-packages/pytest/__main__.py", line 5, in <module>
E       raise SystemExit(pytest.console_main())
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 187, in console_main
E       code = main()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 164, in main
E       ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 315, in pytest_cmdline_main
E       return wrap_session(config, _main)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 268, in wrap_session
E       session.exitstatus = doit(config, session) or 0
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 322, in _main
E       config.hook.pytest_runtestloop(session=session)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 347, in pytest_runtestloop
E       item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 111, in pytest_runtest_protocol
E       runtestprotocol(item, nextitem=nextitem)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 130, in runtestprotocol
E       reports.append(call_and_report(item, "call", log))
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 219, in call_and_report
E       call = call_runtest_hook(item, when, **kwds)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 258, in call_runtest_hook
E       return CallInfo.from_call(
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 338, in from_call
E       result: Optional[TResult] = func()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 259, in <lambda>
E       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 166, in pytest_runtest_call
E       item.runtest()
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 1761, in runtest
E       self.ihook.pytest_pyfunc_call(pyfuncitem=self)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 265, in __call__
E       return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
E       return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E     File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
E       res = hook_impl.function(*args)
E     File "/usr/local/lib/python3.8/dist-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
E       result = testfunction(**testargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py", line 299, in test_mf_advanced_options
E       metrics = retrieval_tests_common.train_eval_mf_for_lastfm(
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/common/tf/retrieval/retrieval_tests_common.py", line 201, in train_eval_mf_for_lastfm
E       return train_eval_mf(
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/common/tf/retrieval/retrieval_tests_common.py", line 158, in train_eval_mf
E       metrics = runner.run(hparams)
E     File "/var/jenkins_home/workspace/merlin_models/models/tests/common/tf/retrieval/retrieval_utils.py", line 534, in run
E       train_metrics = self.model.evaluate(
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 1395, in evaluate
E       return super().evaluate(
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 876, in evaluate
E       out = super().evaluate(
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1756, in evaluate
E       tmp_logs = self.test_function(iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1557, in test_function
E       return step_function(self, iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1546, in step_function
E       outputs = model.distribute_strategy.run(run_step, args=(data,))
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1535, in run_step
E       outputs = model.test_step(data)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 684, in test_step
E       outputs = self.pre_eval_topk.call_outputs(outputs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 254, in call_outputs
E       pred_top_scores, top_ids = self(queries, k=self._k)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/config/schema.py", line 58, in __call__
E       return super().__call__(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 60, in error_handler
E       return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
E       outputs = call_fn(inputs, *args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
E       return fn(*args, **kwargs)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/index.py", line 229, in call
E       top_scores, top_indices = tf.math.top_k(scores, k=k)
E   Node: 'top_k_index_block/TopKV2'
E   input must have at least k columns. Had 8, needed 10
E      [[{{node top_k_index_block/TopKV2}}]] [Op:__inference_test_function_797416]

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54: InvalidArgumentError
----------------------------- Captured stdout call -----------------------------

1/6 [====>.........................] - ETA: 8s - loss: 2.8899 - recall_at_10: 1.0000 - mrr_at_10: 0.2507 - ndcg_at_10: 0.4223 - regularization_loss: 1.2419e-06
6/6 [==============================] - 2s 6ms/step - loss: 2.8490 - recall_at_10: 1.0000 - mrr_at_10: 0.2507 - ndcg_at_10: 0.4223 - regularization_loss: 1.4957e-06
----------------------------- Captured stderr call -----------------------------
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
------------------------------ Captured log call -------------------------------
WARNING  merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
WARNING  absl:save.py:233 Found untraced functions such as model_context_layer_call_fn, model_context_layer_call_and_return_conditional_losses, sequential_block_3_layer_call_fn, sequential_block_3_layer_call_and_return_conditional_losses, concat_features_1_layer_call_fn while saving (showing 5 of 14). These functions will not be directly callable after loading.
WARNING  tensorflow:load.py:167 No training configuration found in save file, so the model was not compiled. Compile it manually.
WARNING  merlin_models:api.py:446 The sampler InBatchSampler returned no samples for this batch.
___________ test_embedding_features_yoochoose_infer_embedding_sizes ____________

tabular_schema = [{'name': 'user_id', 'tags': {<Tags.ID: 'id'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: ... {'domain': {'min': 0.0, 'max': 0.4079650044441223}}, 'dtype': dtype('float64'), 'is_list': False, 'is_ragged': False}] torch_tabular_data = {'categories': tensor([[29, 8, 16, 10], [28, 5, 12, 5], [14, 13, 14, 15], [20, 19, 14, 26],... 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), ...}

def test_embedding_features_yoochoose_infer_embedding_sizes(tabular_schema, torch_tabular_data):
    schema = tabular_schema.select_by_tag(Tags.CATEGORICAL)

    emb_module = ml.EmbeddingFeatures.from_schema(
        schema, infer_embedding_sizes=True, infer_embedding_sizes_multiplier=3.0
    )
>       assert emb_module.embedding_tables["item_id"].weight.shape[1] == 46
E       assert 9 == 46

tests/unit/torch/features/test_embedding.py:114: AssertionError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 58 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings 
tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings 
tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 2 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/test_optimizer.py::test_lazy_adam_for_large_embeddings[True] tests/unit/tf/blocks/test_optimizer.py::test_lazy_adam_for_large_embeddings[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/blocks/optimizer.py:431: UserWarning: All embedding tables in given ParallelBlock embeddings have smaller input dim than threshold 1000, thus return empty list. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file66coxdfy.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 24 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
  /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
    super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
    warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 16 failed, 747 passed, 12 skipped, 1208 warnings in 1472.52s (0:24:32) ====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins387030314415713086.sh

nvidia-merlin-bot avatar Oct 24 '22 13:10 nvidia-merlin-bot
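The `InvalidArgumentError` in the CI log above ("input must have at least k columns. Had 8, needed 10") is the direct consequence of shrinking the item-id domain below the metric cutoff: `recall_at_10` asks `tf.math.top_k` for 10 candidates, but only 8 item columns remain after the schema change. A minimal sketch of the same constraint, written with NumPy rather than TensorFlow for illustration (`top_k` here is a hypothetical stand-in, not the Merlin API):

```python
import numpy as np

def top_k(scores, k):
    # Stand-in for tf.math.top_k: the score matrix must have at least
    # k columns (candidate items), otherwise top-k selection is undefined.
    n_cols = scores.shape[-1]
    if n_cols < k:
        raise ValueError(f"input must have at least k columns. Had {n_cols}, needed {k}")
    idx = np.argsort(-scores, axis=-1)[:, :k]  # column indices of the k best scores
    return np.take_along_axis(scores, idx, axis=-1), idx

scores = np.random.rand(4, 8)    # 4 queries, only 8 candidate items after shrinking
vals, idx = top_k(scores, k=5)   # fine: 5 <= 8
# top_k(scores, k=10) would raise, exactly like recall_at_10 in the failing test
```

So any reduction of the synthetic item cardinality has to stay above the largest top-k cutoff used by the evaluation metrics.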

Will revisit this in future work, targeting the slowest tests first. The approach in this PR of reducing the cardinality of some of the example data requires more careful work, since some of the unit tests depend on these values.
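One way that more careful reduction could be guarded, sketched below under the assumption that the largest top-k cutoff in the suite is k=10 (the names `LARGEST_TOP_K` and `shrink_cardinality` are illustrative, not existing Merlin helpers):

```python
# Illustrative sketch (not existing Merlin code): when shrinking a synthetic
# schema's integer domains to speed up tests, keep the cardinality above the
# largest top-k cutoff used by the ranking metrics.
LARGEST_TOP_K = 10  # assumption: recall_at_10 / ndcg_at_10 are the largest cutoffs

def shrink_cardinality(max_value: int, factor: float) -> int:
    """Scale a domain's max value down by `factor`, but never below the
    smallest size that still supports top-k evaluation (k candidates plus
    one, leaving room for a padding/OOV index)."""
    floor = LARGEST_TOP_K + 1
    return max(floor, int(max_value * factor))
```

With a guard like this, most domains shrink aggressively while ranking metrics keep enough candidates; tests that assert on exact values derived from the domains (like the `assert 9 == 46` embedding-size failure in the CI log) would still need their expected values updated to match the smaller schemas.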

oliverholworthy avatar Feb 13 '23 16:02 oliverholworthy