
[WIP] Add horovod for distributed training

Open edknv opened this issue 3 years ago • 1 comment

WIP. Moved from #783 because CI was stuck.

edknv avatar Oct 26 '22 02:10 edknv
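For context on what the PR is wiring up: Horovod does data-parallel training by launching one process per GPU and averaging gradients across workers with allreduce. Below is a minimal sketch of the standard `horovod.tensorflow.keras` pattern; it is illustrative only and not taken from this PR, so the actual Merlin Models integration may differ.

```python
# Minimal sketch of the standard Horovod + Keras data-parallel setup.
# Illustrative only -- not the implementation in this PR.
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()  # one process per GPU, launched e.g. via `horovodrun -np <gpus> python train.py`

# Pin each worker process to its own GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are allreduce-averaged across workers at each step.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss="mse", optimizer=opt)

callbacks = [
    # Broadcast the initial weights from rank 0 so every worker starts in sync.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]
# model.fit(train_data, callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```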

CI Results
GitHub pull request #825 of commit 7f8e8285053d8edcf3d882a1b22de44265213343, no merge conflicts.
Running as SYSTEM
Setting status of 7f8e8285053d8edcf3d882a1b22de44265213343 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1595/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 7f8e8285053d8edcf3d882a1b22de44265213343^{commit} # timeout=10
Checking out Revision 7f8e8285053d8edcf3d882a1b22de44265213343 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7f8e8285053d8edcf3d882a1b22de44265213343 # timeout=10
Commit message: "add test for saving model"
 > git rev-list --no-walk a38c681cdf33b58ee72e39bfe19d0597398085e8 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins5358632594242111848.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 782 items / 1 skipped

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 24%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 49%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... 
[ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s............................. [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 82%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary =============================== ../../../../../usr/lib/python3/dist-packages/requests/init.py:89 /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_filefa4t5l97.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/tf/horovod/__init__.py:18: could not import 'horovod': No module named 'horovod'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 770 passed, 13 skipped, 1217 warnings in 1474.93s (0:24:34) ==========
./ci/test_unit.sh: line 23: horovodrun: command not found
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins14134340838537099359.sh

nvidia-merlin-bot avatar Oct 26 '22 22:10 nvidia-merlin-bot
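Two things stand out in the failed run above: the horovod unit tests were skipped because the package could not be imported, and the CI script then stopped at `horovodrun: command not found`, so both the Python package and the launcher are missing from the CI image. Guarding horovod-only tests is commonly done with `pytest.importorskip`, roughly as in the hypothetical sketch below (the repo's actual conftest/markers may differ):

```python
# Hypothetical sketch of skipping horovod-only tests when horovod is absent;
# the actual marker/conftest setup in merlin-models may differ.
import pytest

# If the import fails, every test in this module is reported as SKIPPED,
# matching the "could not import 'horovod'" entry in the CI summary.
hvd = pytest.importorskip("horovod.tensorflow.keras")


def test_single_process_world():
    hvd.init()
    # Outside horovodrun/mpirun there is a single worker: rank 0 of size 1.
    assert hvd.size() == 1
    assert hvd.rank() == 0
```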

CI Results
GitHub pull request #825 of commit 4002c027519c08928eed723873f94c5998c8264d, no merge conflicts.
Running as SYSTEM
Setting status of 4002c027519c08928eed723873f94c5998c8264d to PENDING with url https://10.20.13.93:8080/job/merlin_models/1596/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 4002c027519c08928eed723873f94c5998c8264d^{commit} # timeout=10
Checking out Revision 4002c027519c08928eed723873f94c5998c8264d (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4002c027519c08928eed723873f94c5998c8264d # timeout=10
Commit message: "adjust learning rate for multi optimizer"
 > git rev-list --no-walk 7f8e8285053d8edcf3d882a1b22de44265213343 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins10469993458635521327.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 782 items / 1 skipped

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 24%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 49%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... 
[ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s............................. [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 82%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary =============================== ../../../../../usr/lib/python3/dist-packages/requests/init.py:89 /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/__autograph_generated_filextgj4flq.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)
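Note: this deprecation message is the one the standard logging module raises for the old Logger.warn() alias; wherever it originates inside the traced model code, the suggested fix is the warning() spelling. A tiny illustrative sketch (logger name and message are made up):

```python
import logging

logger = logging.getLogger("merlin.models")

# logger.warn("top-k candidate set is small")   # deprecated alias, triggers the message
logger.warning("top-k candidate set is small")  # preferred spelling
```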

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):
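Note: Python 3.10 removes the ABC aliases from the top-level collections namespace, so the isinstance check in tabular.py needs collections.abc. A minimal sketch (the feature_names value is illustrative):

```python
import collections.abc

feature_names = ["user_id", "item_id"]  # illustrative

# Deprecated: isinstance(feature_names, collections.Sequence)
if isinstance(feature_names, collections.abc.Sequence):
    print("feature_names behaves like a sequence")
```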

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)
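Note: the Keras backend renamed this helper from random_binomial to random_bernoulli. A minimal sketch of the rename (shape and probability are illustrative):

```python
import tensorflow as tf

# Deprecated: tf.keras.backend.random_binomial(shape=(4, 10), p=0.3)
mask = tf.keras.backend.random_bernoulli(shape=(4, 10), p=0.3)
print(mask.shape)
```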

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).__init__(name, **kwargs)
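Note: Keras optimizers deprecated the lr alias in favour of learning_rate. A minimal sketch (the learning-rate value is illustrative):

```python
import tensorflow as tf

# Deprecated: tf.keras.optimizers.SGD(lr=0.01)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print(optimizer.get_config()["learning_rate"])
```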

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(
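Note: this family of warnings (repeated below for the Wide&Deep and transformer tests) appears when the sparse IndexedSlices gradient of an embedding lookup has to be densified and its dense shape is not known statically. A self-contained illustration of where such a sparse gradient comes from; this is the generic pattern, not the model code from these tests:

```python
import tensorflow as tf

table = tf.Variable(tf.random.normal((1000, 16)))
ids = tf.constant([1, 2, 3, 2])

with tf.GradientTape() as tape:
    gathered = tf.gather(table, ids)          # embedding-lookup style op with a sparse gradient
    loss = tf.reduce_sum(tf.square(gathered))

grad = tape.gradient(loss, table)
# The gradient is an IndexedSlices; the warning fires when TF has to convert such a
# gradient to a dense tensor whose shape it cannot determine statically.
print(type(grad).__name__)
```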

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}
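Note: the PyTorch warning suggests stacking the list into a single ndarray before calling torch.tensor. A sketch of that change with illustrative values:

```python
import numpy as np
import torch

values = [np.ones(3), np.zeros(3)]  # illustrative list of equal-length arrays

# Slow path flagged above: torch.tensor(values)
tensor = torch.tensor(np.array(values))
print(tensor.shape)
```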

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)
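Note: the cudf/pandas warning asks for an explicit dtype when the Series being built can be empty. A minimal sketch (the mask value is illustrative):

```python
import pandas as pd

mask = []                               # an empty input is what triggers the warning
series = pd.Series(mask, dtype="bool")  # explicit dtype, as the warning suggests
print(series.dtype)
```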

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/tf/horovod/__init__.py:18: could not import 'horovod': No module named 'horovod'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 770 passed, 13 skipped, 1217 warnings in 1462.93s (0:24:22) ==========
./ci/test_unit.sh: line 23: horovodrun: command not found
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash cd /var/jenkins_home/ CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins12290184539263852200.sh
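Note: the build fails here not because of test failures (770 passed) but because horovod is not installed in the CI image: the horovod unit tests are skipped at import time, and the later `horovodrun` step in ci/test_unit.sh cannot find the executable. A minimal sketch of the import-time guard that produces the SKIPPED entry above (the exact guard in the repository may differ, and the test body is illustrative):

```python
import pytest

# Skipping at import time is what produces the "could not import 'horovod'"
# entry in the short test summary above.
hvd = pytest.importorskip("horovod.tensorflow.keras")


def test_distributed_optimizer_is_available():
    # Smoke check only; real multi-worker runs still go through `horovodrun`.
    assert hasattr(hvd, "DistributedOptimizer")
```

Making the `horovodrun` step itself pass would additionally require a TensorFlow-enabled horovod build installed in the CI image, which is outside what this log shows.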

nvidia-merlin-bot avatar Oct 27 '22 00:10 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 05b5e8eeb2b36035c2799b865dc1d052fdc32c77, no merge conflicts.
Running as SYSTEM
Setting status of 05b5e8eeb2b36035c2799b865dc1d052fdc32c77 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1597/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 05b5e8eeb2b36035c2799b865dc1d052fdc32c77^{commit} # timeout=10
Checking out Revision 05b5e8eeb2b36035c2799b865dc1d052fdc32c77 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 05b5e8eeb2b36035c2799b865dc1d052fdc32c77 # timeout=10
Commit message: "fix world size in test"
 > git rev-list --no-walk 4002c027519c08928eed723873f94c5998c8264d # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins9869607412408033195.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 782 items / 1 skipped

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 24%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 49%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... 
[ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s............................. [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 82%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89 /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,
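Note: the six Pillow deprecations above all describe the same rename: the resampling filters move to the PIL.Image.Resampling enum (available from Pillow 9.1, required in Pillow 10). A minimal sketch with illustrative image sizes:

```python
from PIL import Image

img = Image.new("RGB", (64, 64))

# Deprecated: img.resize((32, 32), Image.BILINEAR)
small = img.resize((32, 32), Image.Resampling.BILINEAR)
print(small.size)
```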

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/__autograph_generated_fileyhl2xn68.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/tf/horovod/__init__.py:18: could not import 'horovod': No module named 'horovod'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 770 passed, 13 skipped, 1217 warnings in 1479.84s (0:24:39) ==========
./ci/test_unit.sh: line 23: horovodrun: command not found
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash cd /var/jenkins_home/ CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins13478634521628721540.sh

nvidia-merlin-bot avatar Oct 27 '22 01:10 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit ff3f49d07988198fdfd968c38862f5679c55fe89, no merge conflicts.
Running as SYSTEM
Setting status of ff3f49d07988198fdfd968c38862f5679c55fe89 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1598/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse ff3f49d07988198fdfd968c38862f5679c55fe89^{commit} # timeout=10
Checking out Revision ff3f49d07988198fdfd968c38862f5679c55fe89 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ff3f49d07988198fdfd968c38862f5679c55fe89 # timeout=10
Commit message: "test retrieval model"
 > git rev-list --no-walk 05b5e8eeb2b36035c2799b865dc1d052fdc32c77 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins1681800179966325976.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 782 items / 1 skipped

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 18%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 24%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 29%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 49%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... 
[ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s............................. [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 82%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89 /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/__autograph_generated_filelz4014yk.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html =========================== short test summary info ============================ SKIPPED [1] tests/unit/tf/horovod/init.py:18: could not import 'horovod': No module named 'horovod' SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test. ========= 770 passed, 13 skipped, 1217 warnings in 1478.92s (0:24:38) ========== Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Collecting horovod[tensorflow] Downloading horovod-0.26.1.tar.gz (3.5 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 23.8 MB/s eta 0:00:00 Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Requirement already satisfied: cloudpickle in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (2.2.0) Requirement already satisfied: psutil in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (5.9.2) Requirement already satisfied: pyyaml in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (5.4.1) Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (21.3) Requirement already satisfied: cffi>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (1.15.1) Requirement already satisfied: tensorflow in /var/jenkins_home/.local/lib/python3.8/site-packages (from horovod[tensorflow]) (2.6.2) Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.4.0->horovod[tensorflow]) (2.21) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->horovod[tensorflow]) (3.0.9) Collecting h5py~=3.1.0 Downloading h5py-3.1.0-cp38-cp38-manylinux1_x86_64.whl (4.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 115.7 MB/s eta 0:00:00 Collecting tensorflow-estimator<2.7,>=2.6.0 Downloading tensorflow_estimator-2.6.0-py2.py3-none-any.whl (462 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 462.9/462.9 kB 105.2 MB/s eta 0:00:00 Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (1.6.3) Collecting termcolor~=1.1.0 Downloading termcolor-1.1.0.tar.gz (3.9 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting tensorboard<2.7,>=2.6.0 Downloading tensorboard-2.6.0-py3-none-any.whl (5.6 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.6/5.6 MB 38.0 MB/s eta 0:00:00 Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (0.2.0) Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (3.3.0) Collecting absl-py~=0.10 Downloading 
absl_py-0.15.0-py3-none-any.whl (132 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.0/132.0 kB 137.5 MB/s eta 0:00:00 Requirement already satisfied: grpcio<2.0,>=1.37.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (1.41.0) Requirement already satisfied: clang~=5.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from tensorflow->horovod[tensorflow]) (5.0) Requirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (1.1.2) Collecting numpy~=1.19.2 Downloading numpy-1.19.5-cp38-cp38-manylinux2010_x86_64.whl (14.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.9/14.9 MB 109.3 MB/s eta 0:00:00 Requirement already satisfied: wrapt~=1.12.1 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (1.12.1) Collecting typing-extensions~=3.7.4 Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB) Requirement already satisfied: wheel~=0.35 in /var/jenkins_home/.local/lib/python3.8/site-packages (from tensorflow->horovod[tensorflow]) (0.37.1) Requirement already satisfied: flatbuffers~=1.12.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (1.12) Requirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (0.4.0) Collecting keras<2.7,>=2.6.0 Downloading keras-2.6.0-py2.py3-none-any.whl (1.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 129.5 MB/s eta 0:00:00 Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.8/dist-packages (from tensorflow->horovod[tensorflow]) (3.19.5) Requirement already satisfied: six~=1.15.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from tensorflow->horovod[tensorflow]) (1.15.0) Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.8/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (0.4.6) Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (0.6.1) Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.8/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (3.4.1) Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (1.8.1) Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.8/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (2.2.2) Collecting google-auth<2,>=1.6.3 Downloading google_auth-1.35.0-py2.py3-none-any.whl (152 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 152.9/152.9 kB 165.5 MB/s eta 0:00:00 Requirement already satisfied: setuptools>=41.0.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (62.4.0) Requirement already satisfied: requests<3,>=2.21.0 in /usr/lib/python3/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (2.22.0) Collecting cachetools<5.0,>=2.0.0 Downloading cachetools-4.2.4-py3-none-any.whl (10 kB) Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.8/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (4.7.2) Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.8/dist-packages (from 
google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (0.2.8) Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.8/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (1.3.1) Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.8/dist-packages (from markdown>=2.6.8->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (4.12.0) Requirement already satisfied: MarkupSafe>=2.1.1 in /usr/local/lib/python3.8/dist-packages (from werkzeug>=0.11.15->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (2.1.1) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (3.8.1) Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.8/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (0.4.8) Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.7,>=2.6.0->tensorflow->horovod[tensorflow]) (3.2.1) Building wheels for collected packages: horovod, termcolor Building wheel for horovod (setup.py): started Building wheel for horovod (setup.py): still running... Building wheel for horovod (setup.py): finished with status 'done' Created wheel for horovod: filename=horovod-0.26.1-cp38-cp38-linux_x86_64.whl size=35398804 sha256=2e2a8a7236905616e5aaf09968a5317666634f2cf87b4bd7af7405b8c417f3e8 Stored in directory: /tmp/pip-ephem-wheel-cache-c40und26/wheels/eb/0b/90/d53058f75f3ae3db9557f3e55dd8c016b2397e9b38557c8b66 Building wheel for termcolor (setup.py): started Building wheel for termcolor (setup.py): finished with status 'done' Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4832 sha256=8215d0230d7e4e15232632069c395542f4c1b8dc79600b19889ccc9bac0a5b1f Stored in directory: /tmp/pip-ephem-wheel-cache-c40und26/wheels/a0/16/9c/5473df82468f958445479c59e784896fa24f4a5fc024b0f501 Successfully built horovod termcolor Installing collected packages: typing-extensions, termcolor, tensorflow-estimator, keras, numpy, cachetools, absl-py, horovod, h5py, google-auth, tensorboard Attempting uninstall: typing-extensions Found existing installation: typing_extensions 4.3.0 Uninstalling typing_extensions-4.3.0: Successfully uninstalled typing_extensions-4.3.0 Attempting uninstall: termcolor Found existing installation: termcolor 2.0.1 Uninstalling termcolor-2.0.1: Successfully uninstalled termcolor-2.0.1 Attempting uninstall: tensorflow-estimator Found existing installation: tensorflow-estimator 2.9.0 Uninstalling tensorflow-estimator-2.9.0: Successfully uninstalled tensorflow-estimator-2.9.0 Attempting uninstall: keras Found existing installation: keras 2.9.0 Uninstalling keras-2.9.0: Successfully uninstalled keras-2.9.0 Attempting uninstall: numpy Found existing installation: numpy 1.22.4 Uninstalling numpy-1.22.4: ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/usr/local/bin/f2py' Consider using the --user option or check the permissions.

[notice] A new release of pip available: 22.1.2 -> 22.3
[notice] To update, run: pip install --upgrade pip
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins17781789272837666140.sh

nvidia-merlin-bot avatar Oct 27 '22 03:10 nvidia-merlin-bot
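This second run fails for a different reason: the tests pass again, but the follow-up `pip install horovod[tensorflow]` step resolves against the TensorFlow 2.6.2 already present in the user site, begins downgrading keras, tensorflow-estimator and numpy to the 2.6-era pins, and then aborts with `Permission denied: '/usr/local/bin/f2py'` because uninstalling the system numpy needs write access to /usr/local. The next commit ("install pip in user env") points toward installing into the user site instead; below is a rough sketch of that kind of workaround, with the exact flags being an assumption rather than the commands actually used in CI:

```bash
# Hypothetical workaround sketch -- not the exact commands used in this PR.
# Build Horovod against the existing TensorFlow and install it into the user
# site-packages, so pip never has to touch /usr/local (the Errno 13 above).
HOROVOD_WITH_TENSORFLOW=1 python -m pip install --user --no-cache-dir horovod
horovodrun --check-build  # confirm the TensorFlow extension was compiled
```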

rerun tests

edknv avatar Oct 27 '22 03:10 edknv

Click to view CI Results
GitHub pull request #825 of commit 202c53e12deacb4c00ef4512d14f53ad5d8bc226, no merge conflicts.
Running as SYSTEM
Setting status of 202c53e12deacb4c00ef4512d14f53ad5d8bc226 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1600/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 202c53e12deacb4c00ef4512d14f53ad5d8bc226^{commit} # timeout=10
Checking out Revision 202c53e12deacb4c00ef4512d14f53ad5d8bc226 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 202c53e12deacb4c00ef4512d14f53ad5d8bc226 # timeout=10
Commit message: "install pip in user env"
 > git rev-list --no-walk 202c53e12deacb4c00ef4512d14f53ad5d8bc226 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins6900312402310839713.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 167 items / 49 errors / 1 skipped

==================================== ERRORS ==================================== _________________ ERROR collecting tests/unit/tf/test_core.py __________________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/test_core.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/test_core.py:4: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ________________ ERROR collecting tests/unit/tf/test_loader.py _________________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/test_loader.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/test_loader.py:26: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______________ ERROR collecting tests/unit/tf/test_public_api.py _______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/test_public_api.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/test_public_api.py:19: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _____________ ERROR collecting tests/unit/tf/blocks/test_cross.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/blocks/test_cross.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/blocks/test_cross.py:20: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______________ ERROR collecting tests/unit/tf/blocks/test_dlrm.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/blocks/test_dlrm.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback (shown once here; it repeats identically for every test module listed below, differing only in the first frame or two where each test module performs its own merlin.models.tf import):
/usr/lib/python3.8/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/unit/tf/blocks/test_dlrm.py:19: in <module>
    import merlin.models.tf as mm
merlin/models/tf/__init__.py:34: in <module>
    from merlin.models.tf.blocks.dlrm import DLRMBlock
merlin/models/tf/blocks/dlrm.py:19: in <module>
    from merlin.models.tf.blocks.interaction import DotProductInteraction
merlin/models/tf/blocks/interaction.py:26: in <module>
    from merlin.models.tf.inputs.base import InputBlockV2
merlin/models/tf/inputs/base.py:26: in <module>
    from merlin.models.tf.inputs.embedding import (
merlin/models/tf/inputs/embedding.py:39: in <module>
    from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse
merlin/models/tf/transforms/tensor.py:19: in <module>
    from keras.layers.preprocessing import preprocessing_utils as utils
E   ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/__init__.py)

pytest reports the same "ERROR collecting <module> ... ImportError while importing test module ... Hint: make sure your test modules/packages have valid Python names." for each of:
tests/unit/tf/blocks/test_dlrm.py, tests/unit/tf/blocks/test_interactions.py, tests/unit/tf/blocks/test_mlp.py, tests/unit/tf/blocks/test_optimizer.py, tests/unit/tf/blocks/retrieval/test_base.py, tests/unit/tf/blocks/retrieval/test_matrix_factorization.py, tests/unit/tf/blocks/retrieval/test_two_tower.py, tests/unit/tf/core/test_aggregation.py, tests/unit/tf/core/test_base.py, tests/unit/tf/core/test_combinators.py, tests/unit/tf/core/test_encoder.py, tests/unit/tf/core/test_index.py, tests/unit/tf/core/test_prediction.py, tests/unit/tf/core/test_tabular.py, tests/unit/tf/inputs/test_continuous.py, tests/unit/tf/inputs/test_embedding.py, tests/unit/tf/inputs/test_tabular.py, tests/unit/tf/layers/test_queue.py, tests/unit/tf/losses/test_losses.py, tests/unit/tf/metrics/test_metrics_popularity.py, tests/unit/tf/metrics/test_metrics_topk.py, tests/unit/tf/models/test_base.py, tests/unit/tf/models/test_benchmark.py, tests/unit/tf/models/test_ranking.py, tests/unit/tf/models/test_retrieval.py, tests/unit/tf/outputs/test_base.py, tests/unit/tf/outputs/test_classification.py, tests/unit/tf/outputs/test_contrastive.py, tests/unit/tf/outputs/test_regression.py, tests/unit/tf/outputs/test_sampling.py, tests/unit/tf/prediction_tasks/test_classification.py, tests/unit/tf/prediction_tasks/test_multi_task.py (log truncated)

nvidia-merlin-bot avatar Oct 27 '22 03:10 nvidia-merlin-bot
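Every collection error above is the same failure: `merlin/models/tf/transforms/tensor.py` imports `keras.layers.preprocessing.preprocessing_utils`, which the Keras release installed on this runner (the one that accompanied TensorFlow 2.6.2, replaced by tensorflow==2.9.2 in the next run below) does not provide. A small self-contained check along the following lines can confirm the mismatch before launching the whole suite; this is a sketch only, and the script name is hypothetical rather than part of the repository:

```python
# check_keras_compat.py -- hypothetical helper name; a minimal sketch for
# confirming the keras/TensorFlow mismatch behind the collection errors above.
import importlib.util

import keras
import tensorflow as tf

print(f"tensorflow {tf.__version__}, keras {keras.__version__}")

# merlin/models/tf/transforms/tensor.py:19 does:
#   from keras.layers.preprocessing import preprocessing_utils as utils
# so check whether that submodule actually exists in the installed keras.
spec = importlib.util.find_spec("keras.layers.preprocessing.preprocessing_utils")
if spec is None:
    raise SystemExit(
        "keras.layers.preprocessing.preprocessing_utils is missing; the installed "
        "keras appears too old for merlin.models.tf (the run above still had the "
        "keras that came with tensorflow 2.6.2). Installing tensorflow==2.9.x "
        "pulls in a matching keras."
    )
print("found:", spec.origin)
```

On the failing environment this should exit with the explanatory message; on an environment with TensorFlow 2.9.x it should print the resolved module path instead.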

Click to view CI Results
GitHub pull request #825 of commit dc2aa26dca453669fb0294911c421534ec67a235, no merge conflicts.
Running as SYSTEM
Setting status of dc2aa26dca453669fb0294911c421534ec67a235 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1601/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse dc2aa26dca453669fb0294911c421534ec67a235^{commit} # timeout=10
Checking out Revision dc2aa26dca453669fb0294911c421534ec67a235 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f dc2aa26dca453669fb0294911c421534ec67a235 # timeout=10
Commit message: "install horovod in ci runner"
 > git rev-list --no-walk 202c53e12deacb4c00ef4512d14f53ad5d8bc226 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins9810798098761799471.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version =2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version =2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version =1.4.0; python_version jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting tensorflow==2.9.2
  Downloading tensorflow-2.9.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (511.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 511.8/511.8 MB 111.0 MB/s eta 0:00:00
Collecting horovod[tensorflow]
  Downloading horovod-0.26.1.tar.gz (3.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 103.0 MB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Collecting tensorflow-estimator<2.10,>=2.9.0rc0
  Downloading tensorflow_estimator-2.9.0-py2.py3-none-any.whl (438 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 438.7/438.7 kB 132.1 MB/s eta 0:00:00
Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.6.3)
Requirement already satisfied: gast=0.2.1 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (0.4.0)
Collecting keras<2.10,>=2.9.0rc0
  Downloading keras-2.9.0-py2.py3-none-any.whl (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 121.8 MB/s eta 0:00:00
Requirement already satisfied: six>=1.12.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from tensorflow==2.9.2) (1.15.0)
Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.12.1)
Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (0.2.0)
Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (3.7.4.3)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.1.0)
Requirement already satisfied: absl-py>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.2.0)
Requirement already satisfied: flatbuffers=1.12 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.12)
Requirement already satisfied: grpcio=1.24.3 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.41.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (3.3.0)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (from tensorflow==2.9.2) (62.4.0)
Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (3.7.0)
Requirement already satisfied: protobuf=3.9.2 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (3.19.5)
Requirement already satisfied: numpy>=1.20 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.22.4)
Requirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (1.1.2)
Requirement already satisfied: libclang>=13.0.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (14.0.6)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (0.27.0)
Requirement already satisfied: tensorboard=2.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (2.9.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from tensorflow==2.9.2) (21.3)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (2.2.0)
Requirement already satisfied: psutil in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (5.9.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (5.4.1)
Requirement already satisfied: cffi>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from horovod[tensorflow]) (1.15.1)
Requirement already satisfied: wheel=0.23.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from astunparse>=1.6.0->tensorflow==2.9.2) (0.37.1)
Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.4.0->horovod[tensorflow]) (2.21)
Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.8/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (2.2.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.8/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (3.4.1)
Requirement already satisfied: requests=2.21.0 in /usr/lib/python3/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (2.22.0)
Requirement already satisfied: google-auth=1.6.3 in /usr/local/lib/python3.8/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (2.11.1)
Requirement already satisfied: google-auth-oauthlib=0.4.1 in /usr/local/lib/python3.8/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (0.4.6)
Requirement already satisfied: tensorboard-data-server=0.6.0 in /usr/local/lib/python3.8/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (0.6.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from tensorboard=2.9->tensorflow==2.9.2) (1.8.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->tensorflow==2.9.2) (3.0.9)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.8/dist-packages (from google-auth=1.6.3->tensorboard=2.9->tensorflow==2.9.2) (0.2.8)
Requirement already satisfied: rsa=3.1.4 in /usr/local/lib/python3.8/dist-packages (from google-auth=1.6.3->tensorboard=2.9->tensorflow==2.9.2) (4.7.2)
Requirement already satisfied: cachetools=2.0.0 in /usr/local/lib/python3.8/dist-packages (from google-auth=1.6.3->tensorboard=2.9->tensorflow==2.9.2) (5.2.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.8/dist-packages (from google-auth-oauthlib=0.4.1->tensorboard=2.9->tensorflow==2.9.2) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.8/dist-packages (from markdown>=2.6.8->tensorboard=2.9->tensorflow==2.9.2) (4.12.0)
Requirement already satisfied: MarkupSafe>=2.1.1 in /usr/local/lib/python3.8/dist-packages (from werkzeug>=1.0.1->tensorboard=2.9->tensorflow==2.9.2) (2.1.1)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard=2.9->tensorflow==2.9.2) (3.8.1)
Requirement already satisfied: pyasn1=0.4.6 in /usr/local/lib/python3.8/dist-packages (from pyasn1-modules>=0.2.1->google-auth=1.6.3->tensorboard=2.9->tensorflow==2.9.2) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib=0.4.1->tensorboard=2.9->tensorflow==2.9.2) (3.2.1)
Building wheels for collected packages: horovod
  Building wheel for horovod (setup.py): started
  Building wheel for horovod (setup.py): still running...
  Building wheel for horovod (setup.py): finished with status 'done'
  Created wheel for horovod: filename=horovod-0.26.1-cp38-cp38-linux_x86_64.whl size=35398761 sha256=c6ef2c7dd574e0c2fdb4ecfc387c025d8e5014c38dd09c94747ae60344ef4dc0
  Stored in directory: /tmp/pip-ephem-wheel-cache-5axuln_n/wheels/eb/0b/90/d53058f75f3ae3db9557f3e55dd8c016b2397e9b38557c8b66
Successfully built horovod
Installing collected packages: keras, tensorflow-estimator, horovod, tensorflow
  WARNING: The script horovodrun is installed in '/var/jenkins_home/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  Attempting uninstall: tensorflow
    Found existing installation: tensorflow 2.6.2
    Uninstalling tensorflow-2.6.2:
      Successfully uninstalled tensorflow-2.6.2
  WARNING: The scripts estimator_ckpt_converter, import_pb_to_tensorboard, saved_model_cli, tensorboard, tf_upgrade_v2, tflite_convert, toco and toco_from_protos are installed in '/var/jenkins_home/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed horovod-0.26.1 keras-2.9.0 tensorflow-2.9.2 tensorflow-estimator-2.9.0

[notice] A new release of pip available: 22.1.2 -> 22.3
[notice] To update, run: pip install --upgrade pip
./ci/test_unit.sh: line 24: horovodrun: command not found
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins8948750554453568628.sh
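The failure at the end of this run is environmental rather than a Horovod build problem: pip installed the `horovodrun` console script into `/var/jenkins_home/.local/bin`, which is not on the runner's `PATH`, so line 24 of `ci/test_unit.sh` cannot find it. Exporting `PATH=$HOME/.local/bin:$PATH` in the CI shell script would be the simplest fix; the sketch below shows a PATH-tolerant launcher written in Python, where the script name, process count, and test target are illustrative assumptions rather than the actual contents of `ci/test_unit.sh`:

```python
# run_horovod_tests.py -- hedged sketch of a PATH-safe horovodrun launcher;
# the pytest target below is a placeholder, not the real ci/test_unit.sh command.
import os
import shutil
import site
import subprocess
import sys

# pip --user installs console scripts under <userbase>/bin
# (here: /var/jenkins_home/.local/bin), so put that on PATH first.
user_bin = os.path.join(site.getuserbase(), "bin")
os.environ["PATH"] = user_bin + os.pathsep + os.environ.get("PATH", "")

horovodrun = shutil.which("horovodrun")
if horovodrun is None:
    sys.exit(f"horovodrun not found even after adding {user_bin} to PATH")

# Launch a (hypothetical) distributed test module on two local processes.
subprocess.run(
    [horovodrun, "-np", "2", "-H", "localhost:2",
     sys.executable, "-m", "pytest", "tests/unit/tf/horovod"],
    check=True,
)
```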

nvidia-merlin-bot avatar Oct 27 '22 04:10 nvidia-merlin-bot
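For context on what this CI change is setting up: `horovod[tensorflow]` 0.26.1 is installed so the unit tests can be launched under `horovodrun`, and each worker process then typically follows the standard `horovod.tensorflow.keras` pattern of initializing Horovod, pinning one GPU per local rank, wrapping the optimizer, and broadcasting the initial state from rank 0. The sketch below shows that generic pattern with a toy model; it is illustrative only and is not taken from this PR:

```python
# horovod_keras_sketch.py -- standard Horovod + tf.keras pattern (illustrative;
# the toy model and random data are placeholders, not part of merlin-models).
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker to a single GPU (no-op on CPU-only runners).
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are averaged across processes.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(loss="mse", optimizer=opt)

callbacks = [
    # Ensure every worker starts from the same weights as rank 0.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# Keep the console quiet on all ranks except 0.
model.fit(x, y, batch_size=32, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Launched with, for example, `horovodrun -np 2 python horovod_keras_sketch.py`, two processes each run this script while Horovod averages the gradients between them.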

Click to view CI Results
GitHub pull request #825 of commit 0adc84b6c04ae8f309090e0af806e1862c171738, no merge conflicts.
Running as SYSTEM
Setting status of 0adc84b6c04ae8f309090e0af806e1862c171738 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1602/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 0adc84b6c04ae8f309090e0af806e1862c171738^{commit} # timeout=10
Checking out Revision 0adc84b6c04ae8f309090e0af806e1862c171738 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 0adc84b6c04ae8f309090e0af806e1862c171738 # timeout=10
Commit message: "try horovod in ci again"
 > git rev-list --no-walk dc2aa26dca453669fb0294911c421534ec67a235 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins18387498173458328018.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version =2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version =2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version =1.4.0; python_version jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-4.0.0
collected 167 items / 50 errors

==================================== ERRORS ====================================
Traceback (shown once here; it repeats identically for every test module listed below, differing only in the first frame or two where each test module performs its own merlin.models.tf import):
/usr/lib/python3.8/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/unit/tf/test_core.py:4: in <module>
    import merlin.models.tf as ml
merlin/models/tf/__init__.py:34: in <module>
    from merlin.models.tf.blocks.dlrm import DLRMBlock
merlin/models/tf/blocks/dlrm.py:19: in <module>
    from merlin.models.tf.blocks.interaction import DotProductInteraction
merlin/models/tf/blocks/interaction.py:26: in <module>
    from merlin.models.tf.inputs.base import InputBlockV2
merlin/models/tf/inputs/base.py:26: in <module>
    from merlin.models.tf.inputs.embedding import (
merlin/models/tf/inputs/embedding.py:39: in <module>
    from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse
merlin/models/tf/transforms/tensor.py:19: in <module>
    from keras.layers.preprocessing import preprocessing_utils as utils
E   ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/__init__.py)

pytest reports the same "ERROR collecting <module> ... ImportError while importing test module ... Hint: make sure your test modules/packages have valid Python names." for each of:
tests/unit/tf/test_core.py, tests/unit/tf/test_loader.py, tests/unit/tf/test_public_api.py, tests/unit/tf/blocks/test_cross.py, tests/unit/tf/blocks/test_dlrm.py, tests/unit/tf/blocks/test_interactions.py, tests/unit/tf/blocks/test_mlp.py, tests/unit/tf/blocks/test_optimizer.py
_________ ERROR collecting tests/unit/tf/blocks/retrieval/test_base.py _________
ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/blocks/retrieval/test_base.py'. Hint: make sure your test modules/packages have valid Python names.
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/blocks/retrieval/test_base.py:18: in from merlin.models.tf.blocks.retrieval.base import ItemRetrievalScorer merlin/models/tf/blocks/retrieval/base.py:27: in from merlin.models.tf.models.base import ModelBlock merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _ ERROR collecting tests/unit/tf/blocks/retrieval/test_matrix_factorization.py _ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/blocks/retrieval/test_matrix_factorization.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/blocks/retrieval/test_matrix_factorization.py:20: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______ ERROR collecting tests/unit/tf/blocks/retrieval/test_two_tower.py _______ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/blocks/retrieval/test_two_tower.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/blocks/retrieval/test_two_tower.py:22: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ___________ ERROR collecting tests/unit/tf/core/test_aggregation.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_aggregation.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_aggregation.py:21: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _______________ ERROR collecting tests/unit/tf/core/test_base.py _______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_base.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_base.py:19: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ___________ ERROR collecting tests/unit/tf/core/test_combinators.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_combinators.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_combinators.py:6: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _____________ ERROR collecting tests/unit/tf/core/test_encoder.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_encoder.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_encoder.py:6: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______________ ERROR collecting tests/unit/tf/core/test_index.py _______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_index.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_index.py:19: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ____________ ERROR collecting tests/unit/tf/core/test_prediction.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_prediction.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_prediction.py:7: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _____________ ERROR collecting tests/unit/tf/core/test_tabular.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/core/test_tabular.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/core/test_tabular.py:1: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ____________ ERROR collecting tests/unit/tf/horovod/test_horovod.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/horovod/test_horovod.py:4: in from tensorflow.keras.utils import set_random_seed E ImportError: cannot import name 'set_random_seed' from 'tensorflow.keras.utils' (/usr/local/lib/python3.8/dist-packages/keras/api/_v2/keras/utils/init.py) ___________ ERROR collecting tests/unit/tf/inputs/test_continuous.py ___________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/inputs/test_continuous.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/inputs/test_continuous.py:19: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ___________ ERROR collecting tests/unit/tf/inputs/test_embedding.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/inputs/test_embedding.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/inputs/test_embedding.py:23: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ____________ ERROR collecting tests/unit/tf/inputs/test_tabular.py _____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/inputs/test_tabular.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/inputs/test_tabular.py:21: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _____________ ERROR collecting tests/unit/tf/layers/test_queue.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/layers/test_queue.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/layers/test_queue.py:19: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _____________ ERROR collecting tests/unit/tf/losses/test_losses.py _____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/losses/test_losses.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/losses/test_losses.py:20: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______ ERROR collecting tests/unit/tf/metrics/test_metrics_popularity.py _______ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/metrics/test_metrics_popularity.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/metrics/test_metrics_popularity.py:20: in from merlin.models.tf.metrics.evaluation import ItemCoverageAt, NoveltyAt, PopularityBiasAt merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _________ ERROR collecting tests/unit/tf/metrics/test_metrics_topk.py __________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/metrics/test_metrics_topk.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/metrics/test_metrics_topk.py:23: in from merlin.models.tf.metrics.topk import ( merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______________ ERROR collecting tests/unit/tf/models/test_base.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_base.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/models/test_base.py:24: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ___________ ERROR collecting tests/unit/tf/models/test_benchmark.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_benchmark.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/models/test_benchmark.py:19: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ____________ ERROR collecting tests/unit/tf/models/test_ranking.py _____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_ranking.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/models/test_ranking.py:22: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ___________ ERROR collecting tests/unit/tf/models/test_retrieval.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/models/test_retrieval.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/models/test_retrieval.py:6: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) _____________ ERROR collecting tests/unit/tf/outputs/test_base.py ______________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/outputs/test_base.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/outputs/test_base.py:19: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ________ ERROR collecting tests/unit/tf/outputs/test_classification.py _________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/outputs/test_classification.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/outputs/test_classification.py:19: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) __________ ERROR collecting tests/unit/tf/outputs/test_contrastive.py __________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/outputs/test_contrastive.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/outputs/test_contrastive.py:19: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) __________ ERROR collecting tests/unit/tf/outputs/test_regression.py ___________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/outputs/test_regression.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/outputs/test_regression.py:18: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ___________ ERROR collecting tests/unit/tf/outputs/test_sampling.py ____________ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/outputs/test_sampling.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/outputs/test_sampling.py:20: in import merlin.models.tf as mm merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ____ ERROR collecting tests/unit/tf/prediction_tasks/test_classification.py ____ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/prediction_tasks/test_classification.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/prediction_tasks/test_classification.py:18: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in from merlin.models.tf.transforms.tensor import ListToDense, ListToSparse merlin/models/tf/transforms/tensor.py:19: in from keras.layers.preprocessing import preprocessing_utils as utils E ImportError: cannot import name 'preprocessing_utils' from 'keras.layers.preprocessing' (/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/init.py) ______ ERROR collecting tests/unit/tf/prediction_tasks/test_multi_task.py ______ ImportError while importing test module '/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/prediction_tasks/test_multi_task.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.8/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/unit/tf/prediction_tasks/test_multi_task.py:6: in import merlin.models.tf as ml merlin/models/tf/init.py:34: in from merlin.models.tf.blocks.dlrm import DLRMBlock merlin/models/tf/blocks/dlrm.py:19: in from merlin.models.tf.blocks.interaction import DotProductInteraction merlin/models/tf/blocks/interaction.py:26: in from merlin.models.tf.inputs.base import InputBlockV2 merlin/models/tf/inputs/base.py:26: in from merlin.models.tf.inputs.embedding import ( merlin/models/tf/inputs/embedding.py:39: in

nvidia-merlin-bot avatar Oct 27 '22 04:10 nvidia-merlin-bot
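With one exception (noted below), every collection error in the log above is the same root failure: `merlin/models/tf/transforms/tensor.py` imports `preprocessing_utils` from `keras.layers.preprocessing`, and the Keras build installed in this CI image does not expose that module, so every test that imports `merlin.models.tf` fails at collection time. A minimal sketch of a guard around that import, assuming the goal is only to fail with a clearer message when the TensorFlow/Keras pin is wrong (the import path comes from the traceback; the error wording below is illustrative, not the project's actual text):

```python
# Sketch of a guarded import for merlin/models/tf/transforms/tensor.py.
try:
    from keras.layers.preprocessing import preprocessing_utils as utils  # noqa: F401
except ImportError as exc:
    raise ImportError(
        "merlin.models.tf requires a TensorFlow/Keras build that provides "
        "keras.layers.preprocessing.preprocessing_utils; check the pinned "
        "tensorflow version in this environment."
    ) from exc
```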

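The one different failure is `tests/unit/tf/horovod/test_horovod.py`, which cannot import `set_random_seed` from `tensorflow.keras.utils`; that helper only exists in newer TensorFlow releases than the one in this image. A small fallback the test could use instead, shown purely as a sketch under that assumption:

```python
import random

import numpy as np
import tensorflow as tf

try:
    # Present in newer TensorFlow/Keras releases.
    from tensorflow.keras.utils import set_random_seed
except ImportError:
    def set_random_seed(seed: int) -> None:
        # Seed the Python, NumPy and TensorFlow generators, which is what the
        # upstream helper does in the releases that ship it.
        random.seed(seed)
        np.random.seed(seed)
        tf.random.set_seed(seed)
```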
Click to view CI Results
GitHub pull request #825 of commit c21e69de57116371fe6161dcd1e4eaaa6892e32d, no merge conflicts.
Running as SYSTEM
Setting status of c21e69de57116371fe6161dcd1e4eaaa6892e32d to PENDING with url https://10.20.13.93:8080/job/merlin_models/1614/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse c21e69de57116371fe6161dcd1e4eaaa6892e32d^{commit} # timeout=10
Checking out Revision c21e69de57116371fe6161dcd1e4eaaa6892e32d (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c21e69de57116371fe6161dcd1e4eaaa6892e32d # timeout=10
Commit message: "adjust learning rate for multi optimizer"
 > git rev-list --no-walk b046189928ec1e548e972f0522d7b7752aea5106 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins55764075818133344.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 784 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 17%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 23%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 28%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py F [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py F [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/horovod/test_horovod.py .. [ 30%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 50%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... 
[ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 83%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
__________________ test_example_06_defining_own_architecture ___________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f7a95fd6fa0>
tmpdir = local('/tmp/pytest-of-jenkins/pytest-5/test_example_06_defining_own_a0')

@testbook(
    REPO_ROOT / "examples/06-Define-your-own-architecture-with-Merlin-Models.ipynb", execute=False
)
def test_example_06_defining_own_architecture(tb, tmpdir):
    tb.inject(
        f"""
        import os
        from unittest.mock import patch
        from merlin.datasets.synthetic import generate_data
        os.environ["DATA_FOLDER"] = "{tmpdir}"
        mock_train, mock_valid = generate_data(
            input="movielens-1m",
            num_rows=1000,
            set_sizes=(0.8, 0.2)
        )
        p1 = patch(
            "merlin.datasets.entertainment.get_movielens",
            return_value=[mock_train, mock_valid]
        )
        p1.start()
        """
    )
  tb.execute()

tests/unit/tf/examples/test_06_advanced_own_architecture.py:30:


/usr/local/lib/python3.8/dist-packages/testbook/client.py:147: in execute
    super().execute_cell(cell, index)
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:85: in wrapped
    return just_run(coro(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:60: in just_run
    return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
    return future.result()
/usr/local/lib/python3.8/dist-packages/nbclient/client.py:1025: in async_execute_cell
    await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f7a95fd6fa0>
cell = {'cell_type': 'code', 'execution_count': 6, 'id': '932d878e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...y']}], 'source': 'batch = mm.sample_batch(valid, batch_size=5, shuffle=False, include_targets=False)\nbatch["userId"]'}
cell_index = 22
exec_reply = {'buffers': [], 'content': {'ename': 'MemoryError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'b473c2e3-4825-48e...e, 'engine': 'b473c2e3-4825-48ec-92ff-41fc941da4b5', 'started': '2022-10-27T19:31:00.297836Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(
        self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
    )
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E   nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E   ------------------
E   batch = mm.sample_batch(valid, batch_size=5, shuffle=False, include_targets=False)
E   batch["userId"]
E   ------------------
E   
E   ---------------------------------------------------------------------------
E   MemoryError                              Traceback (most recent call last)
E   Cell In [6], line 1
E   ----> 1 batch = mm.sample_batch(valid, batch_size=5, shuffle=False, include_targets=False)
E         2 batch["userId"]
E   
E   File ~/workspace/merlin_models/models/merlin/models/tf/loader.py:614, in sample_batch(data, batch_size, shuffle, include_targets, to_ragged, to_dense)
E       611 from merlin.models.tf.transforms.tensor import ListToDense, ListToRagged
E       613 if not isinstance(data, Loader):
E   --> 614     data = Loader(data, batch_size=batch_size, shuffle=shuffle)
E       616 batch = next(iter(data))
E       617 # batch could be of type Prediction, so we can't unpack directly
E   
E   File ~/workspace/merlin_models/models/merlin/models/tf/loader.py:303, in Loader.__init__(self, paths_or_dataset, batch_size, transform, label_names, feature_columns, cat_names, cont_names, engine, shuffle, seed_fn, buffer_size, device, parts_per_chunk, reader_kwargs, global_size, global_rank, drop_last, sparse_names, sparse_max, multi_label_as_dict, sparse_as_dense, schema)
E       301 global_rank = global_rank or hvd.rank()
E       302 seed_fn = seed_fn or get_default_hvd_seed_fn()
E   --> 303 DataLoader.__init__(
E       304     self,
E       305     dataset,
E       306     batch_size,
E       307     shuffle,
E       308     cat_names=cat_names,
E       309     cont_names=cont_names,
E       310     label_names=label_names,
E       311     seed_fn=seed_fn,
E       312     parts_per_chunk=parts_per_chunk,
E       313     device=device,
E       314     global_size=global_size,
E       315     global_rank=global_rank,
E       316     drop_last=drop_last,
E       317     sparse_names=sparse_names,
E       318     sparse_max=sparse_max,
E       319     sparse_as_dense=sparse_as_dense,
E       320 )
E       321 self._transforms = [("all", transform)] if transform else []
E       322 self.multi_label_as_dict = multi_label_as_dict
E   
E   File ~/workspace/merlin_models/models/merlin/models/loader/backend.py:215, in DataLoader.__init__(self, dataset, batch_size, shuffle, cat_names, cont_names, label_names, seed_fn, parts_per_chunk, device, global_size, global_rank, drop_last, sparse_names, sparse_max, sparse_as_dense)
E       213 self.schema = _get_dataset_schema(dataset)
E       214 # self.data is ddf format
E   --> 215 self.indices = cp.arange(self.data.npartitions)
E       216 self.drop_last = drop_last
E       217 self.device = (device or 0) if HAS_GPU else "cpu"
E   
E   File /usr/local/lib/python3.8/dist-packages/cupy/_creation/ranges.py:55, in arange(start, stop, step, dtype)
E        52 else:
E        53     return cupy.array([start], dtype=numpy.bool)
E   ---> 55 ret = cupy.empty((size,), dtype=dtype)
E        56 typ = numpy.dtype(dtype).type
E        57 _arange_ufunc(typ(start), typ(step), ret, dtype=dtype)
E   
E   File /usr/local/lib/python3.8/dist-packages/cupy/_creation/basic.py:22, in empty(shape, dtype, order)
E         7 def empty(shape, dtype=float, order='C'):
E         8     """Returns an array without initializing the elements.
E         9 
E        10     Args:
E      (...)
E        20 
E        21     """
E   ---> 22     return cupy.ndarray(shape, dtype, order=order)
E   
E   File cupy/_core/core.pyx:171, in cupy._core.core.ndarray.__init__()
E   
E   File cupy/cuda/memory.pyx:698, in cupy.cuda.memory.alloc()
E   
E   File /usr/local/lib/python3.8/dist-packages/rmm/rmm.py:226, in rmm_cupy_allocator(nbytes)
E       223     raise ModuleNotFoundError("No module named 'cupy'")
E       225 stream = Stream(obj=cupy.cuda.get_current_stream())
E   --> 226 buf = librmm.device_buffer.DeviceBuffer(size=nbytes, stream=stream)
E       227 dev_id = -1 if buf.ptr else cupy.cuda.device.get_device_id()
E       228 mem = cupy.cuda.UnownedMemory(
E       229     ptr=buf.ptr, size=buf.size, owner=buf, device_id=dev_id
E       230 )
E File [0;32mdevice_buffer.pyx:88[0m, in [0;36mrmm._lib.device_buffer.DeviceBuffer.__cinit__[0;34m()[0m E
E [0;31mMemoryError[0m: std::bad_alloc: out_of_memory: CUDA error at: /opt/rapids/rmm/include/rmm/mr/device/cuda_memory_resource.hpp:70: cudaErrorMemoryAllocation out of memory E MemoryError: std::bad_alloc: out_of_memory: CUDA error at: /opt/rapids/rmm/include/rmm/mr/device/cuda_memory_resource.hpp:70: cudaErrorMemoryAllocation out of memory

/usr/local/lib/python3.8/dist-packages/nbclient/client.py:919: CellExecutionError
----------------------------- Captured stderr call -----------------------------
2022-10-27 19:30:54.056182: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-10-27 19:30:55.723620: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 0
2022-10-27 19:30:55.723725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1627 MB memory: -> device: 0, name: Tesla P100-DGXS-16GB, pci bus id: 0000:07:00.0, compute capability: 6.0
2022-10-27 19:30:55.724469: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 1
2022-10-27 19:30:55.724533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 1480 MB memory: -> device: 1, name: Tesla P100-DGXS-16GB, pci bus id: 0000:08:00.0, compute capability: 6.0
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.8/logging/__init__.py", line 2127, in shutdown
    h.close()
  File "/usr/local/lib/python3.8/dist-packages/absl/logging/__init__.py", line 934, in close
    self.stream.close()
  File "/usr/local/lib/python3.8/dist-packages/ipykernel/iostream.py", line 438, in close
    self.watch_fd_thread.join()
AttributeError: 'OutStream' object has no attribute 'watch_fd_thread'
_________________ test_usecase_accelerate_training_by_lazyadam _________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f77f0a4f550>

@testbook(
    REPO_ROOT / p,
    timeout=180,
    execute=False,
)
def test_usecase_accelerate_training_by_lazyadam(tb):
    tb.inject(
        """
        import os
        os.environ["NUM_ROWS"] = "1000"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py:22:


/usr/local/lib/python3.8/dist-packages/testbook/client.py:147: in execute
    super().execute_cell(cell, index)
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:85: in wrapped
    return just_run(coro(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:60: in just_run
    return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
    return future.result()
/usr/local/lib/python3.8/dist-packages/nbclient/client.py:1025: in async_execute_cell
    await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f77f0a4f550> cell = {'cell_type': 'code', 'execution_count': 7, 'id': '0500ad25-29e0-40c8-85bc-6e3864107c6a', 'metadata': {'execution': {'...e_train_function_4725]']}], 'source': 'model1.compile(optimizer="adam")\nmodel1.fit(train, batch_size=1024, epochs=1)'} cell_index = 12 exec_reply = {'buffers': [], 'content': {'ename': 'ResourceExhaustedError', 'engine_info': {'engine_id': -1, 'engine_uuid': '41f4bb...e, 'engine': '41f4bb4a-2980-44f4-8223-cbcfbdce978a', 'started': '2022-10-27T19:31:30.600445Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(
        self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
    )
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E model1.compile(optimizer="adam")
E model1.fit(train, batch_size=1024, epochs=1)
E ------------------
E
E ---------------------------------------------------------------------------
E ResourceExhaustedError                    Traceback (most recent call last)
E Cell In [7], line 2
E       1 model1.compile(optimizer="adam")
E ----> 2 model1.fit(train, batch_size=1024, epochs=1)
E
E File ~/workspace/merlin_models/models/merlin/models/tf/models/base.py:874, in BaseModel.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, train_metrics_steps, pre, **kwargs)
E     871 self._reset_compile_cache()
E     872 self.train_pre = pre
E --> 874 out = super().fit(**fit_kwargs)
E     876 if pre:
E     877     del self.train_pre
E
E File /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
E      65 except Exception as e:  # pylint: disable=broad-except
E      66     filtered_tb = _process_traceback_frames(e.__traceback__)
E ---> 67     raise e.with_traceback(filtered_tb) from None
E      68 finally:
E      69     del filtered_tb
E
E File /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
E      52 try:
E      53     ctx.ensure_initialized()
E ---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
E      55                                         inputs, attrs, num_outputs)
E      56 except core._NotOkStatusException as e:
E      57     if name is not None:
E
E ResourceExhaustedError: Graph execution error:
E
E Detected at node 'DistributedAdam/DistributedAdam/update_17/mul_1' defined at (most recent call last):
E   File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E     return _run_code(code, main_globals, None,
E   File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E     exec(code, run_globals)
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in <module>
E     app.launch_new_instance()
E   File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance
E     app.start()
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start
E     self.io_loop.start()
E   File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start
E     self.asyncio_loop.run_forever()
E   File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
E     self._run_once()
E   File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
E     handle._run()
E   File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
E     self._context.run(self._callback, *self._args)
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
E     await self.process_one()
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one
E     await dispatch(*args)
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
E     await result
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request
E     reply_content = await reply_content
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute
E     res = shell.run_cell(
E   File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell
E     return super().run_cell(*args, **kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell
E     result = self._run_cell(
E   File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
E     return runner(coro)
E   File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
E     coro.send(None)
E   File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async
E     has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
E   File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes
E     if await self.run_code(code, result, async_=asy):
E   File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code
E     exec(code_obj, self.user_global_ns, self.user_ns)
E   File "/tmp/ipykernel_21558/3741080137.py", line 2, in <module>
E     model1.fit(train, batch_size=1024, epochs=1)
E   File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 874, in fit
E     out = super().fit(**fit_kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
E     return fn(*args, **kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
E     tmp_logs = self.train_function(iterator)
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
E     return step_function(self, iterator)
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
E     outputs = model.distribute_strategy.run(run_step, args=(data,))
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
E     outputs = model.train_step(data)
E   File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 691, in train_step
E     self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
E     return self.apply_gradients(grads_and_vars, name=name)
E   File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 205, in apply_gradients
E     results = super(self.__class__, self).apply_gradients(*args, **kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients
E     return tf.__internal__.distribute.interim.maybe_merge_call(
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply
E     update_op = distribution.extended.update(
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var
E     return self._resource_apply_sparse_duplicate_indices(
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices
E     return self._resource_apply_sparse(summed_grad, handle, unique_indices,
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 206, in _resource_apply_sparse
E     m_t = tf.compat.v1.assign(m, m * coefficients['beta_1_t'],
E Node: 'DistributedAdam/DistributedAdam/update_17/mul_1'
E 2 root error(s) found.
E (0) RESOURCE_EXHAUSTED: failed to allocate memory E [[{{node DistributedAdam/DistributedAdam/update_17/mul_1}}]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E [[StatefulPartitionedCall/cond/pivot_t/_518/_53]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E (1) RESOURCE_EXHAUSTED: failed to allocate memory E [[{{node DistributedAdam/DistributedAdam/update_17/mul_1}}]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E 0 successful operations.
E 0 derived errors ignored. [Op:__inference_train_function_4725]

/usr/local/lib/python3.8/dist-packages/nbclient/client.py:919: CellExecutionError ----------------------------- Captured stderr call ----------------------------- 2022-10-27 19:31:25.832293: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-10-27 19:31:27.977368: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 0 2022-10-27 19:31:27.977475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1627 MB memory: -> device: 0, name: Tesla P100-DGXS-16GB, pci bus id: 0000:07:00.0, compute capability: 6.0 2022-10-27 19:31:27.978159: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 1 2022-10-27 19:31:27.978215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13875 MB memory: -> device: 1, name: Tesla P100-DGXS-16GB, pci bus id: 0000:08:00.0, compute capability: 6.0 2022-10-27 19:31:42.750219: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY) Reported by CUDA: Free memory/Total memory: 876150784/17069309952 2022-10-27 19:31:42.750284: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152 InUse: 4616331784 MaxInUse: 4616331784 NumAllocs: 257 MaxAllocSize: 1083564064 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0

2022-10-27 19:31:42.750302: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...; 2022-10-27 19:31:42.750310: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4 2022-10-27 19:31:42.750317: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 36 2022-10-27 19:31:42.750323: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8 2022-10-27 19:31:42.750329: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2 2022-10-27 19:31:42.750335: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 9 2022-10-27 19:31:42.750341: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 10 2022-10-27 19:31:42.750347: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 5 2022-10-27 19:31:42.750354: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7 2022-10-27 19:31:42.750360: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5 2022-10-27 19:31:42.750366: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 5 2022-10-27 19:31:42.750372: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1 2022-10-27 19:31:42.750378: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 5 2022-10-27 19:31:42.750384: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 61440, 3 2022-10-27 19:31:42.750390: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 65536, 3 2022-10-27 19:31:42.750396: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 5 2022-10-27 19:31:42.750402: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 5 2022-10-27 19:31:42.750409: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 5 2022-10-27 19:31:42.750414: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5 2022-10-27 19:31:42.750420: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3 2022-10-27 19:31:42.750451: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 5 2022-10-27 19:31:42.750459: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 5 2022-10-27 19:31:42.750465: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 5 2022-10-27 19:31:42.750471: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 5 2022-10-27 19:31:42.750477: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 3 2022-10-27 19:31:42.750502: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory 2022-10-27 19:31:42.784911: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY) Reported by CUDA: Free memory/Total memory: 876150784/17069309952 2022-10-27 19:31:42.784977: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152 InUse: 4616331912 MaxInUse: 4616331912 NumAllocs: 258 MaxAllocSize: 1083564064 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0

2022-10-27 19:31:42.785008: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...; 2022-10-27 19:31:42.785022: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4 2022-10-27 19:31:42.785033: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 36 2022-10-27 19:31:42.785045: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8 2022-10-27 19:31:42.785056: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2 2022-10-27 19:31:42.785067: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 10 2022-10-27 19:31:42.785078: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 10 2022-10-27 19:31:42.785090: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 5 2022-10-27 19:31:42.785101: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7 2022-10-27 19:31:42.785112: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5 2022-10-27 19:31:42.785123: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 5 2022-10-27 19:31:42.785135: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1 2022-10-27 19:31:42.785146: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 5 2022-10-27 19:31:42.785157: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 61440, 3 2022-10-27 19:31:42.785168: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 65536, 3 2022-10-27 19:31:42.785180: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 5 2022-10-27 19:31:42.785191: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 5 2022-10-27 19:31:42.785202: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 5 2022-10-27 19:31:42.785213: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5 2022-10-27 19:31:42.785224: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3 2022-10-27 19:31:42.785236: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 5 2022-10-27 19:31:42.785247: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 5 2022-10-27 19:31:42.785258: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 5 2022-10-27 19:31:42.785269: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 5 2022-10-27 19:31:42.785310: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 3 2022-10-27 19:31:42.785333: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/lib/python3.8/logging/init.py", line 2127, in shutdown h.close() File "/usr/local/lib/python3.8/dist-packages/absl/logging/init.py", line 934, in close self.stream.close() File "/usr/local/lib/python3.8/dist-packages/ipykernel/iostream.py", line 438, in close self.watch_fd_thread.join() AttributeError: 'OutStream' object has no attribute 'watch_fd_thread' ___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f77104b2eb0> run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:874: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:691: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f77a292a8b0> ctx = <tensorflow.python.eager.context.Context object at 0x7f79e43b9190> args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-1.2753616e-04, -1.8883002e-05, -2.6534326e-04, ..., ... 6, 4, 15, 3], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f77426bc040> run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:874: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filefuekikvo.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:691: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f77a292a8b0> ctx = <tensorflow.python.eager.context.Context object at 0x7f79e43b9190> args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: in user code: E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function * E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function ** E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica E return self._call_for_each_replica(fn, args, kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step ** E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 691, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize E grads_and_vars = self._compute_gradients( E File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/init.py", line 115, in _compute_gradients E allreduced_grads = self._allreduce(grads, weights) E File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/init.py", line 185, in _allreduce E return __filtered_reduce_grads(grads, vars) E File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/init.py", line 161, in __filtered_reduce_grads E rg = self._allreduce_grads(rg, rv) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in call E result = self._call(*args, **kwds) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call E results = self._stateful_fn(*args, **kwds) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in call E return graph_function._call_flat( E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat E flat_outputs = forward_function.call(ctx, args_with_tangents) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call E raise ValueError( E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/init.py:89 /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file_fmnkrxk.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 4 failed, 768 passed, 12 skipped, 1247 warnings in 1699.86s (0:28:19) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins5630495270658864023.sh

nvidia-merlin-bot avatar Oct 27 '22 19:10 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 40f868e85034406ae9862cd334bce0f87107138f, no merge conflicts.
Running as SYSTEM
Setting status of 40f868e85034406ae9862cd334bce0f87107138f to PENDING with url https://10.20.13.93:8080/job/merlin_models/1617/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 40f868e85034406ae9862cd334bce0f87107138f^{commit} # timeout=10
Checking out Revision 40f868e85034406ae9862cd334bce0f87107138f (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 40f868e85034406ae9862cd334bce0f87107138f # timeout=10
Commit message: "adjust learning rate for multi optimizer"
 > git rev-list --no-walk 9d5a3e98785ed8e5a31f6c89075ebf23ae14092a # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins9648869288581653433.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 784 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 17%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 23%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 28%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/horovod/test_horovod.py .. [ 30%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 50%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... 
[ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 83%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f8d3016b7f0> run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs) merlin/models/tf/models/base.py:874: in fit out = super().fit(**fit_kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit tmp_logs = self.train_function(iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function return step_function(self, iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper return func(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step outputs = model.train_step(data) merlin/models/tf/models/base.py:691: in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize grads_and_vars = self._compute_gradients( ../../../.local/lib/python3.8/site-packages/horovod/_keras/init.py:115: in _compute_gradients allreduced_grads = self._allreduce(grads, weights) ../../../.local/lib/python3.8/site-packages/horovod/_keras/init.py:185: in _allreduce return __filtered_reduce_grads(grads, vars) ../../../.local/lib/python3.8/site-packages/horovod/_keras/init.py:161: in __filtered_reduce_grads rg = self._allreduce_grads(rg, rv) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call results = self._stateful_fn(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in call return graph_function._call_flat( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f8d89377850> ctx = <tensorflow.python.eager.context.Context object at 0x7f8fca31ebe0> args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-9.77499585e-05, 2.04114112e-04, 1.73186476e-04, ..., ... 5, 14, 20, 11], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
    raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f8d239c8eb0> run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:874: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filete7voj9n.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:691: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f8d89377850>
ctx = <tensorflow.python.eager.context.Context object at 0x7f8fca31ebe0>
args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
      raise ValueError(
          f"Signature specifies {len(list(self.signature.input_arg))} "
          f"arguments, got: {len(args)}.")

E   ValueError: in user code:
E
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E           return step_function(self, iterator)
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E           outputs = model.distribute_strategy.run(run_step, args=(data,))
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E           return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E           return self._call_for_each_replica(fn, args, kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E           return fn(*args, **kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E           outputs = model.train_step(data)
E       File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 691, in train_step
E           self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E       File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E           grads_and_vars = self._compute_gradients(
E       File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E           allreduced_grads = self._allreduce(grads, weights)
E       File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E           return __filtered_reduce_grads(grads, vars)
E       File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E           rg = self._allreduce_grads(rg, rv)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E           return fn(*args, **kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E           result = self._call(*args, **kwds)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E           results = self._stateful_fn(*args, **kwds)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E           return graph_function._call_flat(
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E           flat_outputs = forward_function.call(ctx, args_with_tangents)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call
E           raise ValueError(
E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_filea7o4h9cd.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 770 passed, 12 skipped, 1247 warnings in 1710.41s (0:28:30) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins8253340787450947030.sh

nvidia-merlin-bot avatar Oct 27 '22 21:10 nvidia-merlin-bot
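For context on the failure above: `ValueError: Signature specifies 96 arguments, got: 100.` is TensorFlow's complaint that a concrete `tf.function` was called with a different number of flat tensor inputs than it was traced with, and the traceback shows it surfacing inside Horovod's wrapped `_allreduce_grads`. The sketch below is a minimal, hypothetical reproduction of that class of mismatch with plain TensorFlow only; the name `allreduce_like`, the gradient counts, and the caught exception types are illustrative assumptions, not code from this PR.

```python
# Minimal sketch (not from the PR): a tf.function concrete signature traced
# for one number of gradient tensors cannot be reused for a different number.
import tensorflow as tf


@tf.function
def allreduce_like(grads):
    # Stand-in for an allreduce: just scales each gradient.
    return [g * 2.0 for g in grads]


three_grads = [tf.ones([2]) for _ in range(3)]
four_grads = [tf.ones([2]) for _ in range(4)]

# Trace a concrete function for a list of three tensors ...
concrete = allreduce_like.get_concrete_function(three_grads)
print(len(concrete.inputs))  # flat input signature sized for three tensors

# ... then feed it four: the call no longer matches the traced signature,
# so TensorFlow rejects it (the CI run hit the same kind of mismatch, 96 vs. 100).
try:
    concrete(four_grads)
except (ValueError, TypeError) as err:
    print(f"{type(err).__name__}: {err}")
```

The exact wording of the rejection differs between the high-level and low-level call paths, but the underlying condition is the same arity check that `_EagerDefinedFunction.call` performs in the trace shown above.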

Click to view CI Results
GitHub pull request #825 of commit 5ae69de321ee349cb235fbd223ca7f1a761aeab5, no merge conflicts.
Running as SYSTEM
Setting status of 5ae69de321ee349cb235fbd223ca7f1a761aeab5 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1620/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 5ae69de321ee349cb235fbd223ca7f1a761aeab5^{commit} # timeout=10
Checking out Revision 5ae69de321ee349cb235fbd223ca7f1a761aeab5 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5ae69de321ee349cb235fbd223ca7f1a761aeab5 # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk 73bbeb192b26db2bce4c08071ecff3663b3eeeca # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins8196774101509735308.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 784 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 9%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 17%] ..................... [ 20%] tests/unit/tf/blocks/retrieval/test_base.py . [ 20%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 21%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%] tests/unit/tf/core/test_aggregation.py ......... [ 23%] tests/unit/tf/core/test_base.py .. [ 24%] tests/unit/tf/core/test_combinators.py s.................... [ 26%] tests/unit/tf/core/test_encoder.py .. [ 27%] tests/unit/tf/core/test_index.py ... [ 27%] tests/unit/tf/core/test_prediction.py .. [ 27%] tests/unit/tf/core/test_tabular.py ...... [ 28%] tests/unit/tf/examples/test_01_getting_started.py . [ 28%] tests/unit/tf/examples/test_02_dataschema.py . [ 28%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 28%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%] [ 29%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%] tests/unit/tf/horovod/test_horovod.py .. [ 30%] tests/unit/tf/inputs/test_continuous.py ....... [ 30%] tests/unit/tf/inputs/test_embedding.py ................................. [ 35%] ........ [ 36%] tests/unit/tf/inputs/test_tabular.py .................. [ 38%] tests/unit/tf/layers/test_queue.py .............. [ 40%] tests/unit/tf/losses/test_losses.py ....................... [ 43%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%] tests/unit/tf/metrics/test_metrics_topk.py ........................ [ 46%] tests/unit/tf/models/test_base.py s....................... [ 50%] tests/unit/tf/models/test_benchmark.py .. [ 50%] tests/unit/tf/models/test_ranking.py .................................. [ 54%] tests/unit/tf/models/test_retrieval.py ................................ [ 58%] tests/unit/tf/outputs/test_base.py ...... [ 59%] tests/unit/tf/outputs/test_classification.py ...... [ 60%] tests/unit/tf/outputs/test_contrastive.py .............. [ 61%] tests/unit/tf/outputs/test_regression.py .. [ 62%] tests/unit/tf/outputs/test_sampling.py .... [ 62%] tests/unit/tf/outputs/test_topk.py . [ 62%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%] tests/unit/tf/prediction_tasks/test_next_item.py ..... 
[ 65%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%] tests/unit/tf/transformers/test_block.py .................... [ 69%] tests/unit/tf/transformers/test_transforms.py .......... [ 71%] tests/unit/tf/transforms/test_bias.py .. [ 71%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 75%] ....................s...... [ 78%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%] tests/unit/tf/transforms/test_noise.py ..... [ 80%] tests/unit/tf/transforms/test_sequence.py .................... [ 83%] tests/unit/tf/transforms/test_tensor.py ... [ 83%] tests/unit/tf/utils/test_batch.py .... [ 83%] tests/unit/tf/utils/test_dataset.py .. [ 84%] tests/unit/tf/utils/test_tf_utils.py ..... [ 84%] tests/unit/torch/test_dataset.py ......... [ 85%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 86%] tests/unit/torch/block/test_mlp.py . [ 86%] tests/unit/torch/features/test_continuous.py .. [ 86%] tests/unit/torch/features/test_embedding.py .............. [ 88%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 90%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fa7373e4b50>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fa795156df0>
ctx = <tensorflow.python.eager.context.Context object at 0x7fa9d5a55ac0>
args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[ 1.3513258e-05, -9.4084004e-05, 4.0196930e-04, ..., ... 4, 12, 35, 7], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
    if len(args) != len(self.signature.input_arg):
        raise ValueError(
            f"Signature specifies {len(list(self.signature.input_arg))} "
            f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fa73cbe2a90> run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_fileivzlit78.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
../../../.local/lib/python3.8/site-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fa795156df0>
ctx = <tensorflow.python.eager.context.Context object at 0x7fa9d5a55ac0>
args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
    if len(args) != len(self.signature.input_arg):
        raise ValueError(
            f"Signature specifies {len(list(self.signature.input_arg))} "
            f"arguments, got: {len(args)}.")

E   ValueError: in user code:
E
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E           return step_function(self, iterator)
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E           outputs = model.distribute_strategy.run(run_step, args=(data,))
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E           return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E           return self._call_for_each_replica(fn, args, kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E           return fn(*args, **kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E           outputs = model.train_step(data)
E       File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 699, in train_step
E           self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E       File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E           grads_and_vars = self._compute_gradients(
E       File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E           allreduced_grads = self._allreduce(grads, weights)
E       File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E           return __filtered_reduce_grads(grads, vars)
E       File "/var/jenkins_home/.local/lib/python3.8/site-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E           rg = self._allreduce_grads(rg, rv)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E           return fn(*args, **kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E           result = self._call(*args, **kwds)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E           results = self._stateful_fn(*args, **kwds)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E           return graph_function._call_flat(
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E           flat_outputs = forward_function.call(ctx, args_with_tangents)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call
E           raise ValueError(
E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file7_0uvnk1.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:294: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 770 passed, 12 skipped, 1247 warnings in 1745.18s (0:29:05) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins6405806305642127179.sh
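
Both failing parametrizations above die inside Horovod's Keras optimizer wrapper rather than in the test body: `__filtered_reduce_grads` calls the traced `_allreduce_grads` function, and `ValueError: Signature specifies 96 arguments, got: 100` means that traced function is being fed a different number of gradient tensors than it saw when it was first traced (plausibly because the `ParallelBlock` branches in this test change which variables receive gradients between tracings). The snippet below is only a minimal sketch of that failure mode using a plain `tf.function`; `allreduce_like` is a hypothetical stand-in, not Horovod's code, and the exact exception text varies with the TensorFlow version.

```python
import tensorflow as tf


@tf.function
def allreduce_like(grads):
    # Stand-in for a traced all-reduce: the concrete function obtained below
    # is specialized to the exact number of tensors it was traced with.
    return [g + 0.0 for g in grads]


concrete = allreduce_like.get_concrete_function([tf.zeros(3)] * 4)
print(len(concrete([tf.zeros(3)] * 4)))  # 4 tensors -> matches the trace

try:
    concrete([tf.zeros(3)] * 5)  # 5 tensors -> argument/structure mismatch
except (TypeError, ValueError) as err:
    # Exact exception type and wording depend on the TF version; it is the
    # same class of failure as "Signature specifies 96 arguments, got: 100".
    print(type(err).__name__, err)
```

Under that reading, the mismatch is a retracing/variable-set issue in how the Horovod wrapper interacts with this particular model, not a flake in `test_categorical_encoding_in_model` itself.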

nvidia-merlin-bot avatar Oct 27 '22 23:10 nvidia-merlin-bot
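
For context on the `horovod/_keras/__init__.py` frames in the traceback: once the Keras optimizer is wrapped with Horovod's `DistributedOptimizer`, every `model.fit` step routes gradient computation through `_compute_gradients` / `_allreduce` in that module. The sketch below shows the standard wiring assumed here for illustration; it is not the code this PR adds.

```python
# Minimal Horovod + Keras wiring sketch (assumed context, not this PR's code).
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()  # one process per GPU; each process gets a rank

# Pin each worker to its own GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Wrapping the optimizer is what routes every train step through
# horovod/_keras/__init__.py::_compute_gradients / _allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adagrad(learning_rate=0.01))

# model.compile(optimizer=opt, ...)
# model.fit(..., callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)])
```

As the traceback shows, even the single-process CI run goes through this all-reduce path once the wrapper is in place.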

Click to view CI Results
GitHub pull request #825 of commit 2205a838b45b7a0aa58297940126bddcb30bf030, no merge conflicts.
Running as SYSTEM
Setting status of 2205a838b45b7a0aa58297940126bddcb30bf030 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1661/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 2205a838b45b7a0aa58297940126bddcb30bf030^{commit} # timeout=10
Checking out Revision 2205a838b45b7a0aa58297940126bddcb30bf030 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2205a838b45b7a0aa58297940126bddcb30bf030 # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk 9cd56ba8157467d084eac1ff7a49818ccbc52182 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins1829885348623582236.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 786 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 7%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 13%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 17%]
..................... [ 20%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 20%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 20%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%]
tests/unit/tf/core/test_aggregation.py ......... [ 23%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 26%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 27%]
tests/unit/tf/core/test_prediction.py .. [ 27%]
tests/unit/tf/core/test_tabular.py ...... [ 28%]
tests/unit/tf/examples/test_01_getting_started.py . [ 28%]
tests/unit/tf/examples/test_02_dataschema.py . [ 28%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 28%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%]
[ 29%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%]
tests/unit/tf/horovod/test_horovod.py .. [ 30%]
tests/unit/tf/inputs/test_continuous.py ....... [ 30%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
........ [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 46%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ...... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py .............. [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................... [ 69%]
tests/unit/tf/transformers/test_transforms.py .......... [ 71%]
tests/unit/tf/transforms/test_bias.py .. [ 71%]
tests/unit/tf/transforms/test_features.py s..FF......................... [ 75%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 83%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 84%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 86%]
tests/unit/torch/test_public_api.py . [ 86%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 87%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 91%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 92%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fd8cbe3b910> run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fd9399f7eb0> ctx = <tensorflow.python.eager.context.Context object at 0x7fdb7b290b50> args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-1.2260208e-04, -2.6607956e-04, 3.1695786e-04, ..., ... 11, 31, 11, 16], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fd8d1ebaac0> run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filel2ax8gqi.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fd9399f7eb0> ctx = <tensorflow.python.eager.context.Context object at 0x7fdb7b290b50> args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: in user code:
E
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E         return step_function(self, iterator)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E         outputs = model.distribute_strategy.run(run_step, args=(data,))
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E         return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E         return self._call_for_each_replica(fn, args, kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E         return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E         outputs = model.train_step(data)
E     File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 699, in train_step
E         self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E     File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E         grads_and_vars = self._compute_gradients(
E     File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E         allreduced_grads = self._allreduce(grads, weights)
E     File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E         return __filtered_reduce_grads(grads, vars)
E     File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E         rg = self._allreduce_grads(rg, rv)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E         return fn(*args, **kwargs)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E         result = self._call(*args, **kwds)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E         results = self._stateful_fn(*args, **kwds)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E         return graph_function._call_flat(
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E         flat_outputs = forward_function.call(ctx, args_with_tangents)
E     File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call
E         raise ValueError(
E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_fileax6ihuxu.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 772 passed, 12 skipped, 1247 warnings in 1720.09s (0:28:40) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins13231161859939646472.sh

nvidia-merlin-bot avatar Oct 31 '22 18:10 nvidia-merlin-bot
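Both failures above go through the Horovod-wrapped optimizer (`horovod/_keras/__init__.py::_allreduce`): the allreduce `tf.function` appears to have been traced with one list of gradient tensors and later called with a different number of them (96 vs. 100), which produces the signature mismatch. For reference, here is a minimal sketch of the standard `hvd.DistributedOptimizer` wiring for Keras that this kind of test exercises; the toy model, data, and learning-rate scaling below are illustrative assumptions, not the Merlin Models test code.

```python
# Minimal sketch of Horovod + Keras data-parallel training (assumes horovod[tensorflow]
# is installed); the model and data here are toy placeholders, not Merlin Models code.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to one GPU (rank-local), if GPUs are visible.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(16, activation="relu"), tf.keras.layers.Dense(1)]
)

# Wrap the optimizer so gradients are allreduced across workers on every step;
# scale the learning rate by the number of workers, as the Horovod docs suggest.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="mse")

x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# Broadcast the initial variables from rank 0 so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x, y, batch_size=32, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Launching this with `horovodrun -np 2 python train.py` (where `train.py` is a hypothetical script holding the snippet) starts two workers; run as a single process it degenerates to ordinary Keras training.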

Click to view CI Results
GitHub pull request #825 of commit 3ab142e1f8ac661fdddda99d75fcce0d645659ea, no merge conflicts.
Running as SYSTEM
Setting status of 3ab142e1f8ac661fdddda99d75fcce0d645659ea to PENDING with url https://10.20.13.93:8080/job/merlin_models/1668/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 3ab142e1f8ac661fdddda99d75fcce0d645659ea^{commit} # timeout=10
 > git rev-parse origin/3ab142e1f8ac661fdddda99d75fcce0d645659ea^{commit} # timeout=10
 > git rev-parse 3ab142e1f8ac661fdddda99d75fcce0d645659ea^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[merlin_models] $ /bin/bash /tmp/jenkins3375707728801459220.sh

nvidia-merlin-bot avatar Oct 31 '22 20:10 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit dbe9879cb9b240c4713f92226df778bc38f4c1cf, no merge conflicts.
Running as SYSTEM
Setting status of dbe9879cb9b240c4713f92226df778bc38f4c1cf to PENDING with url https://10.20.13.93:8080/job/merlin_models/1669/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse dbe9879cb9b240c4713f92226df778bc38f4c1cf^{commit} # timeout=10
Checking out Revision dbe9879cb9b240c4713f92226df778bc38f4c1cf (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f dbe9879cb9b240c4713f92226df778bc38f4c1cf # timeout=10
Commit message: "lint"
 > git rev-list --no-walk 9cd56ba8157467d084eac1ff7a49818ccbc52182 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins15925249867076253795.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 786 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 7%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 9%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 13%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 17%]
..................... [ 20%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 20%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 20%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 22%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 22%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 22%]
tests/unit/tf/core/test_aggregation.py ......... [ 23%]
tests/unit/tf/core/test_base.py .. [ 24%]
tests/unit/tf/core/test_combinators.py s.................... [ 26%]
tests/unit/tf/core/test_encoder.py .. [ 27%]
tests/unit/tf/core/test_index.py ... [ 27%]
tests/unit/tf/core/test_prediction.py .. [ 27%]
tests/unit/tf/core/test_tabular.py ...... [ 28%]
tests/unit/tf/examples/test_01_getting_started.py . [ 28%]
tests/unit/tf/examples/test_02_dataschema.py . [ 28%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 28%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 29%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 29%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 29%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 29%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 29%]
[ 29%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 29%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 29%]
tests/unit/tf/horovod/test_horovod.py .. [ 30%]
tests/unit/tf/inputs/test_continuous.py ....... [ 30%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 35%]
........ [ 36%]
tests/unit/tf/inputs/test_tabular.py .................. [ 38%]
tests/unit/tf/layers/test_queue.py .............. [ 40%]
tests/unit/tf/losses/test_losses.py ....................... [ 43%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 43%]
tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 46%]
tests/unit/tf/models/test_base.py s....................... [ 50%]
tests/unit/tf/models/test_benchmark.py .. [ 50%]
tests/unit/tf/models/test_ranking.py .................................. [ 54%]
tests/unit/tf/models/test_retrieval.py ................................ [ 58%]
tests/unit/tf/outputs/test_base.py ...... [ 59%]
tests/unit/tf/outputs/test_classification.py ...... [ 60%]
tests/unit/tf/outputs/test_contrastive.py .............. [ 61%]
tests/unit/tf/outputs/test_regression.py .. [ 62%]
tests/unit/tf/outputs/test_sampling.py .... [ 62%]
tests/unit/tf/outputs/test_topk.py . [ 62%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 63%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 65%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 65%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 66%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 66%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 67%]
tests/unit/tf/transformers/test_block.py ..................... [ 69%]
tests/unit/tf/transformers/test_transforms.py .......... [ 71%]
tests/unit/tf/transforms/test_bias.py .. [ 71%]
tests/unit/tf/transforms/test_features.py s..FF......................... [ 75%]
....................s...... [ 78%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 79%]
tests/unit/tf/transforms/test_noise.py ..... [ 80%]
tests/unit/tf/transforms/test_sequence.py .................... [ 83%]
tests/unit/tf/transforms/test_tensor.py ... [ 83%]
tests/unit/tf/utils/test_batch.py .... [ 83%]
tests/unit/tf/utils/test_dataset.py .. [ 84%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 84%]
tests/unit/torch/test_dataset.py ......... [ 86%]
tests/unit/torch/test_public_api.py . [ 86%]
tests/unit/torch/block/test_base.py .... [ 86%]
tests/unit/torch/block/test_mlp.py . [ 86%]
tests/unit/torch/features/test_continuous.py .. [ 87%]
tests/unit/torch/features/test_embedding.py .............. [ 88%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 90%]
tests/unit/torch/model/test_model.py .. [ 91%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 92%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f5cb28c2e50>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f5cf862da90>
ctx = <tensorflow.python.eager.context.Context object at 0x7f5f3afdfb20>
args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-2.32349717e-04, 6.18281541e-04, -5.19154710e-05, ..., ... 12, 11, 6, 99], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f5c9ab29760>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_file_lexxbeo.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f5cf862da90>
ctx = <tensorflow.python.eager.context.Context object at 0x7f5f3afdfb20>
args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: in user code: E
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E       return step_function(self, iterator)
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E       outputs = model.distribute_strategy.run(run_step, args=(data,))
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E       return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E       return self._call_for_each_replica(fn, args, kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E       return fn(*args, **kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E       outputs = model.train_step(data)
E   File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 699, in train_step
E       self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E   File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E       grads_and_vars = self._compute_gradients(
E   File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E       allreduced_grads = self._allreduce(grads, weights)
E   File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E       return __filtered_reduce_grads(grads, vars)
E   File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E       rg = self._allreduce_grads(rg, rv)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E       return fn(*args, **kwargs)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E       result = self._call(*args, **kwds)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E       results = self._stateful_fn(*args, **kwds)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E       return graph_function._call_flat(
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E       flat_outputs = forward_function.call(ctx, args_with_tangents)
E   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in __call__
E       raise ValueError(
E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
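Both `test_categorical_encoding_in_model` failures above are raised inside Horovod's Keras optimizer wrapper: the allreduce step is a traced `tf.function` whose signature expects 96 gradient tensors, while the model hands it 100, so the concrete function refuses the call. The log does not say why the counts diverge. For orientation only, this is roughly what Horovod data-parallel training around a Keras `fit()` looks like; it is a minimal sketch built from Horovod's documented API with a placeholder model and data, not the merlin-models wiring this PR adds.

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to a single GPU (standard Horovod recipe).
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank() % len(gpus)], "GPU")

# Placeholder model and data; the failing test uses an mm.Model on the e-commerce schema.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# Scale the learning rate by the number of workers and wrap the optimizer so every
# gradient computation is followed by an allreduce across workers.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="mse")

model.fit(
    x, y,
    batch_size=16,
    epochs=1,
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```

A script like this is typically launched with `horovodrun -np <num_workers> python train.py`, one process per GPU.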

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
    'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
    'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
    'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
    'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
    'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
    'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 5 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 60 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(
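All of these `Tags` warnings have the same shape: a column carries a compound tag such as `Tags.ITEM_ID`, and the schema library asks for the atomic pair instead. A small sketch of the replacement the warning text itself suggests, using a made-up column definition:

```python
from merlin.schema import ColumnSchema, Schema, Tags

# Deprecated: compound tag on the column.
old_item_id = ColumnSchema("item_id", tags=[Tags.ITEM_ID, Tags.CATEGORICAL])

# Suggested replacement: the atomic tags named in the warning.
new_item_id = ColumnSchema("item_id", tags=[Tags.ITEM, Tags.ID, Tags.CATEGORICAL])

schema = Schema([new_item_id])
```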

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 32 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 4 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(
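The long run of IndexedSlices warnings above is not specific to this PR: gradients of embedding lookups are sparse `IndexedSlices`, and when something downstream converts them to a dense tensor whose shape is not statically known, TensorFlow warns about the potential memory cost. A tiny standalone illustration of where such a gradient comes from; the table size and indices are arbitrary:

```python
import tensorflow as tf

table = tf.Variable(tf.random.normal([1000, 64]))  # stand-in for an embedding table

with tf.GradientTape() as tape:
    rows = tf.nn.embedding_lookup(table, tf.constant([3, 7, 42]))
    loss = tf.reduce_sum(rows)

grad = tape.gradient(loss, table)
print(type(grad))  # tf.IndexedSlices: only the touched rows, not a dense gradient

# Densifying the gradient is the operation the warning refers to; in the log the
# dense shape is unknown at trace time, which is why TF also warns about memory.
dense_grad = tf.convert_to_tensor(grad)
print(dense_grad.shape)  # (1000, 64)
```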

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
    embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))
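The CuPy deprecation above is a pure rename: `cupy.fromDlpack` became `cupy.from_dlpack`. A one-to-one sketch of the replacement for the line quoted from `embedding.py`; the tensor here is a placeholder:

```python
import cupy
import tensorflow as tf
from tensorflow.experimental.dlpack import to_dlpack

embeddings = tf.random.normal([10, 4])  # placeholder for the exported embedding table

# Deprecated spelling, as in merlin/models/tf/inputs/embedding.py:960:
#   embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

# Current spelling:
embeddings_cupy = cupy.from_dlpack(to_dlpack(tf.convert_to_tensor(embeddings)))
```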

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/horovod/test_horovod.py: 3 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
  /tmp/__autograph_generated_filetkyvw70x.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
    ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 26 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
  /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
    return dispatch_target(*args, **kwargs)
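The stochastic-swap-noise tests trigger the `random_binomial` deprecation; the fix named in the warning is a straight rename. A minimal sketch with an arbitrary shape and probability:

```python
import tensorflow as tf

shape, keep_prob = (4, 8), 0.3

# Deprecated: mask = tf.keras.backend.random_binomial(shape, p=keep_prob)
mask = tf.keras.backend.random_bernoulli(shape, p=keep_prob)  # same semantics, new name
```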

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
  /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
    super(SGD, self).__init__(name, **kwargs)
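Likewise, the `lr` warnings from the freeze/unfreeze tests disappear once the optimizer is built with the current keyword; a minimal sketch:

```python
import tensorflow as tf

# Deprecated: tf.keras.optimizers.SGD(lr=0.01)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
```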

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

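The `IndexedSlices` warnings above are TensorFlow's standard notice that a sparse gradient (typically the gradient of `tf.gather` or an embedding lookup) is being converted to a dense tensor whose shape is only known at run time. A minimal, self-contained sketch of the pattern that triggers it (illustrative, not code from this repository):

```python
import tensorflow as tf


@tf.function(
    input_signature=[
        tf.TensorSpec([None, 8], tf.float32),   # dynamic leading dimension
        tf.TensorSpec([None], tf.int32),
    ]
)
def densify_sparse_grad(table, ids):
    with tf.GradientTape() as tape:
        tape.watch(table)
        rows = tf.gather(table, ids)    # gradient w.r.t. `table` is IndexedSlices
        loss = tf.reduce_sum(rows ** 2)
    grad = tape.gradient(loss, table)
    # Converting the IndexedSlices gradient to a dense tensor while its dense
    # shape is not statically known emits the UserWarning seen above.
    return tf.convert_to_tensor(grad)


densify_sparse_grad(tf.random.normal([100, 8]), tf.constant([3, 7, 7, 42]))
```

In the runs above the conversion happens while backpropagating through ragged sequence features, so the warnings are non-fatal but do indicate that sparse embedding gradients are being densified.
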
tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 772 passed, 12 skipped, 1247 warnings in 1721.14s (0:28:41) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins1602081797512619660.sh

nvidia-merlin-bot avatar Oct 31 '22 21:10 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit af45a77d33ffa860c528ed2bbecb925dfcb573ae, no merge conflicts.
Running as SYSTEM
Setting status of af45a77d33ffa860c528ed2bbecb925dfcb573ae to PENDING with url https://10.20.13.93:8080/job/merlin_models/1685/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse af45a77d33ffa860c528ed2bbecb925dfcb573ae^{commit} # timeout=10
Checking out Revision af45a77d33ffa860c528ed2bbecb925dfcb573ae (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f af45a77d33ffa860c528ed2bbecb925dfcb573ae # timeout=10
Commit message: "add horovodrun to tox.ini"
 > git rev-list --no-walk 5e7c901145d4bd775bbff43e45ced6dd27ab9267 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins4205577680640078135.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+36.gaf45a77d.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.26.5,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.28.5,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-models==0.9.0+36.gaf45a77d,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='659968067'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-q084m_b4
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-q084m_b4
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit eda153c663aa864da66927c7a0a9d4e64c073120
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (4.64.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (21.3)
Requirement already satisfied: betterproto=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (7.0.0)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 6.2 MB/s eta 0:00:00
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (2022.5.1)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (1.10.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (0.56.2)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (3.19.5)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+4.geda153c) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+4.geda153c) (1.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (8.1.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (5.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (0.12.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 15.1 MB/s eta 0:00:00
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (6.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.2.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.26.12)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.4)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.4.0)
Collecting setuptools=0.54->merlin-core==0.8.0+4.geda153c) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (0.39.1)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (1.22.4)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+4.geda153c) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+4.geda153c) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+4.geda153c) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+4.geda153c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+4.geda153c-py3-none-any.whl size=118243 sha256=5037d0a4486870afc91f093429e657d2cb31229910900dcf0638f8983e75b9a5
  Stored in directory: /tmp/pip-ephem-wheel-cache-4xmob4gv/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: setuptools, dask, merlin-core
  Attempting uninstall: setuptools
    Found existing installation: setuptools 65.4.1
    Uninstalling setuptools-65.4.1:
      Successfully uninstalled setuptools-65.4.1
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
myst-nb 0.13.2 requires nbconvert=5.6, but you have nbconvert 7.0.0 which is incompatible.
myst-nb 0.13.2 requires sphinx=3.1, but you have sphinx 5.3.0 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+4.geda153c setuptools-59.8.0
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-h3f0drrr
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-h3f0drrr
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit c0636deadefebcf238935267239406bf7cc6803f
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+1.gc0636dea) (1.8.1)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+1.gc0636dea) (0.6.0+1.g5926fcf)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.64.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (21.3)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.5)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.5.0)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (7.0.0)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.2 MB/s eta 0:00:00
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.5.1)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.10.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.56.2)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.19.5)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+1.gc0636dea) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (8.1.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (5.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.12.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 107.7 MB/s eta 0:00:00
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.2.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.26.12)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.4)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.4.0)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+1.gc0636dea-cp38-cp38-linux_x86_64.whl size=266594 sha256=0a52086b3d732328fa3e3b10eb1c766048eac2109948d9fe57477a66a6c4c587
  Stored in directory: /tmp/pip-ephem-wheel-cache-wnzz00i3/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+1.gc0636dea
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 829 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 13%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 17%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s.................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 26%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 27%] tests/unit/tf/examples/test_01_getting_started.py . [ 27%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/horovod/test_horovod.py .. [ 28%] tests/unit/tf/inputs/test_continuous.py ....... [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 40%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s....................... [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 63%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . [ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 66%] tests/unit/tf/prediction_tasks/test_next_item.py ..... 
[ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 68%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 72%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 76%] ....................s...... [ 79%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 80%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 83%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7faf0fef9340>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fb06ef3ea30>
ctx = <tensorflow.python.eager.context.Context object at 0x7fb2adf0cee0>
args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-2.8362469e-05, 5.2537612e-04, 1.2448107e-04, ..., ... 6, 2, 31, 7], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7faf0cff4e20>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:882: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filebvo1mp_3.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:699: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fb06ef3ea30>
ctx = <tensorflow.python.eager.context.Context object at 0x7fb2adf0cee0>
args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E   ValueError: in user code:
E
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E           return step_function(self, iterator)
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E           outputs = model.distribute_strategy.run(run_step, args=(data,))
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E           return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E           return self._call_for_each_replica(fn, args, kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E           return fn(*args, **kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E           outputs = model.train_step(data)
E       File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 699, in train_step
E           self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E       File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E           grads_and_vars = self._compute_gradients(
E       File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E           allreduced_grads = self._allreduce(grads, weights)
E       File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E           return __filtered_reduce_grads(grads, vars)
E       File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E           rg = self._allreduce_grads(rg, rv)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E           return fn(*args, **kwargs)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E           result = self._call(*args, **kwds)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E           results = self._stateful_fn(*args, **kwds)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E           return graph_function._call_flat(
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E           flat_outputs = forward_function.call(ctx, args_with_tangents)
E       File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call
E           raise ValueError(
E
E   ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 26 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file5hjoxsf8.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 ----------- Name Stmts Miss Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 11 94% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 245 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 421 54 87% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 288 30 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 452 31 93% merlin/models/tf/loader.py 266 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% 
merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 727 78 89% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 422 34 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 107 2 98% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% 
merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 122 4 97%

TOTAL 11312 2225 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 815 passed, 12 skipped, 1374 warnings in 2557.80s (0:42:37) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR:   test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins8791704016044381731.sh

nvidia-merlin-bot avatar Nov 01 '22 17:11 nvidia-merlin-bot
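
The failing test above errors out inside Horovod's `DistributedOptimizer` wrapper (`horovod/_keras/__init__.py`), whose allreduce step runs as a traced `tf.function`; the message `Signature specifies 96 arguments, got: 100` suggests that the already-traced allreduce graph is being fed a different number of gradient tensors than it was first traced with. For reference, below is a minimal, self-contained sketch of the standard `horovod.tensorflow.keras` training wiring that this kind of test exercises. The toy Keras model, random data, and hyperparameters are illustrative placeholders, not the PR's actual code.

```python
# Minimal sketch of the usual Horovod + tf.keras setup (illustrative only):
# hvd.DistributedOptimizer wraps the base optimizer and allreduces gradients
# across workers inside a traced tf.function during model.fit().
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to a single GPU, if any are visible.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Placeholder model and data; the real tests train merlin.models models.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(16, activation="relu"), tf.keras.layers.Dense(1)]
)

# Scale the learning rate by the number of workers and wrap the optimizer.
optimizer = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(loss="mse", optimizer=optimizer)

x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model.fit(
    x,
    y,
    batch_size=32,
    epochs=1,
    # Broadcast initial variables from rank 0 so all workers start in sync.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```

A script like this is typically launched with one process per GPU, e.g. `horovodrun -np 2 python train.py`, with each worker reading its own shard of the data so the allreduced gradients cover the full dataset.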

Click to view CI Results
GitHub pull request #825 of commit 8661b3e8b88c31e8cfe49a80de804ad06c3ba742, no merge conflicts.
Running as SYSTEM
Setting status of 8661b3e8b88c31e8cfe49a80de804ad06c3ba742 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1686/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 8661b3e8b88c31e8cfe49a80de804ad06c3ba742^{commit} # timeout=10
Checking out Revision 8661b3e8b88c31e8cfe49a80de804ad06c3ba742 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8661b3e8b88c31e8cfe49a80de804ad06c3ba742 # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk af45a77d33ffa860c528ed2bbecb925dfcb573ae # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins14362490877575458389.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+40.g8661b3e8.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.26.5,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.28.5,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-models==0.9.0+40.g8661b3e8,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='2231272228'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-ldmo939i
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-ldmo939i
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit eda153c663aa864da66927c7a0a9d4e64c073120
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (3.19.5)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (0.56.2)
Requirement already satisfied: betterproto=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (2022.5.1)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (4.64.1)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.2 MB/s eta 0:00:00
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (1.3.5)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (1.10.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (21.3)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (7.0.0)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+4.geda153c) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+4.geda153c) (0.4.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (5.4.1)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.3.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (8.1.3)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 69.9 MB/s eta 0:00:00
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.26.12)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.2.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.4.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (5.9.2)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.7.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (6.2)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (3.1.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (0.39.1)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (1.22.4)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+4.geda153c) (59.8.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+4.geda153c) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (2.8.2)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+4.geda153c) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+4.geda153c) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+4.geda153c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+4.geda153c-py3-none-any.whl size=118243 sha256=53a89dc72f63c4fcc92c71a241712011751d56e268c8c3c0d72a3b5086740f48
  Stored in directory: /tmp/pip-ephem-wheel-cache-muelfmjt/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+4.geda153c
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-egn0fwz1
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-egn0fwz1
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit c0636deadefebcf238935267239406bf7cc6803f
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+1.gc0636dea) (1.8.1)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+1.gc0636dea) (0.6.0+1.g5926fcf)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.19.5)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.56.2)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.5)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.5.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.5.1)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.64.1)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.4 MB/s eta 0:00:00
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.3.5)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.10.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (21.3)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (7.0.0)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+1.gc0636dea) (1.22.4)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.4.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (5.4.1)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.3.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (8.1.3)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 86.8 MB/s eta 0:00:00
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.26.12)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.2.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.4.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (5.9.2)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.7.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.2)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.1.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (59.8.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.8.2)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+1.gc0636dea-cp38-cp38-linux_x86_64.whl size=266594 sha256=89a03569b8abe089bd8214e4beaf0de1f108c23af277cd22958832b2d5aa03f0
  Stored in directory: /tmp/pip-ephem-wheel-cache-zvas20g1/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+1.gc0636dea
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 7%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 8%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 12%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 19%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%]
tests/unit/tf/core/test_aggregation.py ......... [ 22%]
tests/unit/tf/core/test_base.py .. [ 22%]
tests/unit/tf/core/test_combinators.py s.................... [ 25%]
tests/unit/tf/core/test_encoder.py .. [ 25%]
tests/unit/tf/core/test_index.py ... [ 25%]
tests/unit/tf/core/test_prediction.py .. [ 26%]
tests/unit/tf/core/test_tabular.py ...... [ 26%]
tests/unit/tf/examples/test_01_getting_started.py . [ 27%]
tests/unit/tf/examples/test_02_dataschema.py . [ 27%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%]
tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py F [ 28%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%]
tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%]
tests/unit/tf/horovod/test_horovod.py .. [ 28%]
tests/unit/tf/inputs/test_continuous.py ....... [ 29%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%]
tests/unit/tf/inputs/test_tabular.py .................. [ 36%]
tests/unit/tf/layers/test_queue.py .............. [ 38%]
tests/unit/tf/losses/test_losses.py ....................... [ 40%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%]
tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%]
tests/unit/tf/models/test_base.py s........................ [ 47%]
tests/unit/tf/models/test_benchmark.py .. [ 47%]
tests/unit/tf/models/test_ranking.py .................................. [ 51%]
tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%]
tests/unit/tf/outputs/test_base.py ...... [ 61%]
tests/unit/tf/outputs/test_classification.py ...... [ 62%]
tests/unit/tf/outputs/test_contrastive.py .............. [ 64%]
tests/unit/tf/outputs/test_regression.py .. [ 64%]
tests/unit/tf/outputs/test_sampling.py .... [ 64%]
tests/unit/tf/outputs/test_topk.py . [ 64%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%]
tests/unit/tf/transformers/test_block.py ..................... [ 71%]
tests/unit/tf/transformers/test_transforms.py .......... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 73%]
tests/unit/tf/transforms/test_features.py s..FF......................... [ 76%] ....................s...... [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_sequence.py .................... [ 84%]
tests/unit/tf/transforms/test_tensor.py ... [ 84%]
tests/unit/tf/utils/test_batch.py .... [ 84%]
tests/unit/tf/utils/test_dataset.py .. [ 85%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 85%]
tests/unit/torch/test_dataset.py ......... [ 86%]
tests/unit/torch/test_public_api.py . [ 86%]
tests/unit/torch/block/test_base.py .... [ 87%]
tests/unit/torch/block/test_mlp.py . [ 87%]
tests/unit/torch/features/test_continuous.py .. [ 87%]
tests/unit/torch/features/test_embedding.py .............. [ 89%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 91%]
tests/unit/torch/model/test_model.py .. [ 91%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 92%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_______________ test_usecase_incremental_training_layer_freezing _______________

tb = <testbook.client.TestbookNotebookClient object at 0x7fc7b055d550>

@testbook(
    REPO_ROOT / p,
    timeout=180,
    execute=False,
)
def test_usecase_incremental_training_layer_freezing(tb):
    tb.inject(
        """
        import os
        os.environ["NUM_ROWS"] = "1000"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py:22:


/usr/local/lib/python3.8/dist-packages/testbook/client.py:147: in execute
    super().execute_cell(cell, index)
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:85: in wrapped
    return just_run(coro(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:60: in just_run
    return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
    return future.result()
/usr/local/lib/python3.8/dist-packages/nbclient/client.py:1025: in async_execute_cell
    await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7fc7b055d550>
cell = {'cell_type': 'code', 'execution_count': 8, 'id': '791e06ec-c0cb-4c0f-9e41-7e5c8fa1dc4e', 'metadata': {'execution': {'...: 'model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))\nmodel.fit(day_1, batch_size=1024, epochs=1)'}
cell_index = 13
exec_reply = {'buffers': [], 'content': {'ename': 'ResourceExhaustedError', 'engine_info': {'engine_id': -1, 'engine_uuid': '4210d4...e, 'engine': '4210d415-e024-49ed-9bdb-88610d749b01', 'started': '2022-11-01T17:44:44.051058Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(
        self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
    )
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E       nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E       ------------------
E       model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
E       model.fit(day_1, batch_size=1024, epochs=1)
E       ------------------
E
E       ---------------------------------------------------------------------------
E       ResourceExhaustedError                    Traceback (most recent call last)
E       Cell In [8], line 2
E             1 model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
E       ----> 2 model.fit(day_1, batch_size=1024, epochs=1)
E
E       File ~/workspace/merlin_models/models/merlin/models/tf/models/base.py:899, in BaseModel.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, train_metrics_steps, pre, **kwargs)
E           896 self._reset_compile_cache()
E           897 self.train_pre = pre
E       --> 899 out = super().fit(**fit_kwargs)
E           901 if pre:
E           902     del self.train_pre
E
E       File /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
E            65 except Exception as e:  # pylint: disable=broad-except
E            66     filtered_tb = _process_traceback_frames(e.__traceback__)
E       ---> 67     raise e.with_traceback(filtered_tb) from None
E            68 finally:
E            69     del filtered_tb
E
E       File /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
E            52 try:
E            53     ctx.ensure_initialized()
E       ---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
E            55                                         inputs, attrs, num_outputs)
E            56 except core._NotOkStatusException as e:
E            57     if name is not None:
E
E       ResourceExhaustedError: Graph execution error:
E
E       Detected at node 'DistributedAdam/DistributedAdam/update_19/mul_1' defined at (most recent call last):
E         File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E           return _run_code(code, main_globals, None,
E         File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E           exec(code, run_globals)
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in <module>
E           app.launch_new_instance()
E         File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance
E           app.start()
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start
E           self.io_loop.start()
E         File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start
E           self.asyncio_loop.run_forever()
E         File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
E           self._run_once()
E         File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
E           handle._run()
E         File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
E           self._context.run(self._callback, *self._args)
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
E           await self.process_one()
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one
E           await dispatch(*args)
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
E           await result
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request
E           reply_content = await reply_content
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute
E           res = shell.run_cell(
E         File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell
E           return super().run_cell(*args, **kwargs)
E         File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell
E           result = self._run_cell(
E         File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell
E           return runner(coro)
E         File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
E           coro.send(None)
E         File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async
E           has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
E         File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes
E           if await self.run_code(code, result, async_=asy):
E         File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code
E           exec(code_obj, self.user_global_ns, self.user_ns)
E         File "/tmp/ipykernel_29346/2071866865.py", line 2, in <module>
E           model.fit(day_1, batch_size=1024, epochs=1)
E         File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 899, in fit
E           out = super().fit(**fit_kwargs)
E         File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
E           return fn(*args, **kwargs)
E         File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
E           tmp_logs = self.train_function(iterator)
E         File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
E           return step_function(self, iterator)
E         File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
E           outputs = model.distribute_strategy.run(run_step, args=(data,))
E         File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
E           outputs = model.train_step(data)
E         File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step
E           self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E         File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
E           return self.apply_gradients(grads_and_vars, name=name)
E         File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 205, in apply_gradients
E           results = super(self.__class__, self).apply_gradients(*args, **kwargs)
E         File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients
E           return tf.__internal__.distribute.interim.maybe_merge_call(
E         File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply
E           update_op = distribution.extended.update(
E         File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var
E           return self._resource_apply_sparse_duplicate_indices(
E         File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices
E           return self._resource_apply_sparse(summed_grad, handle, unique_indices,
E         File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 206, in _resource_apply_sparse
E           m_t = tf.compat.v1.assign(m, m * coefficients['beta_1_t'],
E       Node: 'DistributedAdam/DistributedAdam/update_19/mul_1'
E
E       2 root error(s) found.
E         (0) RESOURCE_EXHAUSTED: failed to allocate memory
E            [[{{node DistributedAdam/DistributedAdam/update_19/mul_1}}]]
E       Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E            [[StatefulPartitionedCall/cond/pivot_t/_562/_53]]
E       Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E         (1) RESOURCE_EXHAUSTED: failed to allocate memory
E            [[{{node DistributedAdam/DistributedAdam/update_19/mul_1}}]]
E       Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
E
E       0 successful operations.
E       0 derived errors ignored. [Op:__inference_train_function_4965]

/usr/local/lib/python3.8/dist-packages/nbclient/client.py:919: CellExecutionError
----------------------------- Captured stderr call -----------------------------
2022-11-01 17:44:37.083175: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-01 17:44:41.241015: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 0
2022-11-01 17:44:41.241126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1627 MB memory: -> device: 0, name: Tesla P100-DGXS-16GB, pci bus id: 0000:07:00.0, compute capability: 6.0
2022-11-01 17:44:41.242044: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 1
2022-11-01 17:44:41.242107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13875 MB memory: -> device: 1, name: Tesla P100-DGXS-16GB, pci bus id: 0000:08:00.0, compute capability: 6.0
2022-11-01 17:44:41.242730: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 2
2022-11-01 17:44:41.242782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 13875 MB memory: -> device: 2, name: Tesla P100-DGXS-16GB, pci bus id: 0000:0e:00.0, compute capability: 6.0
2022-11-01 17:44:41.243423: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 3
2022-11-01 17:44:41.243473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 13875 MB memory: -> device: 3, name: Tesla P100-DGXS-16GB, pci bus id: 0000:0f:00.0, compute capability: 6.0
2022-11-01 17:45:01.689006: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY)
Reported by CUDA: Free memory/Total memory: 608763904/17069309952
2022-11-01 17:45:01.689084: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats:
Limit: 1706033152
InUse: 4171994984
MaxInUse: 4172055704
NumAllocs: 275
MaxAllocSize: 1083564064
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0

2022-11-01 17:45:01.689110: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...;
2022-11-01 17:45:01.689123: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4
2022-11-01 17:45:01.689132: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 38
2022-11-01 17:45:01.689141: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 7
2022-11-01 17:45:01.689150: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2
2022-11-01 17:45:01.689159: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 6
2022-11-01 17:45:01.689168: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 9
2022-11-01 17:45:01.689177: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 5
2022-11-01 17:45:01.689185: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7
2022-11-01 17:45:01.689194: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5
2022-11-01 17:45:01.689203: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 5
2022-11-01 17:45:01.689212: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 512, 6
2022-11-01 17:45:01.689220: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1
2022-11-01 17:45:01.689257: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 5
2022-11-01 17:45:01.689269: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 32768, 6
2022-11-01 17:45:01.689278: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 122880, 3
2022-11-01 17:45:01.689287: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 131072, 3
2022-11-01 17:45:01.689295: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 3
2022-11-01 17:45:01.689304: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 3
2022-11-01 17:45:01.689313: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 3
2022-11-01 17:45:01.689322: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 4
2022-11-01 17:45:01.689331: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3
2022-11-01 17:45:01.689340: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 4
2022-11-01 17:45:01.689349: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 3
2022-11-01 17:45:01.689357: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 4
2022-11-01 17:45:01.689366: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3
2022-11-01 17:45:01.689375: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 3
2022-11-01 17:45:01.689406: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory
2022-11-01 17:45:01.711751: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY)
Reported by CUDA: Free memory/Total memory: 541655040/17069309952
2022-11-01 17:45:01.711817: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats:
Limit: 1706033152
InUse: 4247847720
MaxInUse: 4247847720
NumAllocs: 279
MaxAllocSize: 1083564064
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0

2022-11-01 17:45:01.711846: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...; 2022-11-01 17:45:01.711860: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4 2022-11-01 17:45:01.711872: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 38 2022-11-01 17:45:01.711883: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 7 2022-11-01 17:45:01.711894: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2 2022-11-01 17:45:01.711904: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 6 2022-11-01 17:45:01.711915: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 10 2022-11-01 17:45:01.711926: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 5 2022-11-01 17:45:01.711937: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7 2022-11-01 17:45:01.711960: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5 2022-11-01 17:45:01.711971: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 5 2022-11-01 17:45:01.711983: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 512, 6 2022-11-01 17:45:01.711994: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1 2022-11-01 17:45:01.712005: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 5 2022-11-01 17:45:01.712016: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 32768, 6 2022-11-01 17:45:01.712068: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 122880, 3 2022-11-01 17:45:01.712082: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 131072, 3 2022-11-01 17:45:01.712093: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 4 2022-11-01 17:45:01.712104: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 3 2022-11-01 17:45:01.712115: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 3 2022-11-01 17:45:01.712127: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 4 2022-11-01 17:45:01.712138: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3 2022-11-01 17:45:01.712149: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 5 2022-11-01 17:45:01.712160: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 3 2022-11-01 17:45:01.712171: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 5 2022-11-01 17:45:01.712182: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3 2022-11-01 17:45:01.712194: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 3 2022-11-01 17:45:01.712212: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/lib/python3.8/logging/init.py", line 2127, in shutdown h.close() File "/usr/local/lib/python3.8/dist-packages/absl/logging/init.py", line 934, in close self.stream.close() File "/usr/local/lib/python3.8/dist-packages/ipykernel/iostream.py", line 438, in close 
self.watch_fd_thread.join()
AttributeError: 'OutStream' object has no attribute 'watch_fd_thread'
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fc635970d60> run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs) merlin/models/tf/models/base.py:899: in fit out = super().fit(**fit_kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit tmp_logs = self.train_function(iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function return step_function(self, iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper return func(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step outputs = model.train_step(data) merlin/models/tf/models/base.py:716: in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize grads_and_vars = self._compute_gradients( /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:115: in _compute_gradients allreduced_grads = self._allreduce(grads, weights) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:185: in _allreduce return __filtered_reduce_grads(grads, vars) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:161: in __filtered_reduce_grads rg = self._allreduce_grads(rg, rv) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call results = self._stateful_fn(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in call return graph_function._call_flat( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fc76636df10> ctx = <tensorflow.python.eager.context.Context object at 0x7fca095a1d90> args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-1.64157391e-04, -1.87196449e-04, 2.94752477e-04, ..., ... 6, 20, 4, 30], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
    raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fc706cbb100> run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs) merlin/models/tf/models/base.py:899: in fit out = super().fit(**fit_kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit tmp_logs = self.train_function(iterator) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call self._initialize(args, kwds, add_initializers_to=initializers) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected graph_function, _ = self.maybe_define_function(args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function graph_function = self.create_graph_function(args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function func_graph_module.func_graph_from_py_func( /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn out = weak_wrapped_fn().wrapped(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler raise e.ag_error_metadata.to_exception(e) /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler return autograph.converted_call( /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call result = converted_f(*effective_args, **kwargs) /tmp/autograph_generated_filevzstcyrh.py:15: in tf__train_function retval = ag.converted_call(ag.ld(step_function), (ag.ld(self), ag.ld(iterator)), None, fscope) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call return _call_unconverted(f, args, kwargs, options) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted return f(*args) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper return converted_call(f, args, kwargs, 
options=options) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call return _call_unconverted(f, args, kwargs, options) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted return f(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step outputs = model.train_step(data) merlin/models/tf/models/base.py:716: in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize grads_and_vars = self._compute_gradients( /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:115: in _compute_gradients allreduced_grads = self._allreduce(grads, weights) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:185: in _allreduce return __filtered_reduce_grads(grads, vars) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:161: in __filtered_reduce_grads rg = self._allreduce_grads(rg, rv) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call results = self._stateful_fn(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in call return graph_function._call_flat( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fc76636df10> ctx = <tensorflow.python.eager.context.Context object at 0x7fca095a1d90> args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
    raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: in user code: E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function * E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function ** E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica E return self._call_for_each_replica(fn, args, kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step ** E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize E grads_and_vars = self._compute_gradients( E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 115, in _compute_gradients E allreduced_grads = self._allreduce(grads, weights) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 185, in _allreduce E return __filtered_reduce_grads(grads, vars) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 161, in __filtered_reduce_grads E rg = self._allreduce_grads(rg, rv) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in call E result = self._call(*args, **kwds) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call E results = self._stateful_fn(*args, **kwds) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in call E return graph_function._call_flat( E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat E flat_outputs = forward_function.call(ctx, args_with_tangents) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call E raise ValueError( E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_fileoh1tcduq.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/distributed/node.py:177: UserWarning: Port 8787 is already in use. Perhaps you already have a cluster running? Hosting the HTTP server on port 36745 instead warnings.warn(

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
  /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
    mask = pd.Series(mask)

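The cudf/pandas deprecation above is about empty Series defaulting to float64; passing an explicit dtype is the documented fix. A one-line sketch (bool is chosen here only because the cudf call site builds a mask):

import pandas as pd

pd.Series([])                  # warns: default dtype for empty Series will change
pd.Series([], dtype="bool")    # explicit dtype, no warning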
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                                                              Stmts   Miss  Cover

merlin/datasets/__init__.py 2 0 100%
merlin/datasets/advertising/__init__.py 2 0 100%
merlin/datasets/advertising/criteo/__init__.py 0 0 100%
merlin/datasets/advertising/criteo/dataset.py 79 49 38%
merlin/datasets/advertising/criteo/transformed/__init__.py 0 0 100%
merlin/datasets/ecommerce/__init__.py 3 0 100%
merlin/datasets/ecommerce/aliccp/__init__.py 0 0 100%
merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38%
merlin/datasets/ecommerce/aliccp/raw/__init__.py 0 0 100%
merlin/datasets/ecommerce/aliccp/transformed/__init__.py 0 0 100%
merlin/datasets/ecommerce/dressipi/__init__.py 0 0 100%
merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18%
merlin/datasets/ecommerce/dressipi/preprocessed/__init__.py 0 0 100%
merlin/datasets/ecommerce/large/__init__.py 0 0 100%
merlin/datasets/ecommerce/small/__init__.py 0 0 100%
merlin/datasets/ecommerce/transactions/__init__.py 0 0 100%
merlin/datasets/entertainment/__init__.py 2 0 100%
merlin/datasets/entertainment/movielens/1m-raw/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/1m-raw/ratings/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/1m/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/25m/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/100k/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/dataset.py 152 122 20%
merlin/datasets/entertainment/music_streaming/__init__.py 0 0 100%
merlin/datasets/social/__init__.py 0 0 100%
merlin/datasets/synthetic.py 147 15 90%
merlin/datasets/testing/__init__.py 0 0 100%
merlin/datasets/testing/sequence_testing/__init__.py 0 0 100%
merlin/models/__init__.py 2 0 100%
merlin/models/_version.py 354 205 42%
merlin/models/api.py 14 5 64%
merlin/models/config/__init__.py 0 0 100%
merlin/models/config/schema.py 62 0 100%
merlin/models/implicit/__init__.py 27 4 85%
merlin/models/io.py 15 0 100%
merlin/models/lightfm/__init__.py 23 0 100%
merlin/models/loader/__init__.py 0 0 100%
merlin/models/loader/backend.py 379 41 89%
merlin/models/loader/dataframe_iter.py 21 17 19%
merlin/models/loader/tf_utils.py 57 27 53%
merlin/models/loader/utils.py 40 15 62%
merlin/models/tf/__init__.py 70 0 100%
merlin/models/tf/blocks/__init__.py 0 0 100%
merlin/models/tf/blocks/cross.py 44 0 100%
merlin/models/tf/blocks/dlrm.py 49 2 96%
merlin/models/tf/blocks/experts.py 99 17 83%
merlin/models/tf/blocks/interaction.py 108 40 63%
merlin/models/tf/blocks/mlp.py 114 10 91%
merlin/models/tf/blocks/optimizer.py 173 11 94%
merlin/models/tf/blocks/retrieval/__init__.py 0 0 100%
merlin/models/tf/blocks/retrieval/base.py 169 62 63%
merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97%
merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100%
merlin/models/tf/blocks/sampling/__init__.py 0 0 100%
merlin/models/tf/blocks/sampling/base.py 29 2 93%
merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96%
merlin/models/tf/blocks/sampling/in_batch.py 35 0 100%
merlin/models/tf/blocks/sampling/queue.py 115 12 90%
merlin/models/tf/core/__init__.py 0 0 100%
merlin/models/tf/core/aggregation.py 245 49 80%
merlin/models/tf/core/base.py 244 55 77%
merlin/models/tf/core/combinators.py 421 54 87%
merlin/models/tf/core/encoder.py 170 29 83%
merlin/models/tf/core/index.py 104 16 85%
merlin/models/tf/core/prediction.py 50 1 98%
merlin/models/tf/core/tabular.py 288 30 90%
merlin/models/tf/distributed/__init__.py 0 0 100%
merlin/models/tf/distributed/backend.py 9 2 78%
merlin/models/tf/inputs/__init__.py 0 0 100%
merlin/models/tf/inputs/base.py 60 10 83%
merlin/models/tf/inputs/continuous.py 39 3 92%
merlin/models/tf/inputs/embedding.py 452 31 93%
merlin/models/tf/loader.py 266 94 65%
merlin/models/tf/losses/__init__.py 4 0 100%
merlin/models/tf/losses/base.py 9 0 100%
merlin/models/tf/losses/listwise.py 13 0 100%
merlin/models/tf/losses/pairwise.py 115 1 99%
merlin/models/tf/metrics/__init__.py 2 0 100%
merlin/models/tf/metrics/evaluation.py 105 48 54%
merlin/models/tf/metrics/topk.py 198 48 76%
merlin/models/tf/models/__init__.py 0 0 100%
merlin/models/tf/models/base.py 735 78 89%
merlin/models/tf/models/benchmark.py 16 0 100%
merlin/models/tf/models/ranking.py 67 3 96%
merlin/models/tf/models/retrieval.py 78 4 95%
merlin/models/tf/models/utils.py 10 1 90%
merlin/models/tf/outputs/__init__.py 0 0 100%
merlin/models/tf/outputs/base.py 122 17 86%
merlin/models/tf/outputs/classification.py 91 1 99%
merlin/models/tf/outputs/contrastive.py 147 10 93%
merlin/models/tf/outputs/regression.py 9 0 100%
merlin/models/tf/outputs/sampling/__init__.py 0 0 100%
merlin/models/tf/outputs/sampling/base.py 78 21 73%
merlin/models/tf/outputs/sampling/in_batch.py 37 1 97%
merlin/models/tf/outputs/sampling/popularity.py 27 1 96%
merlin/models/tf/outputs/topk.py 97 6 94%
merlin/models/tf/prediction_tasks/__init__.py 0 0 100%
merlin/models/tf/prediction_tasks/base.py 207 37 82%
merlin/models/tf/prediction_tasks/classification.py 68 17 75%
merlin/models/tf/prediction_tasks/multi.py 7 0 100%
merlin/models/tf/prediction_tasks/next_item.py 59 6 90%
merlin/models/tf/prediction_tasks/regression.py 35 2 94%
merlin/models/tf/prediction_tasks/retrieval.py 73 3 96%
merlin/models/tf/transformers/__init__.py 0 0 100%
merlin/models/tf/transformers/block.py 101 2 98%
merlin/models/tf/transformers/transforms.py 63 0 100%
merlin/models/tf/transforms/__init__.py 0 0 100%
merlin/models/tf/transforms/bias.py 111 9 92%
merlin/models/tf/transforms/features.py 422 34 92%
merlin/models/tf/transforms/negative_sampling.py 77 3 96%
merlin/models/tf/transforms/noise.py 43 1 98%
merlin/models/tf/transforms/regularization.py 17 1 94%
merlin/models/tf/transforms/sequence.py 282 42 85%
merlin/models/tf/transforms/tensor.py 107 2 98%
merlin/models/tf/typing.py 7 0 100%
merlin/models/tf/utils/__init__.py 0 0 100%
merlin/models/tf/utils/batch_utils.py 85 4 95%
merlin/models/tf/utils/repr_utils.py 69 4 94%
merlin/models/tf/utils/search_utils.py 34 22 35%
merlin/models/tf/utils/testing_utils.py 206 36 83%
merlin/models/tf/utils/tf_utils.py 209 42 80%
merlin/models/torch/__init__.py 12 0 100%
merlin/models/torch/block/__init__.py 0 0 100%
merlin/models/torch/block/base.py 167 32 81%
merlin/models/torch/block/mlp.py 38 5 87%
merlin/models/torch/dataset.py 68 5 93%
merlin/models/torch/features/__init__.py 0 0 100%
merlin/models/torch/features/base.py 4 0 100%
merlin/models/torch/features/continuous.py 22 0 100%
merlin/models/torch/features/embedding.py 165 12 93%
merlin/models/torch/features/tabular.py 65 8 88%
merlin/models/torch/losses.py 28 28 0%
merlin/models/torch/model/__init__.py 0 0 100%
merlin/models/torch/model/base.py 286 65 77%
merlin/models/torch/model/prediction_task.py 24 0 100%
merlin/models/torch/tabular/__init__.py 0 0 100%
merlin/models/torch/tabular/aggregation.py 75 0 100%
merlin/models/torch/tabular/base.py 247 39 84%
merlin/models/torch/tabular/transformations.py 67 3 96%
merlin/models/torch/typing.py 6 0 100%
merlin/models/torch/utils/__init__.py 0 0 100%
merlin/models/torch/utils/data_utils.py 117 117 0%
merlin/models/torch/utils/examples_utils.py 39 39 0%
merlin/models/torch/utils/torch_utils.py 80 22 72%
merlin/models/utils/__init__.py 0 0 100%
merlin/models/utils/constants.py 3 0 100%
merlin/models/utils/dataset.py 38 4 89%
merlin/models/utils/dependencies.py 26 19 27%
merlin/models/utils/doc_utils.py 10 0 100%
merlin/models/utils/example_utils.py 31 2 94%
merlin/models/utils/misc_utils.py 118 90 24%
merlin/models/utils/nvt_utils.py 27 24 11%
merlin/models/utils/registry.py 101 29 71%
merlin/models/utils/schema_utils.py 90 19 79%
merlin/models/xgb/__init__.py 124 4 97%

TOTAL 11351 2230 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 3 failed, 817 passed, 12 skipped, 1377 warnings in 2775.94s (0:46:15) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR:   test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins239630927080628428.sh
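The SKIPPED entries above are data-dependent tests that only run when the relevant environment variable points at a local dataset. A hypothetical sketch of that skip pattern (names invented; the real tests live under tests/unit/datasets/):

import os
import pytest

DATA_DIR = os.environ.get("INPUT_DATA_DIR")   # e.g. export INPUT_DATA_DIR=/path/to/data

@pytest.mark.skipif(
    DATA_DIR is None,
    reason="No data-dir available, pass it through env variable $INPUT_DATA_DIR",
)
def test_needs_real_dataset():
    # Only reached when the environment variable is set.
    assert os.path.isdir(DATA_DIR)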

nvidia-merlin-bot avatar Nov 01 '22 18:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit e01adeb913f553e16ccb809d290760183fe1df17, no merge conflicts.
Running as SYSTEM
Setting status of e01adeb913f553e16ccb809d290760183fe1df17 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1687/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse e01adeb913f553e16ccb809d290760183fe1df17^{commit} # timeout=10
Checking out Revision e01adeb913f553e16ccb809d290760183fe1df17 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e01adeb913f553e16ccb809d290760183fe1df17 # timeout=10
Commit message: "use tensorflow.python.keras instead of keras"
 > git rev-list --no-walk 8661b3e8b88c31e8cfe49a80de804ad06c3ba742 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins9399375147106608195.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+41.ge01adeb9.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.26.5,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.28.5,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-models==0.9.0+41.ge01adeb9,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='377913715'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-8fxdjee8
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-8fxdjee8
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit eda153c663aa864da66927c7a0a9d4e64c073120
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (1.10.0)
Requirement already satisfied: betterproto=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (2022.5.1)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (4.64.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (21.3)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (1.3.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.9 MB/s eta 0:00:00
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (2022.5.0)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (3.19.5)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (7.0.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+4.geda153c) (0.56.2)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+4.geda153c) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+4.geda153c) (1.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (5.4.1)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (8.1.3)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.3.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (5.9.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.26.12)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (3.1.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.2.0)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 78.1 MB/s eta 0:00:00
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.4.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (6.2)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+4.geda153c) (59.8.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (1.22.4)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+4.geda153c) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+4.geda153c) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (2022.2.1)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+4.geda153c) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+4.geda153c) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+4.geda153c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+4.geda153c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+4.geda153c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+4.geda153c) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+4.geda153c-py3-none-any.whl size=118243 sha256=4f83068e6af57b04430e85518ea9f7261db581940056d18b7a012d3cc1a57a71
  Stored in directory: /tmp/pip-ephem-wheel-cache-n3bpdsxo/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+4.geda153c
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-tu0jmewi
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-tu0jmewi
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit c0636deadefebcf238935267239406bf7cc6803f
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+1.gc0636dea) (1.8.1)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+1.gc0636dea) (0.6.0+1.g5926fcf)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.10.0)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.5)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.5.1)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.64.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (21.3)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.3.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 5.9 MB/s eta 0:00:00
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.5.0)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.19.5)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (7.0.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.56.2)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+1.gc0636dea) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (5.4.1)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (8.1.3)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.3.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (5.9.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.26.12)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.1.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.2.0)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 28.6 MB/s eta 0:00:00
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.4.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.2)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2022.2.1)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+1.gc0636dea) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+1.gc0636dea-cp38-cp38-linux_x86_64.whl size=266594 sha256=3b6f6de933af9f3d62371fdf261875287e9ef5024b63b82a696433741a65a10d
  Stored in directory: /tmp/pip-ephem-wheel-cache-rj15ncph/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+1.gc0636dea
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s.................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 27%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py F [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py .. [ 28%] tests/unit/tf/inputs/test_continuous.py ....... [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 40%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 73%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 76%] ....................s...... [ 79%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_______________ test_usecase_incremental_training_layer_freezing _______________

tb = <testbook.client.TestbookNotebookClient object at 0x7fde7a7b9b20>

@testbook(
    REPO_ROOT / p,
    timeout=180,
    execute=False,
)
def test_usecase_incremental_training_layer_freezing(tb):
    tb.inject(
        """
        import os
        os.environ["NUM_ROWS"] = "1000"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py:22:


/usr/local/lib/python3.8/dist-packages/testbook/client.py:147: in execute
    super().execute_cell(cell, index)
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:85: in wrapped
    return just_run(coro(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/nbclient/util.py:60: in just_run
    return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
    return future.result()
/usr/local/lib/python3.8/dist-packages/nbclient/client.py:1025: in async_execute_cell
    await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7fde7a7b9b20>
cell = {'cell_type': 'code', 'execution_count': 8, 'id': '791e06ec-c0cb-4c0f-9e41-7e5c8fa1dc4e', 'metadata': {'execution': {'...: 'model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))\nmodel.fit(day_1, batch_size=1024, epochs=1)'}
cell_index = 13
exec_reply = {'buffers': [], 'content': {'ename': 'ResourceExhaustedError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'cfce66...e, 'engine': 'cfce66bb-c9be-48e1-bd4b-2bd1c1de4097', 'started': '2022-11-01T19:17:43.669227Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(
        self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
    )
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E           nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E           ------------------
E           model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
E           model.fit(day_1, batch_size=1024, epochs=1)
E           ------------------
E           
E           ---------------------------------------------------------------------------
E           ResourceExhaustedError                    Traceback (most recent call last)
E           Cell In [8], line 2
E                 1 model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
E           ----> 2 model.fit(day_1, batch_size=1024, epochs=1)
E           
E           File ~/workspace/merlin_models/models/merlin/models/tf/models/base.py:899, in BaseModel.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, train_metrics_steps, pre, **kwargs)
E               896 self._reset_compile_cache()
E               897 self.train_pre = pre
E           --> 899 out = super().fit(**fit_kwargs)
E               901 if pre:
E               902     del self.train_pre
E           
E           File /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
E                65 except Exception as e:  # pylint: disable=broad-except
E                66     filtered_tb = _process_traceback_frames(e.__traceback__)
E           ---> 67     raise e.with_traceback(filtered_tb) from None
E                68 finally:
E                69     del filtered_tb
E           
E           File /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
E                52 try:
E                53     ctx.ensure_initialized()
E           ---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
E                55                                         inputs, attrs, num_outputs)
E                56 except core._NotOkStatusException as e:
E                57     if name is not None:
E           
E           ResourceExhaustedError: Graph execution error:
E           
E Detected at node 'DistributedAdam/DistributedAdam/update_19/mul_4' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in E app.launch_new_instance() E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance E app.start() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start E self.io_loop.start() E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start E self.asyncio_loop.run_forever() E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever E self._run_once() E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once E handle._run() E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run E self._context.run(self._callback, *self._args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue E await self.process_one() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one E await dispatch(*args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell E await result E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request E reply_content = await reply_content E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute E res = shell.run_cell( E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell E return super().run_cell(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell E result = self._run_cell( E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell E return runner(coro) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner E coro.send(None) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async E has_raised = await self.run_ast_nodes(code_ast.body, cell_name, E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes E if await self.run_code(code, result, async=asy): E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code E exec(code_obj, self.user_global_ns, self.user_ns) E File "/tmp/ipykernel_4131/2071866865.py", line 2, in E model.fit(day_1, batch_size=1024, epochs=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 899, in fit E out = super().fit(**fit_kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit E tmp_logs = self.train_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) 
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize E return self.apply_gradients(grads_and_vars, name=name) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 205, in apply_gradients E results = super(self.class, self).apply_gradients(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients E return tf.internal.distribute.interim.maybe_merge_call( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply E update_op = distribution.extended.update( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var E return self._resource_apply_sparse_duplicate_indices( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices E return self._resource_apply_sparse(summed_grad, handle, unique_indices, E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 214, in _resource_apply_sparse E v_t = tf.compat.v1.assign(v, v * coefficients['beta_2_t'], E Node: 'DistributedAdam/DistributedAdam/update_19/mul_4' E Detected at node 'DistributedAdam/DistributedAdam/update_19/mul_4' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in E app.launch_new_instance() E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance E app.start() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start E self.io_loop.start() E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start E self.asyncio_loop.run_forever() E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever E self._run_once() E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once E handle._run() E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run E self._context.run(self._callback, *self._args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue E await self.process_one() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one E await dispatch(*args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell E await result E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request E reply_content = await reply_content E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute E res = shell.run_cell( E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell E return super().run_cell(*args, **kwargs) E File 
"/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell E result = self._run_cell( E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell E return runner(coro) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner E coro.send(None) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async E has_raised = await self.run_ast_nodes(code_ast.body, cell_name, E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes E if await self.run_code(code, result, async=asy): E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code E exec(code_obj, self.user_global_ns, self.user_ns) E File "/tmp/ipykernel_4131/2071866865.py", line 2, in E model.fit(day_1, batch_size=1024, epochs=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 899, in fit E out = super().fit(**fit_kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit E tmp_logs = self.train_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize E return self.apply_gradients(grads_and_vars, name=name) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 205, in apply_gradients E results = super(self.class, self).apply_gradients(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients E return tf.internal.distribute.interim.maybe_merge_call( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply E update_op = distribution.extended.update( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var E return self._resource_apply_sparse_duplicate_indices( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices E return self._resource_apply_sparse(summed_grad, handle, unique_indices, E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 214, in _resource_apply_sparse E v_t = tf.compat.v1.assign(v, v * coefficients['beta_2_t'], E Node: 'DistributedAdam/DistributedAdam/update_19/mul_4' E 2 root error(s) found. 
E (0) RESOURCE_EXHAUSTED: failed to allocate memory E [[{{node DistributedAdam/DistributedAdam/update_19/mul_4}}]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E [[StatefulPartitionedCall/cond/pivot_t/_562/_53]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E (1) RESOURCE_EXHAUSTED: failed to allocate memory E [[{{node DistributedAdam/DistributedAdam/update_19/mul_4}}]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E 0 successful operations. E 0 derived errors ignored. [Op:__inference_train_function_4965] E ResourceExhaustedError: Graph execution error: E
E Detected at node 'DistributedAdam/DistributedAdam/update_19/mul_4' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in E app.launch_new_instance() E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance E app.start() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start E self.io_loop.start() E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start E self.asyncio_loop.run_forever() E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever E self._run_once() E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once E handle._run() E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run E self._context.run(self._callback, *self._args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue E await self.process_one() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one E await dispatch(*args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell E await result E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request E reply_content = await reply_content E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute E res = shell.run_cell( E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell E return super().run_cell(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell E result = self._run_cell( E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell E return runner(coro) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner E coro.send(None) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async E has_raised = await self.run_ast_nodes(code_ast.body, cell_name, E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes E if await self.run_code(code, result, async=asy): E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code E exec(code_obj, self.user_global_ns, self.user_ns) E File "/tmp/ipykernel_4131/2071866865.py", line 2, in E model.fit(day_1, batch_size=1024, epochs=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 899, in fit E out = super().fit(**fit_kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit E tmp_logs = self.train_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) 
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize E return self.apply_gradients(grads_and_vars, name=name) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 205, in apply_gradients E results = super(self.class, self).apply_gradients(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients E return tf.internal.distribute.interim.maybe_merge_call( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply E update_op = distribution.extended.update( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var E return self._resource_apply_sparse_duplicate_indices( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices E return self._resource_apply_sparse(summed_grad, handle, unique_indices, E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 214, in _resource_apply_sparse E v_t = tf.compat.v1.assign(v, v * coefficients['beta_2_t'], E Node: 'DistributedAdam/DistributedAdam/update_19/mul_4' E Detected at node 'DistributedAdam/DistributedAdam/update_19/mul_4' defined at (most recent call last): E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main E return _run_code(code, main_globals, None, E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code E exec(code, run_globals) E File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 17, in E app.launch_new_instance() E File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 978, in launch_instance E app.start() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 712, in start E self.io_loop.start() E File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 215, in start E self.asyncio_loop.run_forever() E File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever E self._run_once() E File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once E handle._run() E File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run E self._context.run(self._callback, *self._args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue E await self.process_one() E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 499, in process_one E await dispatch(*args) E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell E await result E File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request E reply_content = await reply_content E File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute E res = shell.run_cell( E File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell E return super().run_cell(*args, **kwargs) E File 
"/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_cell E result = self._run_cell( E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2940, in _run_cell E return runner(coro) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner E coro.send(None) E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3139, in run_cell_async E has_raised = await self.run_ast_nodes(code_ast.body, cell_name, E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3318, in run_ast_nodes E if await self.run_code(code, result, async=asy): E File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3378, in run_code E exec(code_obj, self.user_global_ns, self.user_ns) E File "/tmp/ipykernel_4131/2071866865.py", line 2, in E model.fit(day_1, batch_size=1024, epochs=1) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 899, in fit E out = super().fit(**fit_kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit E tmp_logs = self.train_function(iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize E return self.apply_gradients(grads_and_vars, name=name) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 205, in apply_gradients E results = super(self.class, self).apply_gradients(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 678, in apply_gradients E return tf.internal.distribute.interim.maybe_merge_call( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 723, in _distributed_apply E update_op = distribution.extended.update( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 701, in apply_grad_to_update_var E return self._resource_apply_sparse_duplicate_indices( E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 1326, in _resource_apply_sparse_duplicate_indices E return self._resource_apply_sparse(summed_grad, handle, unique_indices, E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/adam.py", line 214, in _resource_apply_sparse E v_t = tf.compat.v1.assign(v, v * coefficients['beta_2_t'], E Node: 'DistributedAdam/DistributedAdam/update_19/mul_4' E 2 root error(s) found. 
E (0) RESOURCE_EXHAUSTED: failed to allocate memory E [[{{node DistributedAdam/DistributedAdam/update_19/mul_4}}]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E [[StatefulPartitionedCall/cond/pivot_t/_562/_53]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E (1) RESOURCE_EXHAUSTED: failed to allocate memory E [[{{node DistributedAdam/DistributedAdam/update_19/mul_4}}]] E Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. E
E 0 successful operations. E 0 derived errors ignored. [Op:__inference_train_function_4965]
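The failure above is a GPU out-of-memory error raised from Horovod's `DistributedAdam` update (`RESOURCE_EXHAUSTED: failed to allocate memory`); the captured stderr below shows GPU 0 being created with only ~1.6 GB while GPUs 1-3 get ~14.5 GB, which is consistent with several processes competing for the same device. For reference only, the usual Horovod + TensorFlow setup pins each worker to a single GPU by local rank and enables memory growth; the sketch below shows that standard pattern (it is not the notebook's actual code) using the documented `horovod.tensorflow.keras` and `tf.config` APIs.

```python
# Minimal sketch of the standard Horovod + TensorFlow GPU setup (illustrative,
# not the failing notebook's code): pin one GPU per worker by local rank and
# enable memory growth so TensorFlow does not reserve the whole device upfront.
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()

gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    # Each process only sees the GPU that matches its local rank.
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")
```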

/usr/local/lib/python3.8/dist-packages/nbclient/client.py:919: CellExecutionError ----------------------------- Captured stderr call ----------------------------- 2022-11-01 19:17:36.669997: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-11-01 19:17:40.848535: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 0 2022-11-01 19:17:40.848653: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1627 MB memory: -> device: 0, name: Tesla P100-DGXS-16GB, pci bus id: 0000:07:00.0, compute capability: 6.0 2022-11-01 19:17:40.849321: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 1 2022-11-01 19:17:40.849380: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 14500 MB memory: -> device: 1, name: Tesla P100-DGXS-16GB, pci bus id: 0000:08:00.0, compute capability: 6.0 2022-11-01 19:17:40.849939: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 2 2022-11-01 19:17:40.849991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 14500 MB memory: -> device: 2, name: Tesla P100-DGXS-16GB, pci bus id: 0000:0e:00.0, compute capability: 6.0 2022-11-01 19:17:40.851118: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 3 2022-11-01 19:17:40.851192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 14500 MB memory: -> device: 3, name: Tesla P100-DGXS-16GB, pci bus id: 0000:0f:00.0, compute capability: 6.0 2022-11-01 19:18:02.179434: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 1083564064 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY) Reported by CUDA: Free memory/Total memory: 171507712/17069309952 2022-11-01 19:18:02.179497: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152 InUse: 5343155056 MaxInUse: 5343155056 NumAllocs: 278 MaxAllocSize: 1083564064 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0

2022-11-01 19:18:02.179523: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...; 2022-11-01 19:18:02.179533: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4 2022-11-01 19:18:02.179540: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 38 2022-11-01 19:18:02.179546: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8 2022-11-01 19:18:02.179552: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2 2022-11-01 19:18:02.179559: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 7 2022-11-01 19:18:02.179565: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 9 2022-11-01 19:18:02.179571: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 4 2022-11-01 19:18:02.179577: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7 2022-11-01 19:18:02.179584: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5 2022-11-01 19:18:02.179590: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 3 2022-11-01 19:18:02.179596: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 512, 6 2022-11-01 19:18:02.179603: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1 2022-11-01 19:18:02.179634: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 3 2022-11-01 19:18:02.179642: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 32768, 6 2022-11-01 19:18:02.179649: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 122880, 3 2022-11-01 19:18:02.179655: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 131072, 3 2022-11-01 19:18:02.179661: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 4 2022-11-01 19:18:02.179668: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 3 2022-11-01 19:18:02.179674: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 4 2022-11-01 19:18:02.179681: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5 2022-11-01 19:18:02.179687: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 3 2022-11-01 19:18:02.179693: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 5 2022-11-01 19:18:02.179700: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 3 2022-11-01 19:18:02.179706: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 5 2022-11-01 19:18:02.179712: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3 2022-11-01 19:18:02.179718: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 4 2022-11-01 19:18:02.179745: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory 2022-11-01 19:18:02.191091: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 135407776 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY) Reported by CUDA: Free memory/Total memory: 37289984/17069309952 2022-11-01 19:18:02.191121: E 
tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152 InUse: 5459048048 MaxInUse: 5459048048 NumAllocs: 287 MaxAllocSize: 1083564064 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0

2022-11-01 19:18:02.191138: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...; 2022-11-01 19:18:02.191146: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4 2022-11-01 19:18:02.191152: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 38 2022-11-01 19:18:02.191159: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8 2022-11-01 19:18:02.191165: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2 2022-11-01 19:18:02.191172: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 10 2022-11-01 19:18:02.191178: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 9 2022-11-01 19:18:02.191184: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 4 2022-11-01 19:18:02.191191: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7 2022-11-01 19:18:02.191197: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5 2022-11-01 19:18:02.191204: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 3 2022-11-01 19:18:02.191210: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 512, 6 2022-11-01 19:18:02.191216: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1 2022-11-01 19:18:02.191223: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 3 2022-11-01 19:18:02.191229: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 32768, 6 2022-11-01 19:18:02.191255: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 122880, 3 2022-11-01 19:18:02.191263: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 131072, 3 2022-11-01 19:18:02.191269: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 4 2022-11-01 19:18:02.191276: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 4 2022-11-01 19:18:02.191282: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 5 2022-11-01 19:18:02.191288: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5 2022-11-01 19:18:02.191294: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 5 2022-11-01 19:18:02.191301: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 5 2022-11-01 19:18:02.191307: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 5 2022-11-01 19:18:02.191313: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 5 2022-11-01 19:18:02.191319: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3 2022-11-01 19:18:02.191326: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 4 2022-11-01 19:18:02.191336: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory 2022-11-01 19:18:02.196904: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:288] gpu_async_0 cuMemAllocAsync failed to allocate 135407776 bytes: CUDA error: out of memory (CUDA_ERROR_OUT_OF_MEMORY) Reported by CUDA: Free memory/Total memory: 37289984/17069309952 2022-11-01 19:18:02.196925: E 
tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:293] Stats: Limit: 1706033152 InUse: 5459051888 MaxInUse: 5459051888 NumAllocs: 290 MaxAllocSize: 1083564064 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0

2022-11-01 19:18:02.196941: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:56] Histogram of current allocation: (allocation_size_in_bytes, nb_allocation_of_that_sizes), ...; 2022-11-01 19:18:02.196949: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1, 4 2022-11-01 19:18:02.196956: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4, 38 2022-11-01 19:18:02.196962: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 8, 8 2022-11-01 19:18:02.196968: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 40, 2 2022-11-01 19:18:02.196975: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 128, 10 2022-11-01 19:18:02.196981: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 160, 9 2022-11-01 19:18:02.196988: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 192, 5 2022-11-01 19:18:02.196994: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 256, 7 2022-11-01 19:18:02.197000: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 288, 5 2022-11-01 19:18:02.197007: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 480, 4 2022-11-01 19:18:02.197013: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 512, 6 2022-11-01 19:18:02.197020: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1028, 1 2022-11-01 19:18:02.197026: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 3168, 4 2022-11-01 19:18:02.197032: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 32768, 6 2022-11-01 19:18:02.197039: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 122880, 3 2022-11-01 19:18:02.197057: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 131072, 3 2022-11-01 19:18:02.197065: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 584352, 4 2022-11-01 19:18:02.197072: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 823872, 4 2022-11-01 19:18:02.197078: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 4324736, 5 2022-11-01 19:18:02.197085: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 7426048, 5 2022-11-01 19:18:02.197091: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 15401440, 5 2022-11-01 19:18:02.197097: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 18678720, 5 2022-11-01 19:18:02.197104: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 39970560, 5 2022-11-01 19:18:02.197110: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 56589504, 5 2022-11-01 19:18:02.197117: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 135407776, 3 2022-11-01 19:18:02.197123: E tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:59] 1083564064, 4 2022-11-01 19:18:02.197133: W tensorflow/core/framework/op_kernel.cc:1733] RESOURCE_EXHAUSTED: failed to allocate memory Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/lib/python3.8/logging/init.py", line 2127, in shutdown h.close() File "/usr/local/lib/python3.8/dist-packages/absl/logging/init.py", line 934, in close self.stream.close() File "/usr/local/lib/python3.8/dist-packages/ipykernel/iostream.py", line 438, in close 
    self.watch_fd_thread.join()
AttributeError: 'OutStream' object has no attribute 'watch_fd_thread'
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fdd0aec7fa0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
>   testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs) merlin/models/tf/models/base.py:899: in fit out = super().fit(**fit_kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit tmp_logs = self.train_function(iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function return step_function(self, iterator) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper return func(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step outputs = model.train_step(data) merlin/models/tf/models/base.py:716: in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize grads_and_vars = self._compute_gradients( /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:115: in _compute_gradients allreduced_grads = self._allreduce(grads, weights) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:185: in _allreduce return __filtered_reduce_grads(grads, vars) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:161: in __filtered_reduce_grads rg = self._allreduce_grads(rg, rv) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call results = self._stateful_fn(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in call return graph_function._call_flat( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fde4867bca0>
ctx = <tensorflow.python.eager.context.Context object at 0x7fe0ebfa4d90>
args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[-4.4771965e-05, -9.4172006e-05, -3.4997670e-04, ..., ... 2, 1, 29, 11], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
>   raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7fdcf6f37a60>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
>   testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs) merlin/models/tf/models/base.py:899: in fit out = super().fit(**fit_kwargs) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit tmp_logs = self.train_function(iterator) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call self._initialize(args, kwds, add_initializers_to=initializers) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize self.stateful_fn.get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in get_concrete_function_internal_garbage_collected graph_function, _ = self.maybe_define_function(args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in maybe_define_function graph_function = self.create_graph_function(args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in create_graph_function func_graph_module.func_graph_from_py_func( /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn out = weak_wrapped_fn().wrapped(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler raise e.ag_error_metadata.to_exception(e) /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler return autograph.converted_call( /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call result = converted_f(*effective_args, **kwargs) /tmp/autograph_generated_file5ey4kvv1.py:15: in tf__train_function retval = ag.converted_call(ag.ld(step_function), (ag.ld(self), ag.ld(iterator)), None, fscope) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call return _call_unconverted(f, args, kwargs, options) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted return f(*args) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper return converted_call(f, args, kwargs, 
options=options) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call return _call_unconverted(f, args, kwargs, options) /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted return f(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step outputs = model.train_step(data) merlin/models/tf/models/base.py:716: in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize grads_and_vars = self._compute_gradients( /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:115: in _compute_gradients allreduced_grads = self._allreduce(grads, weights) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:185: in _allreduce return __filtered_reduce_grads(grads, vars) /usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py:161: in __filtered_reduce_grads rg = self._allreduce_grads(rg, rv) /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler return fn(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in call result = self._call(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call results = self._stateful_fn(*args, **kwds) /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in call return graph_function._call_flat( /usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7fde4867bca0>
ctx = <tensorflow.python.eager.context.Context object at 0x7fe0ebfa4d90>
args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
>   raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: in user code: E
E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function * E return step_function(self, iterator) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function ** E outputs = model.distribute_strategy.run(run_step, args=(data,)) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run E return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica E return self._call_for_each_replica(fn, args, kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step ** E outputs = model.train_step(data) E File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step E self.optimizer.minimize(loss, self.trainable_variables, tape=tape) E File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize E grads_and_vars = self._compute_gradients( E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 115, in _compute_gradients E allreduced_grads = self._allreduce(grads, weights) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 185, in _allreduce E return __filtered_reduce_grads(grads, vars) E File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/init.py", line 161, in __filtered_reduce_grads E rg = self._allreduce_grads(rg, rv) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler E return fn(*args, **kwargs) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in call E result = self._call(*args, **kwds) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call E results = self._stateful_fn(*args, **kwds) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in call E return graph_function._call_flat( E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat E flat_outputs = forward_function.call(ctx, args_with_tangents) E File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call E raise ValueError( E
E ValueError: Signature specifies 96 arguments, got: 100.
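Both parametrizations of `test_categorical_encoding_in_model` fail with the same mismatch: the traceback ends inside Horovod's `_allreduce_grads`, where a previously traced `tf.function` whose concrete signature expects 96 tensors is called with 100, so the gradient list handed to the allreduce apparently changed between traces. For context, the sketch below shows the standard `horovod.tensorflow.keras` optimizer wrapping for a Keras/Merlin model (the `base_opt`/`model` names and the learning-rate scaling are illustrative, not taken from the test); `sparse_as_dense=True` is the documented option that converts `IndexedSlices` embedding gradients to dense tensors before the allreduce, the same sparse-to-dense conversion the `IndexedSlices` warnings further down describe.

```python
# Sketch of the standard horovod.tensorflow.keras optimizer wrapping
# (illustrative names only, not the failing test's code).
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()

# Scale the learning rate by the number of workers, as in the Horovod examples.
base_opt = tf.keras.optimizers.Adam(learning_rate=1e-3 * hvd.size())

# sparse_as_dense=True converts IndexedSlices gradients (e.g. from embedding
# lookups) to dense tensors before the allreduce.
opt = hvd.DistributedOptimizer(base_opt, sparse_as_dense=True)

# A hypothetical Keras model would then be compiled with the wrapped optimizer:
# model.compile(optimizer=opt, loss="binary_crossentropy")
```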

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
    embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/horovod/test_horovod.py: 3 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
  /tmp/__autograph_generated_filebjm5mio0.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
    ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 30 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
    warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
  /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
    return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
  /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
    super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings
tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True]
tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False]
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
    tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
  /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
    warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
    warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
    warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
  /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
    client.wait_for_workers(n_workers)
  Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
  /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
    mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 ----------- Name Stmts Miss Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 11 94% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 245 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 421 54 87% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 288 30 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 452 31 93% 
merlin/models/tf/loader.py 266 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 735 78 89% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 422 34 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 107 2 98% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11351 2230 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 3 failed, 817 passed, 12 skipped, 1376 warnings in 2695.44s (0:44:55) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins1264386445288007354.sh

nvidia-merlin-bot avatar Nov 01 '22 19:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit d91fc691db884452362c6663593b540370e5468b, no merge conflicts.
Running as SYSTEM
Setting status of d91fc691db884452362c6663593b540370e5468b to PENDING with url https://10.20.13.93:8080/job/merlin_models/1688/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse d91fc691db884452362c6663593b540370e5468b^{commit} # timeout=10
Checking out Revision d91fc691db884452362c6663593b540370e5468b (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d91fc691db884452362c6663593b540370e5468b # timeout=10
Commit message: "experiment"
 > git rev-list --no-walk e01adeb913f553e16ccb809d290760183fe1df17 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins5289297301698265448.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+41.gd91fc691.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.26.5,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.28.5,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-models==0.9.0+41.gd91fc691,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='3453873510'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-637exx_0
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-637exx_0
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.9 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: betterproto=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 3.5 MB/s eta 0:00:00
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=b6f55cb8190cbbd0777d8979a4404b6336861440dac029e69da6d18263df518b
  Stored in directory: /tmp/pip-ephem-wheel-cache-ylgucjux/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-n46fkco1
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-n46fkco1
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 59579f2c46006fcb22795623ee9400c658166670
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (0.6.0+1.g5926fcf)
Collecting merlin-dataloader>=0.0.2
  Downloading merlin-dataloader-0.0.2.tar.gz (44 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.1/44.1 kB 1.5 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (1.8.1)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 11.2 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.1)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.10.0)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.5)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.56.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (21.3)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.5)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.19.5)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (7.0.0)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+2.g59579f2c) (1.22.4)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.4.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.12.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (8.1.3)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.4.1)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.7.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.26.12)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.4.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.9.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.4)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 82.8 MB/s eta 0:00:00
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.1.2)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.12.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.8.2)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.1)
Building wheels for collected packages: nvtabular, merlin-dataloader
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+2.g59579f2c-cp38-cp38-linux_x86_64.whl size=257600 sha256=95f970ebbad3092767abed057c679a2dc7ba279b2504ffc847bd25bb2073c633
  Stored in directory: /tmp/pip-ephem-wheel-cache-7nlq5cfx/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
  Building wheel for merlin-dataloader (pyproject.toml): started
  Building wheel for merlin-dataloader (pyproject.toml): finished with status 'done'
  Created wheel for merlin-dataloader: filename=merlin_dataloader-0.0.2-py3-none-any.whl size=29204 sha256=8901e62eab079defd02aa987f09b0bad269e78f1a3e537804827da2aa8d24268
  Stored in directory: /tmp/pip-ephem-wheel-cache-7nlq5cfx/wheels/76/ef/ed/cb880e3ef5192ec5940e26fd9442247b569fb0cf8602f97137
Successfully built nvtabular merlin-dataloader
Installing collected packages: dask, merlin-dataloader, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-dataloader-0.0.2 nvtabular-1.6.0+2.g59579f2c
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s.................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 27%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py .. [ 28%] tests/unit/tf/inputs/test_continuous.py ....... [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 40%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 73%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 76%] ....................s...... [ 79%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f091c851550> run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:899: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:716: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f0a6a1c2dc0> ctx = <tensorflow.python.eager.context.Context object at 0x7f0d0dfc4dc0> args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[ 4.4688393e-05, -2.2715148e-04, 2.7848125e-04, ..., ... 3, 19, 3, 7], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f091cc1a4f0> run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
  testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:899: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_filet_8_f1yq.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:716: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f0a6a1c2dc0> ctx = <tensorflow.python.eager.context.Context object at 0x7f0d0dfc4dc0> args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...] cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
  raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E       ValueError: in user code:
E
E           File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E               return step_function(self, iterator)
E           File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E               outputs = model.distribute_strategy.run(run_step, args=(data,))
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E               return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E               return self._call_for_each_replica(fn, args, kwargs)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E               return fn(*args, **kwargs)
E           File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E               outputs = model.train_step(data)
E           File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step
E               self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E           File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E               grads_and_vars = self._compute_gradients(
E           File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E               allreduced_grads = self._allreduce(grads, weights)
E           File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E               return __filtered_reduce_grads(grads, vars)
E           File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E               rg = self._allreduce_grads(rg, rv)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E               return fn(*args, **kwargs)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E               result = self._call(*args, **kwds)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E               results = self._stateful_fn(*args, **kwds)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E               return graph_function._call_flat(
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E               flat_outputs = forward_function.call(ctx, args_with_tangents)
E           File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call
E               raise ValueError(
E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
    'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
    'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
    'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
    'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
    'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
    'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file4l4eqvhk.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
    warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
  /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
    return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
  /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
    return {key: torch.tensor(value) for key, value in data.items()}
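Note: the PyTorch warning suggests collapsing the list of NumPy arrays before constructing the tensor. A minimal sketch of that change; the `data` dict here is illustrative, not the fixture from `_conftest.py`:

```python
import numpy as np
import torch

data = {"feature": [np.arange(3), np.arange(3)]}

# Slow path that triggers the warning:
# batch = {key: torch.tensor(value) for key, value in data.items()}

# Suggested path: stack the list into a single ndarray first.
batch = {key: torch.tensor(np.array(value)) for key, value in data.items()}
```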

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
    warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
  /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
    self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
  /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
    client.wait_for_workers(n_workers)
  Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
  /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
    mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                                                              Stmts   Miss  Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 11 94% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 245 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 421 54 87% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 288 30 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 452 31 93% 
merlin/models/tf/loader.py 266 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 735 76 90% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 422 34 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 107 2 98% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11351 2228 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 818 passed, 12 skipped, 1376 warnings in 2672.64s (0:44:32) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins189302217397755973.sh

nvidia-merlin-bot avatar Nov 01 '22 22:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit ccb9197a0b556e609154213e4ea9c7944002f533, no merge conflicts.
Running as SYSTEM
Setting status of ccb9197a0b556e609154213e4ea9c7944002f533 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1689/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse ccb9197a0b556e609154213e4ea9c7944002f533^{commit} # timeout=10
Checking out Revision ccb9197a0b556e609154213e4ea9c7944002f533 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ccb9197a0b556e609154213e4ea9c7944002f533 # timeout=10
Commit message: "don't install horovod in tensorflow-dev"
 > git rev-list --no-walk d91fc691db884452362c6663593b540370e5468b # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins14186775147285548280.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+41.gccb9197a.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.26.5,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.28.5,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader==0.0.2,merlin-models==0.9.0+41.gccb9197a,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='2346471273'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-pquw4nyv
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-pquw4nyv
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 5.0 MB/s eta 0:00:00
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: betterproto=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 5.8 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=ab2743a6c94741cc899ec97634c50cc0f09c72473c649e72083e4619bc30f8f7
  Stored in directory: /tmp/pip-ephem-wheel-cache-fhht7olr/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-dnd2p553
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-dnd2p553
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 59579f2c46006fcb22795623ee9400c658166670
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+2.g59579f2c) (0.0.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (1.8.1)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (0.6.0+1.g5926fcf)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 6.1 MB/s eta 0:00:00
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (7.0.0)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.56.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (21.3)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.0)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.19.5)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.10.0)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+2.g59579f2c) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.4.1)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (8.1.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.12.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.7.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.26.12)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.9.2)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.4)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.4.0)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 28.9 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.1.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.2)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.39.1)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (59.8.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+2.g59579f2c-cp38-cp38-linux_x86_64.whl size=257600 sha256=288aac1541689dbd8452b7404e68b2376a8983b85fa547f6b2786d34dd3d9aef
  Stored in directory: /tmp/pip-ephem-wheel-cache-o50u89wn/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+2.g59579f2c
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s.................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 27%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py .. [ 28%] tests/unit/tf/inputs/test_continuous.py ....... [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 40%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 73%] tests/unit/tf/transforms/test_features.py s..FF......................... [ 76%] ....................s...... [ 79%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_categorical_encoding_in_model[True] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f69e244b2b0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
>       testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:899: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1051: in train_function
    return step_function(self, iterator)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
    return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:716: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1860: in _call_flat
    return self._build_call_outputs(self._inference_function.call(


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f6b25ebbb50>
ctx = <tensorflow.python.eager.context.Context object at 0x7f6dc9426df0>
args = [<tf.Tensor: shape=(50, 64), dtype=float32, numpy= array([[ 6.1051906e-05, 3.6138197e-04, -1.4250574e-05, ..., ... 3, 3, 4, 11], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([101, 64], dtype=int32)>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
>       raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
__________________ test_categorical_encoding_in_model[False] ___________________

ecommerce_data = <merlin.io.dataset.Dataset object at 0x7f69da98a430>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_categorical_encoding_in_model(ecommerce_data: Dataset, run_eagerly):
    schema = ecommerce_data.schema.select_by_name(names=["user_categories", "item_category"])
    branches = {
        "one_hot": mm.CategoryEncoding(schema, is_input=True),
        "features": mm.InputBlock(ecommerce_data.schema),
    }
    body = mm.ParallelBlock(branches, is_input=True).connect(mm.MLPBlock([32]))
    model = mm.Model(body, mm.BinaryClassificationTask("click"))
>       testing_utils.model_test(model, ecommerce_data, run_eagerly=run_eagerly)

tests/unit/tf/transforms/test_features.py:52:


merlin/models/tf/utils/testing_utils.py:91: in model_test
    losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1, **fit_kwargs)
merlin/models/tf/models/base.py:899: in fit
    out = super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1409: in fit
    tmp_logs = self.train_function(iterator)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:963: in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:785: in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2480: in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2711: in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2627: in _create_graph_function
    func_graph_module.func_graph_from_py_func(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1141: in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:677: in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1127: in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py:1116: in autograph_handler
    return autograph.converted_call(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:439: in converted_call
    result = converted_f(*effective_args, **kwargs)
/tmp/__autograph_generated_fileuvck0iqx.py:15: in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:459: in _call_unconverted
    return f(*args)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1040: in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:689: in wrapper
    return converted_call(f, args, kwargs, options=options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:377: in converted_call
    return _call_unconverted(f, args, kwargs, options)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:458: in _call_unconverted
    return f(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1030: in run_step
    outputs = model.train_step(data)
merlin/models/tf/models/base.py:716: in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py:537: in minimize
    grads_and_vars = self._compute_gradients(
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:115: in _compute_gradients
    allreduced_grads = self._allreduce(grads, weights)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:185: in _allreduce
    return __filtered_reduce_grads(grads, vars)
/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py:161: in __filtered_reduce_grads
    rg = self._allreduce_grads(rg, rv)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py:141: in error_handler
    return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:915: in __call__
    result = self._call(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py:954: in _call
    results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:2453: in __call__
    return graph_function._call_flat(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:1874: in _call_flat
    flat_outputs = forward_function.call(ctx, args_with_tangents)


self = <tensorflow.python.eager.function._EagerDefinedFunction object at 0x7f6b25ebbb50>
ctx = <tensorflow.python.eager.context.Context object at 0x7f6dc9426df0>
args = [<tf.Tensor 'gradient_tape/model/sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_16/Resha.../sequential_block_2/parallel_block_1/parallel_block/embedding_features/Gather_13/Const:0' shape=(2,) dtype=int32>, ...]
cancellation_manager = None

def call(self, ctx, args, cancellation_manager=None):
  """Calls this function with `args` as inputs.

  `ConcreteFunction` execution respects device annotations only if the
  function won't be compiled with xla.

  Args:
    ctx: a Context object
    args: a list of arguments to supply this function with.
    cancellation_manager: a `CancellationManager` object that can be used to
      cancel function execution.

  Returns:
    The outputs of the function call.

  Raises:
    ValueError: if the number of arguments is incorrect.
    FunctionAlreadyGarbageCollectedError: if the function is no longer
      available to be called because it has been garbage collected.
  """
  if len(args) != len(self.signature.input_arg):
>       raise ValueError(
        f"Signature specifies {len(list(self.signature.input_arg))} "
        f"arguments, got: {len(args)}.")

E           ValueError: in user code:
E
E               File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function  *
E                   return step_function(self, iterator)
E               File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function  **
E                   outputs = model.distribute_strategy.run(run_step, args=(data,))
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
E                   return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
E                   return self._call_for_each_replica(fn, args, kwargs)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
E                   return fn(*args, **kwargs)
E               File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step  **
E                   outputs = model.train_step(data)
E               File "/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/models/base.py", line 716, in train_step
E                   self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E               File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
E                   grads_and_vars = self._compute_gradients(
E               File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 115, in _compute_gradients
E                   allreduced_grads = self._allreduce(grads, weights)
E               File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 185, in _allreduce
E                   return __filtered_reduce_grads(grads, vars)
E               File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 161, in __filtered_reduce_grads
E                   rg = self._allreduce_grads(rg, rv)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
E                   return fn(*args, **kwargs)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 915, in __call__
E                   result = self._call(*args, **kwds)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 954, in _call
E                   results = self._stateful_fn(*args, **kwds)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2453, in __call__
E                   return graph_function._call_flat(
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1874, in _call_flat
E                   flat_outputs = forward_function.call(ctx, args_with_tangents)
E               File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 481, in call
E                   raise ValueError(
E
E ValueError: Signature specifies 96 arguments, got: 100.

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py:481: ValueError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:1", shape=(None,), dtype=int32), values=Tensor("mul_2:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:2", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:4", shape=(None,), dtype=int32), values=Tensor("mul_3:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:5", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:7", shape=(None,), dtype=int32), values=Tensor("mul_4:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:8", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:10", shape=(None,), dtype=int32), values=Tensor("mul_5:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:11", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:13", shape=(None,), dtype=int32), values=Tensor("mul_6:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:14", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:16", shape=(None,), dtype=int32), values=Tensor("mul_7:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:17", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:19", shape=(None,), dtype=int32), values=Tensor("mul_8:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:20", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:22", shape=(None,), dtype=int32), values=Tensor("mul_9:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:23", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:25", shape=(None,), dtype=int32), values=Tensor("mul_10:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:26", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:28", shape=(None,), dtype=int32), values=Tensor("mul_11:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:29", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:31", shape=(None,), dtype=int32), values=Tensor("mul_12:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:32", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:34", shape=(None,), dtype=int32), values=Tensor("mul_13:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:35", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:37", shape=(None,), dtype=int32), values=Tensor("mul_14:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:38", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:40", shape=(None,), dtype=int32), values=Tensor("mul_15:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:41", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:43", shape=(None,), dtype=int32), values=Tensor("mul_16:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:44", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:46", shape=(None,), dtype=int32), values=Tensor("mul_17:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:47", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:49", shape=(None,), dtype=int32), values=Tensor("mul_18:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:50", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:52", shape=(None,), dtype=int32), values=Tensor("mul_19:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:53", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:55", shape=(None,), dtype=int32), values=Tensor("mul_20:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:56", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:58", shape=(None,), dtype=int32), values=Tensor("mul_21:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:59", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:61", shape=(None,), dtype=int32), values=Tensor("mul_22:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:62", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:64", shape=(None,), dtype=int32), values=Tensor("mul_23:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:65", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/test_core.py::test_block_context_model_fp16[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("PartitionedCall:67", shape=(None,), dtype=int32), values=Tensor("mul_24:0", shape=(None, 64), dtype=float32), dense_shape=Tensor("PartitionedCall:68", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_filegdysrf9s.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/distributed/node.py:177: UserWarning: Port 8787 is already in use. Perhaps you already have a cluster running? Hosting the HTTP server on port 35791 instead warnings.warn(

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 ----------- Name Stmts Miss Cover

merlin/datasets/__init__.py 2 0 100%
merlin/datasets/advertising/__init__.py 2 0 100%
merlin/datasets/advertising/criteo/__init__.py 0 0 100%
merlin/datasets/advertising/criteo/dataset.py 79 49 38%
merlin/datasets/advertising/criteo/transformed/__init__.py 0 0 100%
merlin/datasets/ecommerce/__init__.py 3 0 100%
merlin/datasets/ecommerce/aliccp/__init__.py 0 0 100%
merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38%
merlin/datasets/ecommerce/aliccp/raw/__init__.py 0 0 100%
merlin/datasets/ecommerce/aliccp/transformed/__init__.py 0 0 100%
merlin/datasets/ecommerce/dressipi/__init__.py 0 0 100%
merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18%
merlin/datasets/ecommerce/dressipi/preprocessed/__init__.py 0 0 100%
merlin/datasets/ecommerce/large/__init__.py 0 0 100%
merlin/datasets/ecommerce/small/__init__.py 0 0 100%
merlin/datasets/ecommerce/transactions/__init__.py 0 0 100%
merlin/datasets/entertainment/__init__.py 2 0 100%
merlin/datasets/entertainment/movielens/1m-raw/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/1m-raw/ratings/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/1m/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/25m/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/100k/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/dataset.py 152 122 20%
merlin/datasets/entertainment/music_streaming/__init__.py 0 0 100%
merlin/datasets/social/__init__.py 0 0 100%
merlin/datasets/synthetic.py 147 15 90%
merlin/datasets/testing/__init__.py 0 0 100%
merlin/datasets/testing/sequence_testing/__init__.py 0 0 100%
merlin/models/__init__.py 2 0 100%
merlin/models/_version.py 354 205 42%
merlin/models/api.py 14 5 64%
merlin/models/config/__init__.py 0 0 100%
merlin/models/config/schema.py 62 0 100%
merlin/models/implicit/__init__.py 27 4 85%
merlin/models/io.py 15 0 100%
merlin/models/lightfm/__init__.py 23 0 100%
merlin/models/loader/__init__.py 0 0 100%
merlin/models/loader/backend.py 379 41 89%
merlin/models/loader/dataframe_iter.py 21 17 19%
merlin/models/loader/tf_utils.py 57 27 53%
merlin/models/loader/utils.py 40 15 62%
merlin/models/tf/__init__.py 70 0 100%
merlin/models/tf/blocks/__init__.py 0 0 100%
merlin/models/tf/blocks/cross.py 44 0 100%
merlin/models/tf/blocks/dlrm.py 49 2 96%
merlin/models/tf/blocks/experts.py 99 17 83%
merlin/models/tf/blocks/interaction.py 108 40 63%
merlin/models/tf/blocks/mlp.py 114 10 91%
merlin/models/tf/blocks/optimizer.py 173 11 94%
merlin/models/tf/blocks/retrieval/__init__.py 0 0 100%
merlin/models/tf/blocks/retrieval/base.py 169 62 63%
merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97%
merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100%
merlin/models/tf/blocks/sampling/__init__.py 0 0 100%
merlin/models/tf/blocks/sampling/base.py 29 2 93%
merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96%
merlin/models/tf/blocks/sampling/in_batch.py 35 0 100%
merlin/models/tf/blocks/sampling/queue.py 115 12 90%
merlin/models/tf/core/__init__.py 0 0 100%
merlin/models/tf/core/aggregation.py 245 49 80%
merlin/models/tf/core/base.py 244 55 77%
merlin/models/tf/core/combinators.py 421 54 87%
merlin/models/tf/core/encoder.py 170 29 83%
merlin/models/tf/core/index.py 104 16 85%
merlin/models/tf/core/prediction.py 50 1 98%
merlin/models/tf/core/tabular.py 288 30 90%
merlin/models/tf/distributed/__init__.py 0 0 100%
merlin/models/tf/distributed/backend.py 9 2 78%
merlin/models/tf/inputs/__init__.py 0 0 100%
merlin/models/tf/inputs/base.py 60 10 83%
merlin/models/tf/inputs/continuous.py 39 3 92%
merlin/models/tf/inputs/embedding.py 452 31 93%
merlin/models/tf/loader.py 266 94 65%
merlin/models/tf/losses/__init__.py 4 0 100%
merlin/models/tf/losses/base.py 9 0 100%
merlin/models/tf/losses/listwise.py 13 0 100%
merlin/models/tf/losses/pairwise.py 115 1 99%
merlin/models/tf/metrics/__init__.py 2 0 100%
merlin/models/tf/metrics/evaluation.py 105 48 54%
merlin/models/tf/metrics/topk.py 198 48 76%
merlin/models/tf/models/__init__.py 0 0 100%
merlin/models/tf/models/base.py 735 76 90%
merlin/models/tf/models/benchmark.py 16 0 100%
merlin/models/tf/models/ranking.py 67 3 96%
merlin/models/tf/models/retrieval.py 78 4 95%
merlin/models/tf/models/utils.py 10 1 90%
merlin/models/tf/outputs/__init__.py 0 0 100%
merlin/models/tf/outputs/base.py 122 17 86%
merlin/models/tf/outputs/classification.py 91 1 99%
merlin/models/tf/outputs/contrastive.py 147 10 93%
merlin/models/tf/outputs/regression.py 9 0 100%
merlin/models/tf/outputs/sampling/__init__.py 0 0 100%
merlin/models/tf/outputs/sampling/base.py 78 21 73%
merlin/models/tf/outputs/sampling/in_batch.py 37 1 97%
merlin/models/tf/outputs/sampling/popularity.py 27 1 96%
merlin/models/tf/outputs/topk.py 97 6 94%
merlin/models/tf/prediction_tasks/__init__.py 0 0 100%
merlin/models/tf/prediction_tasks/base.py 207 37 82%
merlin/models/tf/prediction_tasks/classification.py 68 17 75%
merlin/models/tf/prediction_tasks/multi.py 7 0 100%
merlin/models/tf/prediction_tasks/next_item.py 59 6 90%
merlin/models/tf/prediction_tasks/regression.py 35 2 94%
merlin/models/tf/prediction_tasks/retrieval.py 73 3 96%
merlin/models/tf/transformers/__init__.py 0 0 100%
merlin/models/tf/transformers/block.py 101 2 98%
merlin/models/tf/transformers/transforms.py 63 0 100%
merlin/models/tf/transforms/__init__.py 0 0 100%
merlin/models/tf/transforms/bias.py 111 9 92%
merlin/models/tf/transforms/features.py 422 34 92%
merlin/models/tf/transforms/negative_sampling.py 77 3 96%
merlin/models/tf/transforms/noise.py 43 1 98%
merlin/models/tf/transforms/regularization.py 17 1 94%
merlin/models/tf/transforms/sequence.py 282 42 85%
merlin/models/tf/transforms/tensor.py 107 2 98%
merlin/models/tf/typing.py 7 0 100%
merlin/models/tf/utils/__init__.py 0 0 100%
merlin/models/tf/utils/batch_utils.py 85 4 95%
merlin/models/tf/utils/repr_utils.py 69 4 94%
merlin/models/tf/utils/search_utils.py 34 22 35%
merlin/models/tf/utils/testing_utils.py 206 36 83%
merlin/models/tf/utils/tf_utils.py 209 42 80%
merlin/models/torch/__init__.py 12 0 100%
merlin/models/torch/block/__init__.py 0 0 100%
merlin/models/torch/block/base.py 167 32 81%
merlin/models/torch/block/mlp.py 38 5 87%
merlin/models/torch/dataset.py 68 5 93%
merlin/models/torch/features/__init__.py 0 0 100%
merlin/models/torch/features/base.py 4 0 100%
merlin/models/torch/features/continuous.py 22 0 100%
merlin/models/torch/features/embedding.py 165 12 93%
merlin/models/torch/features/tabular.py 65 8 88%
merlin/models/torch/losses.py 28 28 0%
merlin/models/torch/model/__init__.py 0 0 100%
merlin/models/torch/model/base.py 286 65 77%
merlin/models/torch/model/prediction_task.py 24 0 100%
merlin/models/torch/tabular/__init__.py 0 0 100%
merlin/models/torch/tabular/aggregation.py 75 0 100%
merlin/models/torch/tabular/base.py 247 39 84%
merlin/models/torch/tabular/transformations.py 67 3 96%
merlin/models/torch/typing.py 6 0 100%
merlin/models/torch/utils/__init__.py 0 0 100%
merlin/models/torch/utils/data_utils.py 117 117 0%
merlin/models/torch/utils/examples_utils.py 39 39 0%
merlin/models/torch/utils/torch_utils.py 80 22 72%
merlin/models/utils/__init__.py 0 0 100%
merlin/models/utils/constants.py 3 0 100%
merlin/models/utils/dataset.py 38 4 89%
merlin/models/utils/dependencies.py 26 19 27%
merlin/models/utils/doc_utils.py 10 0 100%
merlin/models/utils/example_utils.py 31 2 94%
merlin/models/utils/misc_utils.py 118 90 24%
merlin/models/utils/nvt_utils.py 27 24 11%
merlin/models/utils/registry.py 101 29 71%
merlin/models/utils/schema_utils.py 90 19 79%
merlin/models/xgb/__init__.py 124 4 97%

TOTAL 11351 2228 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 2 failed, 818 passed, 12 skipped, 1377 warnings in 2683.22s (0:44:43) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins1293535314731686761.sh

nvidia-merlin-bot avatar Nov 01 '22 23:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 4b1d40d1597e655b4fdb8c9d8045f4abba1a7edf, no merge conflicts.
Running as SYSTEM
Setting status of 4b1d40d1597e655b4fdb8c9d8045f4abba1a7edf to PENDING with url https://10.20.13.93:8080/job/merlin_models/1690/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 4b1d40d1597e655b4fdb8c9d8045f4abba1a7edf^{commit} # timeout=10
Checking out Revision 4b1d40d1597e655b4fdb8c9d8045f4abba1a7edf (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4b1d40d1597e655b4fdb8c9d8045f4abba1a7edf # timeout=10
Commit message: "don't install horovod in tensorflow-dev"
 > git rev-list --no-walk ccb9197a0b556e609154213e4ea9c7944002f533 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins6906627460368229479.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+41.g4b1d40d1.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.26.5,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.28.5,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader==0.0.2,merlin-models==0.9.0+41.g4b1d40d1,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='845786844'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-0ma1qyih
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-0ma1qyih
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 2.5 MB/s eta 0:00:00
Requirement already satisfied: betterproto=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 1.3 MB/s eta 0:00:00
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=e8bac674c83b9f73f3dbed10b23457d4f68b60ec6615316deda2188a2815f321
  Stored in directory: /tmp/pip-ephem-wheel-cache-fe1c0e7j/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-41ttw32x
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-41ttw32x
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 59579f2c46006fcb22795623ee9400c658166670
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+2.g59579f2c) (0.0.2)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (0.6.0+1.g5926fcf)
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (1.8.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (21.3)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.10.0)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 1.8 MB/s eta 0:00:00
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.5)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.5)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.19.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.1)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (7.0.0)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.56.2)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.0)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+2.g59579f2c) (1.22.4)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.4.3)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.12.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (8.1.3)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 3.5 MB/s eta 0:00:00
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.9.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.1.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.26.12)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.4)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.0)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.4.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.7.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (59.8.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+2.g59579f2c-cp38-cp38-linux_x86_64.whl size=257600 sha256=fd298cd73246da51cbb74e78015888bb9a5df4ac609f50b00153b9b6a01a4d3d
  Stored in directory: /tmp/pip-ephem-wheel-cache-q9hj55eq/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+2.g59579f2c
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 3%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_loader.py ................ [ 6%]
tests/unit/tf/test_public_api.py . [ 6%]
tests/unit/tf/blocks/test_cross.py ........... [ 7%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 8%]
tests/unit/tf/blocks/test_interactions.py ... [ 9%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 12%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%]
..................... [ 19%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 19%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%]
tests/unit/tf/core/test_aggregation.py ......... [ 22%]
tests/unit/tf/core/test_base.py .. [ 22%]
tests/unit/tf/core/test_combinators.py s.................... [ 25%]
tests/unit/tf/core/test_encoder.py .. [ 25%]
tests/unit/tf/core/test_index.py ... [ 25%]
tests/unit/tf/core/test_prediction.py .. [ 26%]
tests/unit/tf/core/test_tabular.py ...... [ 26%]
tests/unit/tf/examples/test_01_getting_started.py . [ 27%]
tests/unit/tf/examples/test_02_dataschema.py . [ 27%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%]
[ 27%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%]
tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%]
[ 28%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%]
tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%]
tests/unit/tf/horovod/test_horovod.py .. [ 28%]
tests/unit/tf/inputs/test_continuous.py ....... [ 29%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 33%]
........ [ 34%]
tests/unit/tf/inputs/test_tabular.py .................. [ 36%]
tests/unit/tf/layers/test_queue.py .............. [ 38%]
tests/unit/tf/losses/test_losses.py ....................... [ 40%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%]
tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%]
tests/unit/tf/models/test_base.py s........................ [ 47%]
tests/unit/tf/models/test_benchmark.py .. [ 47%]
tests/unit/tf/models/test_ranking.py .................................. [ 51%]
tests/unit/tf/models/test_retrieval.py ................................. [ 55%]
.......................................... [ 60%]
tests/unit/tf/outputs/test_base.py ...... [ 61%]
tests/unit/tf/outputs/test_classification.py ...... [ 62%]
tests/unit/tf/outputs/test_contrastive.py .............. [ 64%]
tests/unit/tf/outputs/test_regression.py .. [ 64%]
tests/unit/tf/outputs/test_sampling.py .... [ 64%]
tests/unit/tf/outputs/test_topk.py . [ 64%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%]
tests/unit/tf/transformers/test_block.py ..................... [ 71%]
tests/unit/tf/transformers/test_transforms.py .......... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 73%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
....................s...... [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_sequence.py .................... [ 84%]
tests/unit/tf/transforms/test_tensor.py ... [ 84%]
tests/unit/tf/utils/test_batch.py .... [ 84%]
tests/unit/tf/utils/test_dataset.py .. [ 85%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 85%]
tests/unit/torch/test_dataset.py ......... [ 86%]
tests/unit/torch/test_public_api.py . [ 86%]
tests/unit/torch/block/test_base.py .... [ 87%]
tests/unit/torch/block/test_mlp.py . [ 87%]
tests/unit/torch/features/test_continuous.py .. [ 87%]
tests/unit/torch/features/test_embedding.py .............. [ 89%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 91%]
tests/unit/torch/model/test_model.py .. [ 91%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 92%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/__autograph_generated_fileksh24oa5.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 ----------- Name Stmts Miss Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 12 93% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 245 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 421 54 87% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 288 30 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 452 31 93% 
merlin/models/tf/loader.py 266 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 736 93 87% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 422 34 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 107 2 98% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11352 2246 80%

=========================== short test summary info ============================ SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test. ========= 820 passed, 12 skipped, 1353 warnings in 2473.53s (0:41:13) ========== test-gpu run-test: commands[3] | horovodrun -np 2 sh examples/usecases/multi-gpu/hvd_wrapper.sh python -m pytest -m horovod -rxs tests/unit [1,0]:[1m============================= test session starts ==============================[0m [1,0]:platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0 [1,1]:[1m============================= test session starts ==============================[0m [1,1]:platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0 [1,0]:cachedir: .tox/test-gpu/.pytest_cache [1,0]:rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml [1,0]:plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0 [1,0]:[1mcollecting ... [0m[1,1]:cachedir: .tox/test-gpu/.pytest_cache [1,1]:rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml [1,1]:plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0 [1,1]:[1mcollecting ... [0m[1,1]:[1m collecting 34 items [0m[1,0]:[1m collecting 34 items [0m[1,1]:[1m collecting 304 items [0m[1,0]:[1m collecting 318 items [0m[1,1]:[1m collecting 507 items [0m[1,0]:[1m collecting 507 items [0m[1,1]:[1m collected 832 items / 830 deselected / 2 selected [0m [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py [1,0]:[1m collected 832 items / 830 deselected / 2 selected [0m [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py [1,1]:[32m.[0m[1,0]:[32m.[0m[1,1]:[32m.[0m[1,1]:[33m [100%][0m[1,1]: [1,1]: [1,1]:[33m=============================== warnings summary ===============================[0m [1,1]:../../../../../usr/lib/python3/dist-packages/requests/init.py:89 [1,1]: /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! [1,1]: warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. [1,1]: 'nearest': pil_image.NEAREST, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 
[1,1]: 'bilinear': pil_image.BILINEAR, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. [1,1]: 'bicubic': pil_image.BICUBIC, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. [1,1]: 'hamming': pil_image.HAMMING, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. [1,1]: 'box': pil_image.BOX, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. [1,1]: 'lanczos': pil_image.LANCZOS, [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_dlrm [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower[1,1]: [1,1]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. [1,1]: warnings.warn([1,1]: [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. [1,1]: warnings.warn( [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. [1,1]: warnings.warn([1,1]: [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /tmp/autograph_generated_filey2u9cf4w.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead [1,1]: ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)[1,1]: [1,1]: [1,1]:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html [1,1]:[33m=============== [32m2 passed[0m, [33m[1m830 deselected[0m, [33m[1m12 warnings[0m[33m in 39.70s[0m[33m ================[0m [1,0]:[32m.[0m[1,0]:[33m [100%][0m[1,0]: [1,0]: [1,0]:[33m=============================== warnings summary ===============================[0m [1,0]:../../../../../usr/lib/python3/dist-packages/requests/init.py:89 [1,0]: /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! 
[1,0]: warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. [1,0]: 'nearest': pil_image.NEAREST, [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. [1,0]: 'bilinear': pil_image.BILINEAR, [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. [1,0]: 'bicubic': pil_image.BICUBIC, [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. [1,0]: 'hamming': pil_image.HAMMING, [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. [1,0]: 'box': pil_image.BOX, [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. [1,0]: 'lanczos': pil_image.LANCZOS, [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_dlrm [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. [1,0]: warnings.warn( [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. [1,0]: warnings.warn( [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. 
[1,0]: warnings.warn( [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /tmp/autograph_generated_fileotuilvhd.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead [1,0]: ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope) [1,0]: [1,0]:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html [1,0]:[33m=============== [32m2 passed[0m, [33m[1m830 deselected[0m, [33m[1m14 warnings[0m[33m in 47.58s[0m[33m ================[0m ___________________________________ summary ____________________________________ test-gpu: commands succeeded congratulations :) Performing Post build task... Match found for : : True Logical operation result is TRUE Running script : #!/bin/bash cd /var/jenkins_home/ CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" [merlin_models] $ /bin/bash /tmp/jenkins6143625228773202383.sh

nvidia-merlin-bot avatar Nov 02 '22 00:11 nvidia-merlin-bot

@marcromeyn @edknv I have started to develop an example and have follow-up questions:

  1. We have to initialize a separate dataloader here, which we do not need in any other example. Is it actually required in this case?
  2. model.fit runs each worker on the full dataset, so each epoch is N times larger than on a single GPU (where N is the number of GPUs). One possible way to shard the data per worker is sketched after the snippet below.
import tensorflow as tf

import merlin.models.tf as mm
from merlin.io import Dataset
from merlin.schema import Tags

# ddf, batch_size, and learning_rate are defined earlier in the example.
train = Dataset(ddf, schema=train.schema)

train_loader = mm.Loader(
    train,
    schema=train.schema,
    batch_size=batch_size,
    shuffle=True,
    drop_last=True,
)

target_column = train.schema.select_by_tag(Tags.TARGET).column_names[0]

model = mm.DLRMModel(
    train.schema,
    embedding_dim=16,
    bottom_block=mm.MLPBlock([32, 16]),
    top_block=mm.MLPBlock([32, 16]),
    prediction_tasks=mm.BinaryClassificationTask(target_column),
)

opt = tf.keras.optimizers.Adagrad(learning_rate=learning_rate)
model.compile(optimizer=opt, run_eagerly=False, metrics=[tf.keras.metrics.AUC()])

losses = model.fit(
    train_loader,
    batch_size=batch_size,
)
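
For question 2, a possible workaround is to shard the data before building the Loader so that each Horovod worker only iterates over its own 1/N slice per epoch. The sketch below is only an illustration of that idea, not a settled API: the partition-based sharding of ddf and the hvd.DistributedOptimizer wrapper are my assumptions about how the pieces could fit together, and ddf, batch_size, and learning_rate come from the snippet above.

import horovod.tensorflow.keras as hvd
import tensorflow as tf

import merlin.models.tf as mm
from merlin.io import Dataset

hvd.init()

# Keep every hvd.size()-th dask partition starting at this worker's rank, so
# the N workers together cover the full dataset exactly once per epoch.
worker_ddf = ddf.partitions[hvd.rank()::hvd.size()]
worker_train = Dataset(worker_ddf, schema=train.schema)

worker_loader = mm.Loader(
    worker_train,
    batch_size=batch_size,
    shuffle=True,
    drop_last=True,
)

# Average gradients across workers each step; scaling the learning rate by the
# number of workers is the usual Horovod convention.
opt = hvd.DistributedOptimizer(
    tf.keras.optimizers.Adagrad(learning_rate=learning_rate * hvd.size())
)

With this, model.fit(worker_loader, ...) would process roughly len(ddf) / hvd.size() rows per worker per epoch, which addresses the Nx-per-epoch concern; whether this sharding should instead happen inside mm.Loader itself is essentially question 1.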

bschifferer avatar Nov 02 '22 12:11 bschifferer

Click to view CI Results
GitHub pull request #825 of commit a43d86178d5b90684b01b6f78c409332bad2b780, no merge conflicts.
Running as SYSTEM
Setting status of a43d86178d5b90684b01b6f78c409332bad2b780 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1695/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse a43d86178d5b90684b01b6f78c409332bad2b780^{commit} # timeout=10
Checking out Revision a43d86178d5b90684b01b6f78c409332bad2b780 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a43d86178d5b90684b01b6f78c409332bad2b780 # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk b8a56dcacbec4e99872a68c7064dd3a910d938ab # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins5238021930127761410.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+43.ga43d8617.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.27.0,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.29.0,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader @ git+https://github.com/NVIDIA-Merlin/dataloader.git@5905283777ff5ebd748a1c91b7c9fde5710ae775,merlin-models==0.9.0+43.ga43d8617,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='1174490925'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-f0yj3802
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-f0yj3802
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.0 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 115.6 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=527036dbe176e8653c253588744a0b2c28030eef3a04c55f542af4b3cc6edf21
  Stored in directory: /tmp/pip-ephem-wheel-cache-n21s7dfo/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-uodorbye
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-uodorbye
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 59579f2c46006fcb22795623ee9400c658166670
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (1.8.1)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+2.g59579f2c) (0.6.0+1.g5926fcf)
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+2.g59579f2c) (0.0.2+1.g5905283)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (7.0.0)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.10.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (21.3)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.5)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.56.2)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.19.5)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.1)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.1 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.64.1)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.5)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.5.0)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+2.g59579f2c) (1.22.4)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.4.3)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (8.1.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.12.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.4)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (5.9.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.2.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.7.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.26.12)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 112.2 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.2)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.4.0)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (59.8.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (0.39.1)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.12.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.8.2)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+2.g59579f2c) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+2.g59579f2c-cp38-cp38-linux_x86_64.whl size=257600 sha256=f958e72061823f30e9ad4e42cee30fe13b2c83978021f02c47c89df9d285e9c8
  Stored in directory: /tmp/pip-ephem-wheel-cache-7zo0lvp7/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+2.g59579f2c
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s.................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 27%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py .. [ 28%] tests/unit/tf/inputs/test_continuous.py ....... [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 40%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 73%] tests/unit/tf/transforms/test_features.py s............................. [ 76%] ....................s...... [ 79%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary =============================== ../../../../../usr/lib/python3/dist-packages/requests/init.py:89 /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file2aa9va4d.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 ----------- Name Stmts Miss Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 12 93% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 245 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 424 54 87% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 288 30 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 452 31 93% 
merlin/models/tf/loader.py 266 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 736 93 87% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 422 34 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 107 2 98% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11355 2246 80%

=========================== short test summary info ============================ SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test. ========= 820 passed, 12 skipped, 1353 warnings in 2472.38s (0:41:12) ========== /usr/local/lib/python3.8/dist-packages/coverage/data.py:130: CoverageWarning: Data file '/var/jenkins_home/workspace/merlin_models/models/.coverage.10.20.17.231.14303.330020' doesn't seem to be a coverage data file: cannot unpack non-iterable NoneType object data.warn(str(exc)) test-gpu run-test: commands[3] | horovodrun -np 2 sh examples/usecases/multi-gpu/hvd_wrapper.sh python -m pytest -m horovod -rxs tests/unit [1,1]:[1m============================= test session starts ==============================[0m [1,1]:platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0 [1,1]:cachedir: .tox/test-gpu/.pytest_cache [1,1]:rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml [1,1]:plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0 [1,1]:[1mcollecting ... [0m[1,0]:[1m============================= test session starts ==============================[0m [1,0]:platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0 [1,0]:cachedir: .tox/test-gpu/.pytest_cache [1,0]:rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml [1,0]:plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0 [1,0]:[1mcollecting ... [0m[1,1]:[1m collecting 34 items [0m[1,0]:[1m collecting 34 items [0m[1,1]:[1m collecting 266 items [0m[1,0]:[1m collecting 266 items [0m[1,1]:[1m collecting 507 items [0m[1,0]:[1m collecting 507 items [0m[1,1]:[1m collected 832 items / 830 deselected / 2 selected [0m [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py [1,0]:[1m collected 832 items / 830 deselected / 2 selected [0m [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py [1,1]:[32m.[0m[1,0]:[32m.[0m[1,1]:[32m.[0m[1,1]:[33m [100%][0m[1,1]: [1,1]: [1,1]:[33m=============================== warnings summary ===============================[0m [1,1]:../../../../../usr/lib/python3/dist-packages/requests/init.py:89 [1,1]: /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! [1,1]: warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "[1,1]: [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 
[1,1]: 'nearest': pil_image.NEAREST, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. [1,1]: 'bilinear': pil_image.BILINEAR, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. [1,1]: 'bicubic': pil_image.BICUBIC, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. [1,1]: 'hamming': pil_image.HAMMING, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. [1,1]: 'box': pil_image.BOX, [1,1]: [1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 [1,1]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. [1,1]: 'lanczos': pil_image.LANCZOS, [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_dlrm [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. [1,1]: warnings.warn( [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. [1,1]: warnings.warn( [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. 
[1,1]: warnings.warn( [1,1]: [1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,1]: /tmp/autograph_generated_filefcxqgqv.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead [1,1]: ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope) [1,1]: [1,1]:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html [1,1]:[33m=============== [32m2 passed[0m, [33m[1m830 deselected[0m, [33m[1m12 warnings[0m[33m in 39.73s[0m[33m ================[0m [1,0]:[32m.[0m[1,0]:[33m [100%][0m[1,0]: [1,0]: [1,0]:[33m=============================== warnings summary ===============================[0m [1,0]:../../../../../usr/lib/python3/dist-packages/requests/init.py:89 [1,0]: /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! [1,0]: warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "[1,0]: [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. [1,0]: 'nearest': pil_image.NEAREST,[1,0]: [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. [1,0]: 'bilinear': pil_image.BILINEAR,[1,0]: [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. [1,0]: 'bicubic': pil_image.BICUBIC,[1,0]: [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. [1,0]: 'hamming': pil_image.HAMMING,[1,0]: [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. [1,0]: 'box': pil_image.BOX,[1,0]: [1,0]: [1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 [1,0]: /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. [1,0]: 'lanczos': pil_image.LANCZOS,[1,0]: [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_dlrm [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower[1,0]: [1,0]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. 
[1,0]: warnings.warn([1,0]: [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. [1,0]: warnings.warn([1,0]: [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. [1,0]: warnings.warn( [1,0]: [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower [1,0]: /tmp/autograph_generated_filetyrob0xw.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead [1,0]: ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)[1,0]: [1,0]: [1,0]:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html [1,0]:[33m=============== [32m2 passed[0m, [33m[1m830 deselected[0m, [33m[1m14 warnings[0m[33m in 47.57s[0m[33m ================[0m ___________________________________ summary ____________________________________ test-gpu: commands succeeded congratulations :) Performing Post build task... Match found for : : True Logical operation result is TRUE Running script : #!/bin/bash cd /var/jenkins_home/ CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" [merlin_models] $ /bin/bash /tmp/jenkins6947648486621723931.sh

nvidia-merlin-bot avatar Nov 02 '22 12:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 7f2e90bc1941132b82f28c72e105496ce3c27e16, no merge conflicts.
Running as SYSTEM
Setting status of 7f2e90bc1941132b82f28c72e105496ce3c27e16 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1708/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 7f2e90bc1941132b82f28c72e105496ce3c27e16^{commit} # timeout=10
 > git rev-parse origin/7f2e90bc1941132b82f28c72e105496ce3c27e16^{commit} # timeout=10
 > git rev-parse 7f2e90bc1941132b82f28c72e105496ce3c27e16^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[merlin_models] $ /bin/bash /tmp/jenkins9634914219038834766.sh

nvidia-merlin-bot avatar Nov 02 '22 19:11 nvidia-merlin-bot

  1. We need to initialize a separate dataloader, which we don't require in any other example. Is it required that we do it in this case?

@bschifferer The Loader is used here to drop the last batch (i.e., drop_last=True), so we can discard the final batch when it isn't full. It's not strictly required, so if the other examples don't use Loader it might make sense to leave it out of this example too, but in a distributed setting I also think it makes sense to drop the last batch, which might be small.
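For reference, here is a minimal sketch of that setup (the dataset path, batch size, and the exact Loader keyword arguments are placeholders/assumptions for illustration, not taken from this PR):

```python
import merlin.models.tf as mm
from merlin.io import Dataset

# Placeholder path; any Parquet dataset with a Merlin schema works here.
train = Dataset("train/*.parquet")

# drop_last=True discards the final batch when it has fewer than batch_size
# rows, so every worker trains on full batches only.
loader = mm.Loader(train, batch_size=1024, shuffle=True, drop_last=True)

# The loader is then passed to model.fit(...) in place of the raw Dataset.
```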

  1. model.fit runs each worker on the full dataset. Therefore, each epoch is Nx larger than on a single GPU (where N is the number of GPUs).

When I test this, each worker seems to run 1/Nth of the steps, and the epochs are correspondingly shorter. On a single GPU:

Epoch 1/2
2136/2136 [==============================] - 44s 18ms/step - loss: 0.6853 - auc: 0.5708 - regularization_loss: 0.0000e+00
Epoch 2/2
2136/2136 [==============================] - 39s 18ms/step - loss: 0.6846 - auc: 0.5738 - regularization_loss: 0.0000e+00

On 2 GPUs using horovodrun -np 2:

[1,0]<stdout>:Epoch 1/2
[1,0]<stdout>:1068/1068 [==============================] - 31s 21ms/step - loss: 0.6850 - auc: 0.5719 - regularization_loss: 0.0000e+00
[1,0]<stdout>:Epoch 2/2
[1,0]<stdout>:1068/1068 [==============================] - 23s 21ms/step - loss: 0.6845 - auc: 0.5739 - regularization_loss: 0.0000e+00

I added an assertion for the steps per epoch to the unit tests in 6013913.
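For context, this is roughly the shape of a Horovod run that produces the step counts above. It is a generic Horovod/Keras sketch, not the test code added in 6013913 and not necessarily how this PR wires things up internally; the dataset path, batch size, model choice, and the "click" target column are assumptions.

```python
import horovod.tensorflow.keras as hvd
import tensorflow as tf
import merlin.models.tf as mm
from merlin.io import Dataset

hvd.init()

# Pin one GPU per worker (standard Horovod setup).
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

train = Dataset("train/*.parquet")  # placeholder path
loader = mm.Loader(train, batch_size=1024, shuffle=True, drop_last=True)

model = mm.DLRMModel(
    train.schema,
    embedding_dim=16,
    bottom_block=mm.MLPBlock([32, 16]),
    prediction_tasks=mm.BinaryClassificationTask("click"),  # assumed target column
)

# Wrap the optimizer so gradients are averaged across workers, and broadcast
# the initial weights from rank 0 so all workers start from the same state.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adagrad(learning_rate=0.01))
model.compile(optimizer=opt)

history = model.fit(
    loader,
    epochs=2,
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
)

# With horovodrun -np 2, each worker reports about half the single-GPU steps
# per epoch (2136 -> 1068 in the logs above).
print(f"rank {hvd.rank()}: {history.params['steps']} steps per epoch")
```

Launched with something like `horovodrun -np 2 python train.py`, or via the `examples/usecases/multi-gpu/hvd_wrapper.sh` wrapper used in the CI command above.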

edknv avatar Nov 02 '22 19:11 edknv

Click to view CI Results
GitHub pull request #825 of commit 601391386295c15777656f23057aaf5603671f4c, no merge conflicts.
Running as SYSTEM
Setting status of 601391386295c15777656f23057aaf5603671f4c to PENDING with url https://10.20.13.93:8080/job/merlin_models/1709/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 601391386295c15777656f23057aaf5603671f4c^{commit} # timeout=10
Checking out Revision 601391386295c15777656f23057aaf5603671f4c (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 601391386295c15777656f23057aaf5603671f4c # timeout=10
Commit message: "add assertions for steps per epoch"
 > git rev-list --no-walk 4a4ad6694cf6f5862265b367192ac24bb1f91dd1 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins9952453542867170322.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/1/merlin-models-0.9.0+44.g60139138.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.27.0,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.29.0,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader @ git+https://github.com/NVIDIA-Merlin/dataloader.git@5905283777ff5ebd748a1c91b7c9fde5710ae775,merlin-models==0.9.0+44.g60139138,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='386210642'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-i658mrg6
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-i658mrg6
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: betterproto=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.6 MB/s eta 0:00:00
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 13.9 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=3bdf45125712f0a6cfa01038324bb1ecdd2bcd4b871ca4e545daa8d37a29a181
  Stored in directory: /tmp/pip-ephem-wheel-cache-9e9gq8hf/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-oydk1lll
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-oydk1lll
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 8e7edbafd3006f56e73efdc0c01c4445ab57d028
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (1.8.1)
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.0.2+1.g5905283)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.6.0+1.g5926fcf)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.1)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.64.1)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.19.5)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.9 MB/s eta 0:00:00
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (21.3)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.56.2)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.10.0)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (7.0.0)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+3.g8e7edbaf) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (8.1.3)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.4.1)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.4.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.26.12)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.4)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 88.6 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.2)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (59.8.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.2.1)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+3.g8e7edbaf-cp38-cp38-linux_x86_64.whl size=257595 sha256=d720f9741f2210cb7057dead790136be2f658343d45b4e1801d16826e0e41037
  Stored in directory: /tmp/pip-ephem-wheel-cache-dt6l9q63/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+3.g8e7edbaf
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 832 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 6%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 9%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s.................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 27%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 28%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py .. [ 28%] tests/unit/tf/inputs/test_continuous.py ....... [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 40%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py ................................. [ 55%] .......................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 67%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 73%] tests/unit/tf/transforms/test_features.py s............................. [ 76%] ....................s...... [ 79%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 86%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary =============================== ../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89 /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 2 warnings tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 119 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 6 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 85 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:960: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/__autograph_generated_filet8_h5i6h.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:614: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(
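The block of UserWarnings above is TensorFlow densifying sparse IndexedSlices gradients whose dense shape is not statically known. A minimal sketch of how such a gradient arises (illustrative only; plain TensorFlow, not the model code from this run):

```python
import tensorflow as tf

emb = tf.Variable(tf.random.normal([1000, 16]))  # embedding-style table

@tf.function
def step(ids):
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(tf.gather(emb, ids))
    grad = tape.gradient(loss, emb)   # IndexedSlices, not a dense Tensor
    # Forcing it dense is what the warning flags: cheap for a small table,
    # potentially very expensive for a large embedding table.
    return tf.convert_to_tensor(grad)

step(tf.constant([1, 2, 3]))
```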

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}
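The warning above also states the fix: collapse the list into one ndarray before handing it to torch. A small illustration with hypothetical data (not the fixture from _conftest.py):

```python
import numpy as np
import torch

rows = [np.random.rand(8) for _ in range(1024)]

slow = torch.tensor(rows)              # list of ndarrays: element-wise copy, emits the warning
fast = torch.tensor(np.asarray(rows))  # single ndarray: one bulk copy, no warning
```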

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)
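The cudf deprecation notice above is the standard pandas one for empty Series; passing an explicit dtype silences it (illustrative snippet, not the cudf code path itself):

```python
import pandas as pd

pd.Series([])                   # warns: default dtype for an empty Series will change to 'object'
pd.Series([], dtype="float64")  # explicit dtype, no deprecation warning
```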

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name    Stmts   Miss   Cover

merlin/datasets/__init__.py 2 0 100%
merlin/datasets/advertising/__init__.py 2 0 100%
merlin/datasets/advertising/criteo/__init__.py 0 0 100%
merlin/datasets/advertising/criteo/dataset.py 79 49 38%
merlin/datasets/advertising/criteo/transformed/__init__.py 0 0 100%
merlin/datasets/ecommerce/__init__.py 3 0 100%
merlin/datasets/ecommerce/aliccp/__init__.py 0 0 100%
merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38%
merlin/datasets/ecommerce/aliccp/raw/__init__.py 0 0 100%
merlin/datasets/ecommerce/aliccp/transformed/__init__.py 0 0 100%
merlin/datasets/ecommerce/dressipi/__init__.py 0 0 100%
merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18%
merlin/datasets/ecommerce/dressipi/preprocessed/__init__.py 0 0 100%
merlin/datasets/ecommerce/large/__init__.py 0 0 100%
merlin/datasets/ecommerce/small/__init__.py 0 0 100%
merlin/datasets/ecommerce/transactions/__init__.py 0 0 100%
merlin/datasets/entertainment/__init__.py 2 0 100%
merlin/datasets/entertainment/movielens/1m-raw/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/1m-raw/ratings/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/1m/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/25m/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/100k/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/__init__.py 0 0 100%
merlin/datasets/entertainment/movielens/dataset.py 152 122 20%
merlin/datasets/entertainment/music_streaming/__init__.py 0 0 100%
merlin/datasets/social/__init__.py 0 0 100%
merlin/datasets/synthetic.py 147 15 90%
merlin/datasets/testing/__init__.py 0 0 100%
merlin/datasets/testing/sequence_testing/__init__.py 0 0 100%
merlin/models/__init__.py 2 0 100%
merlin/models/_version.py 354 205 42%
merlin/models/api.py 14 5 64%
merlin/models/config/__init__.py 0 0 100%
merlin/models/config/schema.py 62 0 100%
merlin/models/implicit/__init__.py 27 4 85%
merlin/models/io.py 15 0 100%
merlin/models/lightfm/__init__.py 23 0 100%
merlin/models/loader/__init__.py 0 0 100%
merlin/models/loader/backend.py 379 41 89%
merlin/models/loader/dataframe_iter.py 21 17 19%
merlin/models/loader/tf_utils.py 57 27 53%
merlin/models/loader/utils.py 40 15 62%
merlin/models/tf/__init__.py 70 0 100%
merlin/models/tf/blocks/__init__.py 0 0 100%
merlin/models/tf/blocks/cross.py 44 0 100%
merlin/models/tf/blocks/dlrm.py 49 2 96%
merlin/models/tf/blocks/experts.py 99 17 83%
merlin/models/tf/blocks/interaction.py 108 40 63%
merlin/models/tf/blocks/mlp.py 114 10 91%
merlin/models/tf/blocks/optimizer.py 173 12 93%
merlin/models/tf/blocks/retrieval/__init__.py 0 0 100%
merlin/models/tf/blocks/retrieval/base.py 169 62 63%
merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97%
merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100%
merlin/models/tf/blocks/sampling/__init__.py 0 0 100%
merlin/models/tf/blocks/sampling/base.py 29 2 93%
merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96%
merlin/models/tf/blocks/sampling/in_batch.py 35 0 100%
merlin/models/tf/blocks/sampling/queue.py 115 12 90%
merlin/models/tf/core/__init__.py 0 0 100%
merlin/models/tf/core/aggregation.py 245 49 80%
merlin/models/tf/core/base.py 244 55 77%
merlin/models/tf/core/combinators.py 424 54 87%
merlin/models/tf/core/encoder.py 170 29 83%
merlin/models/tf/core/index.py 104 16 85%
merlin/models/tf/core/prediction.py 50 1 98%
merlin/models/tf/core/tabular.py 288 30 90%
merlin/models/tf/distributed/__init__.py 0 0 100%
merlin/models/tf/distributed/backend.py 9 2 78%
merlin/models/tf/inputs/__init__.py 0 0 100%
merlin/models/tf/inputs/base.py 60 10 83%
merlin/models/tf/inputs/continuous.py 39 3 92%
merlin/models/tf/inputs/embedding.py 452 31 93%
merlin/models/tf/loader.py 266 94 65%
merlin/models/tf/losses/__init__.py 4 0 100%
merlin/models/tf/losses/base.py 9 0 100%
merlin/models/tf/losses/listwise.py 13 0 100%
merlin/models/tf/losses/pairwise.py 115 1 99%
merlin/models/tf/metrics/__init__.py 2 0 100%
merlin/models/tf/metrics/evaluation.py 105 48 54%
merlin/models/tf/metrics/topk.py 198 48 76%
merlin/models/tf/models/__init__.py 0 0 100%
merlin/models/tf/models/base.py 736 93 87%
merlin/models/tf/models/benchmark.py 16 0 100%
merlin/models/tf/models/ranking.py 67 3 96%
merlin/models/tf/models/retrieval.py 78 4 95%
merlin/models/tf/models/utils.py 10 1 90%
merlin/models/tf/outputs/__init__.py 0 0 100%
merlin/models/tf/outputs/base.py 122 17 86%
merlin/models/tf/outputs/classification.py 91 1 99%
merlin/models/tf/outputs/contrastive.py 147 10 93%
merlin/models/tf/outputs/regression.py 9 0 100%
merlin/models/tf/outputs/sampling/__init__.py 0 0 100%
merlin/models/tf/outputs/sampling/base.py 78 21 73%
merlin/models/tf/outputs/sampling/in_batch.py 37 1 97%
merlin/models/tf/outputs/sampling/popularity.py 27 1 96%
merlin/models/tf/outputs/topk.py 97 6 94%
merlin/models/tf/prediction_tasks/__init__.py 0 0 100%
merlin/models/tf/prediction_tasks/base.py 207 37 82%
merlin/models/tf/prediction_tasks/classification.py 68 17 75%
merlin/models/tf/prediction_tasks/multi.py 7 0 100%
merlin/models/tf/prediction_tasks/next_item.py 59 6 90%
merlin/models/tf/prediction_tasks/regression.py 35 2 94%
merlin/models/tf/prediction_tasks/retrieval.py 73 3 96%
merlin/models/tf/transformers/__init__.py 0 0 100%
merlin/models/tf/transformers/block.py 101 2 98%
merlin/models/tf/transformers/transforms.py 63 0 100%
merlin/models/tf/transforms/__init__.py 0 0 100%
merlin/models/tf/transforms/bias.py 111 9 92%
merlin/models/tf/transforms/features.py 422 34 92%
merlin/models/tf/transforms/negative_sampling.py 77 3 96%
merlin/models/tf/transforms/noise.py 43 1 98%
merlin/models/tf/transforms/regularization.py 17 1 94%
merlin/models/tf/transforms/sequence.py 282 42 85%
merlin/models/tf/transforms/tensor.py 107 2 98%
merlin/models/tf/typing.py 7 0 100%
merlin/models/tf/utils/__init__.py 0 0 100%
merlin/models/tf/utils/batch_utils.py 85 4 95%
merlin/models/tf/utils/repr_utils.py 69 4 94%
merlin/models/tf/utils/search_utils.py 34 22 35%
merlin/models/tf/utils/testing_utils.py 206 36 83%
merlin/models/tf/utils/tf_utils.py 209 42 80%
merlin/models/torch/__init__.py 12 0 100%
merlin/models/torch/block/__init__.py 0 0 100%
merlin/models/torch/block/base.py 167 32 81%
merlin/models/torch/block/mlp.py 38 5 87%
merlin/models/torch/dataset.py 68 5 93%
merlin/models/torch/features/__init__.py 0 0 100%
merlin/models/torch/features/base.py 4 0 100%
merlin/models/torch/features/continuous.py 22 0 100%
merlin/models/torch/features/embedding.py 165 12 93%
merlin/models/torch/features/tabular.py 65 8 88%
merlin/models/torch/losses.py 28 28 0%
merlin/models/torch/model/__init__.py 0 0 100%
merlin/models/torch/model/base.py 286 65 77%
merlin/models/torch/model/prediction_task.py 24 0 100%
merlin/models/torch/tabular/__init__.py 0 0 100%
merlin/models/torch/tabular/aggregation.py 75 0 100%
merlin/models/torch/tabular/base.py 247 39 84%
merlin/models/torch/tabular/transformations.py 67 3 96%
merlin/models/torch/typing.py 6 0 100%
merlin/models/torch/utils/__init__.py 0 0 100%
merlin/models/torch/utils/data_utils.py 117 117 0%
merlin/models/torch/utils/examples_utils.py 39 39 0%
merlin/models/torch/utils/torch_utils.py 80 22 72%
merlin/models/utils/__init__.py 0 0 100%
merlin/models/utils/constants.py 3 0 100%
merlin/models/utils/dataset.py 38 4 89%
merlin/models/utils/dependencies.py 26 19 27%
merlin/models/utils/doc_utils.py 10 0 100%
merlin/models/utils/example_utils.py 31 2 94%
merlin/models/utils/misc_utils.py 118 90 24%
merlin/models/utils/nvt_utils.py 27 24 11%
merlin/models/utils/registry.py 101 29 71%
merlin/models/utils/schema_utils.py 90 19 79%
merlin/models/xgb/__init__.py 124 4 97%

TOTAL 11355 2246 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========= 820 passed, 12 skipped, 1353 warnings in 2504.06s (0:41:44) ==========
test-gpu run-test: commands[3] | horovodrun -np 2 sh examples/usecases/multi-gpu/hvd_wrapper.sh python -m pytest -m horovod -rxs tests/unit
[1,0]:============================= test session starts ==============================
[1,0]:platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
[1,0]:cachedir: .tox/test-gpu/.pytest_cache
[1,0]:rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
[1,0]:plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
[1,0]:collecting ...
[1,1]:============================= test session starts ==============================
[1,1]:platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
[1,1]:cachedir: .tox/test-gpu/.pytest_cache
[1,1]:rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
[1,1]:plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
[1,1]:collecting ...
[1,0]: collecting 34 items
[1,1]: collecting 34 items
[1,0]: collecting 304 items
[1,1]: collecting 304 items
[1,1]: collecting 507 items
[1,0]: collecting 507 items
[1,1]: collected 832 items / 830 deselected / 2 selected
[1,1]:
[1,1]:tests/unit/tf/horovod/test_horovod.py
[1,0]: collected 832 items / 830 deselected / 2 selected
[1,0]:
[1,0]:tests/unit/tf/horovod/test_horovod.py
[1,1]:.
[1,0]:.
[1,1]:.
[1,1]: [100%]
[1,1]:
[1,1]:
[1,1]:=============================== warnings summary ===============================
[1,1]:../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
[1,1]:  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
[1,1]:    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
[1,1]:
[1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
[1,1]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
[1,1]:    'nearest': pil_image.NEAREST,
[1,1]:
[1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
[1,1]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
[1,1]:    'bilinear': pil_image.BILINEAR,
[1,1]:
[1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
[1,1]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
[1,1]:    'bicubic': pil_image.BICUBIC,
[1,1]:
[1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
[1,1]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
[1,1]:    'hamming': pil_image.HAMMING,
[1,1]:
[1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
[1,1]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
[1,1]:    'box': pil_image.BOX,
[1,1]:
[1,1]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
[1,1]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
[1,1]:    'lanczos': pil_image.LANCZOS,
[1,1]:
[1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_dlrm
[1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,1]:  /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
[1,1]:    warnings.warn(
[1,1]:
[1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,1]:  /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
[1,1]:    warnings.warn(
[1,1]:
[1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,1]:  /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
[1,1]:    warnings.warn(
[1,1]:
[1,1]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,1]:  /tmp/autograph_generated_filehqz73j8n.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
[1,1]:    ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)
[1,1]:
[1,1]:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
[1,1]:=============== 2 passed, 830 deselected, 12 warnings in 39.07s ================
[1,0]:.
[1,0]: [100%]
[1,0]:
[1,0]:
[1,0]:=============================== warnings summary ===============================
[1,0]:../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
[1,0]:  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
[1,0]:    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
[1,0]:
[1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
[1,0]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
[1,0]:    'nearest': pil_image.NEAREST,
[1,0]:
[1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
[1,0]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
[1,0]:    'bilinear': pil_image.BILINEAR,
[1,0]:
[1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
[1,0]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
[1,0]:    'bicubic': pil_image.BICUBIC,
[1,0]:
[1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
[1,0]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
[1,0]:    'hamming': pil_image.HAMMING,
[1,0]:
[1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
[1,0]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
[1,0]:    'box': pil_image.BOX,
[1,0]:
[1,0]:../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
[1,0]:  /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
[1,0]:    'lanczos': pil_image.LANCZOS,
[1,0]:
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_dlrm
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,0]:  /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
[1,0]:    warnings.warn(
[1,0]:
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,0]:  /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
[1,0]:    warnings.warn(
[1,0]:
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,0]:  /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
[1,0]:    warnings.warn(
[1,0]:
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,0]:tests/unit/tf/horovod/test_horovod.py::test_horovod_multigpu_two_tower
[1,0]:  /tmp/autograph_generated_filezapgsvu2.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
[1,0]:    ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)
[1,0]:
[1,0]:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
[1,0]:=============== 2 passed, 830 deselected, 14 warnings in 46.93s ================
___________________________________ summary ____________________________________
test-gpu: commands succeeded
congratulations :)
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins10121421380471884849.sh
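The test-gpu stage above runs the new horovod-marked tests under `horovodrun -np 2` via examples/usecases/multi-gpu/hvd_wrapper.sh. For context, a minimal sketch of the Horovod + Keras data-parallel pattern such a multi-GPU test exercises (plain TensorFlow with synthetic data; how merlin.models wires this internally is not shown in the log and is an assumption here):

```python
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()

# Pin each worker to a single GPU, as the per-rank launch implies.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(16, activation="relu"), tf.keras.layers.Dense(1)]
)

# DistributedOptimizer averages gradients across ranks with allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="mse")

x = tf.random.normal((256, 8))
y = tf.random.normal((256, 1))
model.fit(
    x, y,
    batch_size=32,
    epochs=1,
    # Keep weights in sync by broadcasting rank 0's initial state.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```

Launched the same way as in the log, e.g. `horovodrun -np 2 python train_sketch.py`, each rank trains on its own GPU and gradients are averaged through the DistributedOptimizer.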

nvidia-merlin-bot avatar Nov 02 '22 19:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 1970a17e4b13a612bb45cf0f7191daae585101cd, no merge conflicts.
Running as SYSTEM
Setting status of 1970a17e4b13a612bb45cf0f7191daae585101cd to PENDING with url https://10.20.13.93:8080/job/merlin_models/1727/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 1970a17e4b13a612bb45cf0f7191daae585101cd^{commit} # timeout=10
Checking out Revision 1970a17e4b13a612bb45cf0f7191daae585101cd (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1970a17e4b13a612bb45cf0f7191daae585101cd # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk 797b7f8e7e6ac548a1bd1c785a2a82873a3bf5aa # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins10868499661271691600.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/2/merlin-models-0.9.0+52.g1970a17e.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.27.1,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.29.1,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader @ git+https://github.com/NVIDIA-Merlin/dataloader.git@5905283777ff5ebd748a1c91b7c9fde5710ae775,merlin-models==0.9.0+52.g1970a17e,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='2447714603'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-tbgnb2_b
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-tbgnb2_b
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: betterproto in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.2.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.1 MB/s eta 0:00:00
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 107.3 MB/s eta 0:00:00
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=4d9ea92499e32c5ef3811d6578af7a2f152c9edb2f0ba10df90f2413753f6d53
  Stored in directory: /tmp/pip-ephem-wheel-cache-7z96oif6/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-1eeb_4sw
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-1eeb_4sw
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 8e7edbafd3006f56e73efdc0c01c4445ab57d028
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.6.0+1.g5926fcf)
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (1.8.1)
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.0.2+1.g5905283)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (7.0.0)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.19.5)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.56.2)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.1)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.10.0)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.64.1)
Requirement already satisfied: betterproto in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.8 MB/s eta 0:00:00
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (21.3)
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+3.g8e7edbaf) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (8.1.3)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.4.1)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.12.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 109.7 MB/s eta 0:00:00
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.26.12)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.4)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.4.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (59.8.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.0.9)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.2.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.8.2)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.52.0)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.1)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.2)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+3.g8e7edbaf-cp38-cp38-linux_x86_64.whl size=257595 sha256=5ae9adf932210b3b38e67dd6196873c8015aaf8a9f643e51160c8214fa1c1f24
  Stored in directory: /tmp/pip-ephem-wheel-cache-twq6ged_/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+3.g8e7edbaf
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 839 items

tests/unit/config/test_schema.py .... [  0%]
tests/unit/datasets/test_advertising.py .s [  0%]
tests/unit/datasets/test_ecommerce.py ..sss [  1%]
tests/unit/datasets/test_entertainment.py ....sss. [  2%]
tests/unit/datasets/test_social.py . [  2%]
tests/unit/datasets/test_synthetic.py ...... [  3%]
tests/unit/implicit/test_implicit.py . [  3%]
tests/unit/lightfm/test_lightfm.py . [  3%]
tests/unit/tf/test_core.py ...... [  4%]
tests/unit/tf/test_loader.py ................ [  5%]
tests/unit/tf/test_public_api.py . [  6%]
tests/unit/tf/blocks/test_cross.py ........... [  7%]
tests/unit/tf/blocks/test_dlrm.py .......... [  8%]
tests/unit/tf/blocks/test_interactions.py ... [  8%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 12%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%]
..................... [ 19%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 19%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%]
tests/unit/tf/core/test_aggregation.py ......... [ 22%]
tests/unit/tf/core/test_base.py .. [ 22%]
tests/unit/tf/core/test_combinators.py s..................... [ 25%]
tests/unit/tf/core/test_encoder.py .. [ 25%]
tests/unit/tf/core/test_index.py ... [ 25%]
tests/unit/tf/core/test_prediction.py .. [ 26%]
tests/unit/tf/core/test_tabular.py ...... [ 26%]
tests/unit/tf/examples/test_01_getting_started.py . [ 26%]
tests/unit/tf/examples/test_02_dataschema.py . [ 27%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%]
tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%]
[ 27%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 27%]
tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%]
[ 28%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%]
tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%]
tests/unit/tf/horovod/test_horovod.py EE. [ 28%]
tests/unit/tf/inputs/test_continuous.py ........ [ 29%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 33%]
........ [ 34%]
tests/unit/tf/inputs/test_tabular.py .................. [ 36%]
tests/unit/tf/layers/test_queue.py .............. [ 38%]
tests/unit/tf/losses/test_losses.py ....................... [ 41%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%]
tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%]
tests/unit/tf/models/test_base.py s........................ [ 47%]
tests/unit/tf/models/test_benchmark.py .. [ 47%]
tests/unit/tf/models/test_ranking.py .................................. [ 51%]
tests/unit/tf/models/test_retrieval.py F................................ [ 55%]
........................................... [ 60%]
tests/unit/tf/outputs/test_base.py ...... [ 61%]
tests/unit/tf/outputs/test_classification.py ...... [ 62%]
tests/unit/tf/outputs/test_contrastive.py .............. [ 64%]
tests/unit/tf/outputs/test_regression.py .. [ 64%]
tests/unit/tf/outputs/test_sampling.py .... [ 64%]
tests/unit/tf/outputs/test_topk.py . [ 64%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 66%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%]
tests/unit/tf/transformers/test_block.py ..................... [ 71%]
tests/unit/tf/transformers/test_transforms.py .......... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 72%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
.......................s...... [ 80%]
tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_sequence.py .................... [ 84%]
tests/unit/tf/transforms/test_tensor.py ... [ 84%]
tests/unit/tf/utils/test_batch.py .... [ 84%]
tests/unit/tf/utils/test_dataset.py .. [ 85%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 85%]
tests/unit/torch/test_dataset.py ......... [ 86%]
tests/unit/torch/test_public_api.py . [ 87%]
tests/unit/torch/block/test_base.py .... [ 87%]
tests/unit/torch/block/test_mlp.py . [ 87%]
tests/unit/torch/features/test_continuous.py .. [ 87%]
tests/unit/torch/features/test_embedding.py .............. [ 89%]
tests/unit/torch/features/test_tabular.py .... [ 89%]
tests/unit/torch/model/test_head.py ............ [ 91%]
tests/unit/torch/model/test_model.py .. [ 91%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 92%]
tests/unit/torch/tabular/test_tabular.py ... [ 92%]
tests/unit/torch/tabular/test_transformations.py ....... [ 93%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

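The `EE` next to `tests/unit/tf/horovod/test_horovod.py` above maps to the two setup errors in the ERRORS section below: the new test requests a `dedupe_callback` fixture that is not registered in any conftest. A purely hypothetical sketch of registering such a fixture (its real behavior is defined by the PR, not here), e.g. in `tests/unit/tf/horovod/conftest.py`:

```python
# Hypothetical sketch only -- the real fixture's semantics belong to the PR.
# Here it merely removes duplicate Keras callbacks, keeping the first instance
# of each callback class, so the parametrized test can get past fixture resolution.
import pytest


@pytest.fixture
def dedupe_callback():
    def _dedupe(callbacks):
        seen = set()
        unique = []
        for callback in callbacks:
            if type(callback) not in seen:
                seen.add(type(callback))
                unique.append(callback)
        return unique

    return _dedupe
```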
==================================== ERRORS ====================================
______________ ERROR at setup of test_horovod_multigpu_dlrm[True] ______________
file /var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py, line 19
      @pytest.mark.parametrize("custom_distributed_optimizer", [True, False])
      def test_horovod_multigpu_dlrm(
E       fixture 'dedupe_callback' not found

  available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, criteo_data, dask_client, doctest_namespace, ecommerce_data, monkeypatch, music_streaming_data, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, sequence_testing_data, social_data, tabular_schema, testing_data, testrun_uid, tf_cat_features, tf_clear_session, tf_con_features, tf_masking_inputs, tf_random_seed, tf_ranking_metrics_inputs, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, torch_cat_features, torch_con_features, torch_masking_inputs, torch_ranking_metrics_inputs, torch_seq_prediction_head_inputs, torch_seq_prediction_head_link_to_block, torch_tabular_data, torch_tabular_features, worker_id
  use 'pytest --fixtures [testpath]' for help on them.

/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py:19
_____________ ERROR at setup of test_horovod_multigpu_dlrm[False] ______________
file /var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py, line 19
      @pytest.mark.parametrize("custom_distributed_optimizer", [True, False])
      def test_horovod_multigpu_dlrm(
E       fixture 'dedupe_callback' not found

  available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, criteo_data, dask_client, doctest_namespace, ecommerce_data, monkeypatch, music_streaming_data, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, sequence_testing_data, social_data, tabular_schema, testing_data, testrun_uid, tf_cat_features, tf_clear_session, tf_con_features, tf_masking_inputs, tf_random_seed, tf_ranking_metrics_inputs, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, torch_cat_features, torch_con_features, torch_masking_inputs, torch_ranking_metrics_inputs, torch_seq_prediction_head_inputs, torch_seq_prediction_head_link_to_block, torch_tabular_data, torch_tabular_features, worker_id
  use 'pytest --fixtures [testpath]' for help on them.

/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py:19
=================================== FAILURES ===================================
_______________________ test_two_tower_shared_embeddings _______________________

def test_two_tower_shared_embeddings():
    train = make_df(
        {
            "user_id": [1, 3, 3, 4, 3, 1, 2, 4, 6, 7, 8, 9] * 100,
            "item_id": [1, 2, 3, 4, 11, 12, 5, 1, 1, 3, 5, 11] * 100,
            "item_id_hist": [
                [1, 3, 10],
                [1, 5],
                [4, 2, 1],
                [1, 2, 3],
                [1],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [1, 11],
            ]
            * 100,
        }
    )

    user_id = ["user_id"] >> nvt.ops.Categorify() >> nvt.ops.TagAsUserID()

    joint_feats = [["item_id_hist", "item_id"]] >> nvt.ops.Categorify()

    item_id = joint_feats["item_id"] >> nvt.ops.TagAsItemID()
    user_feat = joint_feats["item_id_hist"] >> nvt.ops.TagAsUserFeatures()
    outputs = user_id + item_id + user_feat

    train_dataset = Dataset(train)

    workflow = nvt.Workflow(outputs)
  workflow.fit(train_dataset)

tests/unit/tf/models/test_retrieval.py:59:


/usr/local/lib/python3.8/dist-packages/nvtabular/workflow/workflow.py:198: in fit
    self.executor.fit(ddf, current_phase)
/usr/local/lib/python3.8/dist-packages/merlin/dag/executors.py:243: in fit
    results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:571: in compute
    results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:553: in get_sync
    return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:496: in get_async
    for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
    return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
    raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:538: in submit
    fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in batch_execute_tasks
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in <listcomp>
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:225: in execute_task
    result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:220: in execute_task
    result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:119: in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/nvtabular/ops/categorify.py:760: in _top_level_groupby
    df_gb[cat_col_selector_str] = _concat(
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:119: in _concat
    else methods.concat(args2, uniform=True, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/dask/dataframe/dispatch.py:61: in concat
    return func(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask_cudf/backends.py:274: in concat_cudf
    return cudf.concat(dfs, axis=axis, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/cudf/core/reshape.py:410: in concat
    return cudf.Series._concat(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/cudf/core/series.py:1270: in _concat
    common_dtype = find_common_type([obj.dtype for obj in objs])
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:575: in find_common_type
    td_dtypes = set(
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:576: in <lambda>
    filter(lambda t: pd.api.types.is_timedelta64_dtype(t), dtypes)
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:419: in is_timedelta64_dtype
    return _is_dtype_type(arr_or_dtype, classes(np.timedelta64))
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:1619: in _is_dtype_type
    return condition(tipo)


tipo =

    return lambda tipo: issubclass(tipo, klasses)
E   TypeError: issubclass() arg 1 must be a class

/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:146: TypeError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 120 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 86 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:968: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file4ks__f2o.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:602: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                                                                   Stmts   Miss  Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 60 24% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 12 93% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 241 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 426 53 88% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 280 29 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 458 31 93% 
merlin/models/tf/loader.py 268 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 751 102 86% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 435 36 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 158 13 92% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11432 2277 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
= 1 failed, 824 passed, 12 skipped, 1358 warnings, 2 errors in 2470.72s (0:41:10) =
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins13763490154768572493.sh

nvidia-merlin-bot avatar Nov 03 '22 17:11 nvidia-merlin-bot
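The run above fails for two unrelated reasons: the missing `dedupe_callback` fixture in the new horovod tests and a cudf/pandas dtype error inside NVTabular's `Categorify` during `test_two_tower_shared_embeddings`. For context on what `test_horovod_multigpu_dlrm[custom_distributed_optimizer]` presumably exercises, here is a minimal, hypothetical sketch of Horovod data-parallel Keras training; it is not the PR's actual implementation:

```python
# Minimal hypothetical sketch of Horovod data-parallel Keras training.
# The PR's real integration with merlin.models is not reproduced here.
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()

# Pin each worker process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Scale the learning rate by the number of workers and wrap the optimizer so
# gradients are all-reduced across ranks before each update.
opt = tf.keras.optimizers.Adagrad(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

# model.compile(optimizer=opt, ...) plus
# hvd.callbacks.BroadcastGlobalVariablesCallback(0) in model.fit(...)
# would complete the usual Horovod recipe.
```

Launched with, e.g., `horovodrun -np 2 python <script>.py`, each rank would see one GPU and its own shard of the data.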

Click to view CI Results
GitHub pull request #825 of commit 2f75857a5aa608e9d1a9b32438b6afd7acd9748e, no merge conflicts.
Running as SYSTEM
Setting status of 2f75857a5aa608e9d1a9b32438b6afd7acd9748e to PENDING with url https://10.20.13.93:8080/job/merlin_models/1729/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 2f75857a5aa608e9d1a9b32438b6afd7acd9748e^{commit} # timeout=10
Checking out Revision 2f75857a5aa608e9d1a9b32438b6afd7acd9748e (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2f75857a5aa608e9d1a9b32438b6afd7acd9748e # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk fd096446573245273d09156272086bc6481ec4e6 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins13286830565025353949.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/2/merlin-models-0.9.0+54.g2f75857a.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.27.1,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.29.1,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader @ git+https://github.com/NVIDIA-Merlin/dataloader.git@5905283777ff5ebd748a1c91b7c9fde5710ae775,merlin-models==0.9.0+54.g2f75857a,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
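The tox environment above pins horovod==0.26.1 alongside tensorflow==2.9.2 and mpi4py==3.1.3. A hypothetical sanity check, not part of the CI job, that the installed horovod build actually exposes its TensorFlow/Keras extension:

```python
# Hypothetical sanity check: confirm the horovod wheel in this environment
# was built with TensorFlow support before the new horovod tests run.
import horovod
import horovod.tensorflow.keras as hvd  # ImportError here means no TF extension

hvd.init()
print(f"horovod {horovod.__version__}: rank {hvd.rank()} of {hvd.size()}")
```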
test-gpu run-test-pre: PYTHONHASHSEED='2433921918'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-gw5dcw6c
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-gw5dcw6c
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: betterproto<2.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.2.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.0 MB/s eta 0:00:00
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 76.4 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=218bc1590193356bad9e01c309e54e7159c1e10721655900e1f6fb942bc9ea15
  Stored in directory: /tmp/pip-ephem-wheel-cache-k7gz7eft/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-r5vqxohf
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-r5vqxohf
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 8e7edbafd3006f56e73efdc0c01c4445ab57d028
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.0.2+1.g5905283)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.6.0+1.g5926fcf)
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (1.8.1)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.19.5)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.64.1)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (7.0.0)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.10.0)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.56.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (21.3)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.0)
Requirement already satisfied: betterproto<2.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.6 MB/s eta 0:00:00
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+3.g8e7edbaf) (1.22.4)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.4.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.12.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.4.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (8.1.3)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.9.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.26.12)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.4)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 82.0 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.2)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.4.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.12.0)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (59.8.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+3.g8e7edbaf-cp38-cp38-linux_x86_64.whl size=257595 sha256=3df6e801d1265d1dac8dd1dd76ffa848758e6ac22e3c347c51706e790a6defd5
  Stored in directory: /tmp/pip-ephem-wheel-cache-zbpxc6pc/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+3.g8e7edbaf
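Since pip notes above that it cannot uninstall the dask, merlin-core, and nvtabular copies sitting in the system dist-packages (they live outside the `.tox/test-gpu` environment), it can help to confirm which distributions the test interpreter actually resolves after these installs. A minimal, hypothetical check, not part of the tox commands shown here:

```python
# Hypothetical sanity check, not part of the tox run: print the distribution
# versions the current interpreter resolves after the installs above.
from importlib.metadata import version  # available on Python 3.8+

for dist in ("dask", "merlin-core", "nvtabular"):
    # Expected per the log: dask 2022.5.1, merlin-core 0.8.0+5.g563be4b,
    # nvtabular 1.6.0+3.g8e7edbaf
    print(dist, version(dist))
```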
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 839 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 5%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 8%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s..................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 26%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 27%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py EE. [ 28%] tests/unit/tf/inputs/test_continuous.py ........ [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 41%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py F................................ [ 55%] ........................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 66%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 72%] tests/unit/tf/transforms/test_features.py s............................. [ 76%] .......................s...... [ 80%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 87%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

==================================== ERRORS ====================================
______________ ERROR at setup of test_horovod_multigpu_dlrm[True] ______________
file /var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py, line 19
  @pytest.mark.parametrize("custom_distributed_optimizer", [True, False])
  def test_horovod_multigpu_dlrm(
E       fixture 'dedupe_callback' not found

  available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, criteo_data, dask_client, doctest_namespace, ecommerce_data, monkeypatch, music_streaming_data, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, sequence_testing_data, social_data, tabular_schema, testing_data, testrun_uid, tf_cat_features, tf_clear_session, tf_con_features, tf_masking_inputs, tf_random_seed, tf_ranking_metrics_inputs, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, torch_cat_features, torch_con_features, torch_masking_inputs, torch_ranking_metrics_inputs, torch_seq_prediction_head_inputs, torch_seq_prediction_head_link_to_block, torch_tabular_data, torch_tabular_features, worker_id
  use 'pytest --fixtures [testpath]' for help on them.

/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py:19
_____________ ERROR at setup of test_horovod_multigpu_dlrm[False] ______________
file /var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py, line 19
  @pytest.mark.parametrize("custom_distributed_optimizer", [True, False])
  def test_horovod_multigpu_dlrm(
E       fixture 'dedupe_callback' not found

  available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, criteo_data, dask_client, doctest_namespace, ecommerce_data, monkeypatch, music_streaming_data, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, sequence_testing_data, social_data, tabular_schema, testing_data, testrun_uid, tf_cat_features, tf_clear_session, tf_con_features, tf_masking_inputs, tf_random_seed, tf_ranking_metrics_inputs, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, torch_cat_features, torch_con_features, torch_masking_inputs, torch_ranking_metrics_inputs, torch_seq_prediction_head_inputs, torch_seq_prediction_head_link_to_block, torch_tabular_data, torch_tabular_features, worker_id
  use 'pytest --fixtures [testpath]' for help on them.
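Both errors above are collection-time failures: the new horovod test requests a fixture named `dedupe_callback`, but no conftest.py defines it (the name is absent from the "available fixtures" list). A minimal sketch of a fixture stub that would satisfy the lookup, where only the fixture name is taken from the error message and the body and file path are hypothetical placeholders:

```python
# Hypothetical stub for tests/unit/tf/horovod/conftest.py.
# Only the fixture name comes from the error above; the body is a placeholder,
# not the callback the real test is meant to receive.
import pytest


@pytest.fixture
def dedupe_callback():
    def _passthrough(logs):
        # Placeholder behaviour: return the logs unchanged.
        return logs

    return _passthrough
```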

/var/jenkins_home/workspace/merlin_models/models/tests/unit/tf/horovod/test_horovod.py:19
=================================== FAILURES ===================================
_______________________ test_two_tower_shared_embeddings _______________________

def test_two_tower_shared_embeddings():
    train = make_df(
        {
            "user_id": [1, 3, 3, 4, 3, 1, 2, 4, 6, 7, 8, 9] * 100,
            "item_id": [1, 2, 3, 4, 11, 12, 5, 1, 1, 3, 5, 11] * 100,
            "item_id_hist": [
                [1, 3, 10],
                [1, 5],
                [4, 2, 1],
                [1, 2, 3],
                [1],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [1, 11],
            ]
            * 100,
        }
    )

    user_id = ["user_id"] >> nvt.ops.Categorify() >> nvt.ops.TagAsUserID()

    joint_feats = [["item_id_hist", "item_id"]] >> nvt.ops.Categorify()

    item_id = joint_feats["item_id"] >> nvt.ops.TagAsItemID()
    user_feat = joint_feats["item_id_hist"] >> nvt.ops.TagAsUserFeatures()
    outputs = user_id + item_id + user_feat

    train_dataset = Dataset(train)

    workflow = nvt.Workflow(outputs)
  workflow.fit(train_dataset)

tests/unit/tf/models/test_retrieval.py:59:


/usr/local/lib/python3.8/dist-packages/nvtabular/workflow/workflow.py:198: in fit
    self.executor.fit(ddf, current_phase)
/usr/local/lib/python3.8/dist-packages/merlin/dag/executors.py:243: in fit
    results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:571: in compute
    results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:553: in get_sync
    return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:496: in get_async
    for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
    return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
    raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:538: in submit
    fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in batch_execute_tasks
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in <listcomp>
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:225: in execute_task
    result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:220: in execute_task
    result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:119: in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/nvtabular/ops/categorify.py:760: in _top_level_groupby
    df_gb[cat_col_selector_str] = _concat(
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:119: in _concat
    else methods.concat(args2, uniform=True, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/dask/dataframe/dispatch.py:61: in concat
    return func(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask_cudf/backends.py:274: in concat_cudf
    return cudf.concat(dfs, axis=axis, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/cudf/core/reshape.py:410: in concat
    return cudf.Series._concat(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/cudf/core/series.py:1270: in _concat
    common_dtype = find_common_type([obj.dtype for obj in objs])
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:575: in find_common_type
    td_dtypes = set(
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:576: in <genexpr>
    filter(lambda t: pd.api.types.is_timedelta64_dtype(t), dtypes)
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:419: in is_timedelta64_dtype
    return _is_dtype_type(arr_or_dtype, classes(np.timedelta64))
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:1619: in _is_dtype_type
    return condition(tipo)


tipo =

    return lambda tipo: issubclass(tipo, klasses)
E   TypeError: issubclass() arg 1 must be a class
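The chain above ends in pandas' `_is_dtype_type`, whose final check is `issubclass(tipo, klasses)`; the TypeError therefore means that the `tipo` reaching that check is not a class. A standalone illustration of that raising pattern (plain numpy/Python, not cudf code):

```python
# Standalone illustration of the TypeError above: issubclass() insists that its
# first argument be a class, so passing an instance raises the same message.
import numpy as np

klasses = (np.timedelta64,)
print(issubclass(np.timedelta64, klasses))       # True: a class is accepted

try:
    issubclass(np.timedelta64(1, "s"), klasses)  # an instance, not a class
except TypeError as exc:
    print(exc)                                   # issubclass() arg 1 must be a class
```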

/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:146: TypeError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 120 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(
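The Tags deprecation warning above states its own fix; as a small illustration (assuming only that `merlin.schema.Tags` is importable, which the warning's file path suggests), the compound tag corresponds to a pair of atomic tags:

```python
# Illustration of the deprecation notice above: prefer atomic tags over the
# compound Tags.ITEM_ID / Tags.USER_ID members. Assumes merlin is installed.
from merlin.schema import Tags

compound = Tags.ITEM_ID          # deprecated per the warning
atomic = [Tags.ITEM, Tags.ID]    # replacement the warning recommends
print(compound, atomic)
```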

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 86 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:968: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_file8q_njty.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag_.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:602: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)
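This warning carries its own remedy: pass an explicit dtype when the input can be empty. A minimal pandas sketch, with an empty list standing in for the mask cudf builds internally:

```python
import pandas as pd

mask = []  # hypothetical empty input

s_warns = pd.Series(mask)                   # warns: default dtype will change
s_quiet = pd.Series(mask, dtype="float64")  # explicit dtype, no warning
```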

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 ----------- Name Stmts Miss Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 60 24% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 12 93% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 241 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 426 53 88% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 280 29 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 458 31 93% 
merlin/models/tf/loader.py 268 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 751 102 86% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 435 36 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 158 13 92% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11432 2277 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
= 1 failed, 824 passed, 12 skipped, 1358 warnings, 2 errors in 2463.07s (0:41:03) =
/usr/local/lib/python3.8/dist-packages/coverage/data.py:130: CoverageWarning: Data file '/var/jenkins_home/workspace/merlin_models/models/.coverage.10.20.17.231.26436.904643' doesn't seem to be a coverage data file: cannot unpack non-iterable NoneType object
  data._warn(str(exc))
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins7444739918330918454.sh

nvidia-merlin-bot avatar Nov 03 '22 18:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 6f9561059fa7bd060739641ab31baed21d3cd836, no merge conflicts.
Running as SYSTEM
Setting status of 6f9561059fa7bd060739641ab31baed21d3cd836 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1730/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 6f9561059fa7bd060739641ab31baed21d3cd836^{commit} # timeout=10
Checking out Revision 6f9561059fa7bd060739641ab31baed21d3cd836 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6f9561059fa7bd060739641ab31baed21d3cd836 # timeout=10
Commit message: "remove unused fixture in test"
 > git rev-list --no-walk 2f75857a5aa608e9d1a9b32438b6afd7acd9748e # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins1054064972615331925.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/2/merlin-models-0.9.0+55.g6f956105.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.27.1,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.29.1,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader @ git+https://github.com/NVIDIA-Merlin/dataloader.git@5905283777ff5ebd748a1c91b7c9fde5710ae775,merlin-models==0.9.0+55.g6f956105,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='2135462819'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-9gb3qedj
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-9gb3qedj
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (21.3)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: betterproto=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.1 MB/s eta 0:00:00
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 40.4 MB/s eta 0:00:00
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=f0380cd94d20d8947d4570542f92167ec5d1d3f2877e21bda137f5f84b6a48f7
  Stored in directory: /tmp/pip-ephem-wheel-cache-4lk29uvh/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-37ur6t2t
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-37ur6t2t
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 8e7edbafd3006f56e73efdc0c01c4445ab57d028
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.6.0+1.g5926fcf)
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (1.8.1)
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.0.2+1.g5905283)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.5)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.19.5)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (7.0.0)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.10.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (21.3)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.0)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.5)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.56.2)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.9 MB/s eta 0:00:00
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+3.g8e7edbaf) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.4.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (8.1.3)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.12.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.4.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.7.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.4)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.9.2)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 110.9 MB/s eta 0:00:00
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.26.12)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.1.2)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.12.0)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.39.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+3.g8e7edbaf-cp38-cp38-linux_x86_64.whl size=257595 sha256=6a9332500add4c988e67dbd86c09710d974c27f92a7fd538d626f6730c8233ce
  Stored in directory: /tmp/pip-ephem-wheel-cache-j1eb299t/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+3.g8e7edbaf
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 839 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 5%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 8%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s..................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 26%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 27%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py ... [ 28%] tests/unit/tf/inputs/test_continuous.py ........ [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 41%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py F................................ [ 55%] ........................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 66%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 72%] tests/unit/tf/transforms/test_features.py s............................. [ 76%] .......................s...... [ 80%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 87%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_______________________ test_two_tower_shared_embeddings _______________________

def test_two_tower_shared_embeddings():
    train = make_df(
        {
            "user_id": [1, 3, 3, 4, 3, 1, 2, 4, 6, 7, 8, 9] * 100,
            "item_id": [1, 2, 3, 4, 11, 12, 5, 1, 1, 3, 5, 11] * 100,
            "item_id_hist": [
                [1, 3, 10],
                [1, 5],
                [4, 2, 1],
                [1, 2, 3],
                [1],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [1, 11],
            ]
            * 100,
        }
    )

    user_id = ["user_id"] >> nvt.ops.Categorify() >> nvt.ops.TagAsUserID()

    joint_feats = [["item_id_hist", "item_id"]] >> nvt.ops.Categorify()

    item_id = joint_feats["item_id"] >> nvt.ops.TagAsItemID()
    user_feat = joint_feats["item_id_hist"] >> nvt.ops.TagAsUserFeatures()
    outputs = user_id + item_id + user_feat

    train_dataset = Dataset(train)

    workflow = nvt.Workflow(outputs)
>       workflow.fit(train_dataset)

tests/unit/tf/models/test_retrieval.py:59:


/usr/local/lib/python3.8/dist-packages/nvtabular/workflow/workflow.py:198: in fit
    self.executor.fit(ddf, current_phase)
/usr/local/lib/python3.8/dist-packages/merlin/dag/executors.py:243: in fit
    results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:571: in compute
    results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:553: in get_sync
    return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:496: in get_async
    for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
    return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
    raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:538: in submit
    fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in batch_execute_tasks
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in <listcomp>
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:225: in execute_task
    result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:220: in execute_task
    result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:119: in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/nvtabular/ops/categorify.py:760: in _top_level_groupby
    df_gb[cat_col_selector_str] = _concat(
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:119: in _concat
    else methods.concat(args2, uniform=True, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/dask/dataframe/dispatch.py:61: in concat
    return func(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask_cudf/backends.py:274: in concat_cudf
    return cudf.concat(dfs, axis=axis, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/cudf/core/reshape.py:410: in concat
    return cudf.Series._concat(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/cudf/core/series.py:1270: in _concat
    common_dtype = find_common_type([obj.dtype for obj in objs])
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:575: in find_common_type
    td_dtypes = set(
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:576: in <lambda>
    filter(lambda t: pd.api.types.is_timedelta64_dtype(t), dtypes)
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:419: in is_timedelta64_dtype
    return _is_dtype_type(arr_or_dtype, classes(np.timedelta64))
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:1619: in _is_dtype_type
    return condition(tipo)


tipo =

>       return lambda tipo: issubclass(tipo, klasses)
E       TypeError: issubclass() arg 1 must be a class

/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:146: TypeError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
  /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,
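These six warnings originate inside Keras, so they are not fixable from this repository, but the deprecation message maps the old constants onto the `Image.Resampling` enum for any Pillow code we do control. A minimal sketch (image size and mode are arbitrary):

```python
from PIL import Image

img = Image.new("RGB", (16, 16))

# Pre-9.1 spelling that the warning flags:
#   img.resize((8, 8), resample=Image.NEAREST)
# Spelling referenced by the deprecation message:
small = img.resize((8, 8), resample=Image.Resampling.NEAREST)
```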

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 120 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 86 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(
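
All three Tags warnings above ask for the same change in calling code: tag columns with the atomic `Tags.ITEM`/`Tags.USER`/`Tags.SESSION` plus `Tags.ID` rather than the compound `*_ID` tags. A minimal sketch of the forward-compatible spelling (the column name is made up):

```python
from merlin.schema import ColumnSchema, Schema, Tags

# Hypothetical column: atomic tags instead of the deprecated compound Tags.ITEM_ID.
item_id_col = ColumnSchema("item_id", tags=[Tags.ITEM, Tags.ID, Tags.CATEGORICAL])
schema = Schema([item_id_col])

# Selecting by an atomic tag keeps working after the compound tags are removed.
print(schema.select_by_tag(Tags.ID).column_names)
```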

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:968: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
    embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))
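
The fix this DeprecationWarning asks for is a one-line rename; a small sketch of the replacement call, assuming a GPU is available for CuPy (tensor values are made up):

```python
import cupy
import tensorflow as tf

with tf.device("/GPU:0"):  # CuPy needs the tensor to live on the GPU
    embeddings = tf.constant([[0.1, 0.2], [0.3, 0.4]])

# cupy.fromDlpack is deprecated; cupy.from_dlpack (CuPy >= 10) accepts the same
# DLPack capsule that TensorFlow produces.
embeddings_cupy = cupy.from_dlpack(tf.experimental.dlpack.to_dlpack(embeddings))
```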

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning
tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/horovod/test_horovod.py: 3 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
  /tmp/__autograph_generated_filem6tavvg8.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
    ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)
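
The autograph frames here are just tracing a `logger.warn(...)` call somewhere in the traced code; the rename the warning asks for is the standard logging one. A tiny illustrative sketch (logger name and message are invented):

```python
import logging

logger = logging.getLogger("merlin.models")  # illustrative name
# Logger.warn is a deprecated alias; Logger.warning is the supported method.
logger.warning("falling back to brute-force top-k scoring")  # made-up message
```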

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:602: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
    elif isinstance(self.feature_names, collections.Sequence):
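
Python 3.10 removes the ABC aliases from `collections`, so the check flagged here needs to go through `collections.abc`. A minimal sketch (the list contents are illustrative):

```python
import collections.abc

feature_names = ["item_id", "user_id"]
# collections.Sequence no longer exists on Python 3.10+; collections.abc.Sequence
# is the portable spelling of the same check.
if isinstance(feature_names, collections.abc.Sequence):
    print("feature_names is a sequence")
```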

tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_index.py: 5 warnings
tests/unit/tf/models/test_retrieval.py: 30 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
tests/unit/tf/utils/test_dataset.py: 1 warning
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead.
    warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
  /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version. Please use tf.keras.backend.random_bernoulli instead.
    return dispatch_target(*args, **kwargs)
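
The replacement named in this warning is a drop-in rename; a short sketch with made-up shape and probability:

```python
import tensorflow as tf

# tf.keras.backend.random_binomial is deprecated; random_bernoulli takes the
# same shape/p arguments.
noise_mask = tf.keras.backend.random_bernoulli(shape=(4, 8), p=0.3)
print(noise_mask.shape)  # (4, 8), entries in {0.0, 1.0}
```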

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
  /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
    super(SGD, self).__init__(name, **kwargs)
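
The Keras deprecation here is also a pure keyword rename; a one-liner showing the current spelling (the learning-rate value is arbitrary):

```python
import tensorflow as tf

# `lr` is deprecated on Keras optimizers; `learning_rate` is the current keyword.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
```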

tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_base.py::test_retrieval_model_query
tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings
tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True]
tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False]
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
    tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(
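
All of the IndexedSlices warnings above share one cause: the gradient of an embedding lookup (or ragged gather) is sparse, and something downstream converts it to a dense tensor. A minimal sketch of that pattern with made-up shapes; the warning itself fires when the dense shape is unknown or very large at graph-construction time:

```python
import tensorflow as tf

embeddings = tf.Variable(tf.random.normal([1000, 16]))
ids = tf.constant([1, 7, 42])

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.nn.embedding_lookup(embeddings, ids))

grad = tape.gradient(loss, embeddings)  # a tf.IndexedSlices, not a dense tensor
dense = tf.convert_to_tensor(grad)      # densifying it materializes the full [1000, 16] gradient
print(type(grad).__name__, dense.shape)
```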

tests/unit/torch/block/test_mlp.py::test_mlp_block
  /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
    return {key: torch.tensor(value) for key, value in data.items()}
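
The fix the warning suggests is to stack the arrays into one ndarray before handing them to torch; a small sketch with made-up data:

```python
import numpy as np
import torch

arrays = [np.arange(4), np.arange(4)]
# One contiguous ndarray avoids the slow list-of-ndarrays conversion path.
batch = torch.tensor(np.array(arrays))
print(batch.shape)  # torch.Size([2, 4])
```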

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
  /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
    warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")
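
The warning is benign (the wrapper drops the list columns itself), but it can be avoided by removing those columns from the schema before building the model. A rough sketch, under the assumption that `Schema.without` and an `XGBoost(schema, **xgb_params)` constructor behave as in current merlin-core / merlin-models; the schema contents are invented:

```python
from merlin.models.xgb import XGBoost
from merlin.schema import ColumnSchema, Schema, Tags

# Hand-built stand-in for a real dataset schema.
schema = Schema(
    [
        ColumnSchema("click", tags=[Tags.BINARY_CLASSIFICATION, Tags.TARGET]),
        ColumnSchema("user_age", tags=[Tags.CONTINUOUS]),
        ColumnSchema("item_genres", tags=[Tags.CATEGORICAL], is_list=True),
    ]
)

# Dropping the list column up front means nothing has to be silently ignored.
schema = schema.without(["item_genres"])
model = XGBoost(schema, objective="binary:logistic")  # objective is illustrative
```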

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
  /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
    self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
  /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
    client.wait_for_workers(n_workers)
  Enable tracemalloc to get traceback where the object was allocated.
  See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
  /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
    mask = pd.Series(mask)
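
The cudf/pandas deprecation goes away once the empty Series gets an explicit dtype; a one-line sketch:

```python
import pandas as pd

# An explicit dtype sidesteps the "default dtype for empty Series" deprecation.
mask = pd.Series([], dtype="bool")
```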

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                                              Stmts   Miss  Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 12 93% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 241 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 426 53 88% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 280 29 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 458 31 93% 
merlin/models/tf/loader.py 268 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 751 102 86% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 435 36 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 158 13 92% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11432 2266 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 1 failed, 826 passed, 12 skipped, 1358 warnings in 2586.70s (0:43:06) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins15584822063267809269.sh

nvidia-merlin-bot avatar Nov 03 '22 19:11 nvidia-merlin-bot

Click to view CI Results
GitHub pull request #825 of commit 2a56d07ca916db998ec4dbac85c057c2497ab284, no merge conflicts.
Running as SYSTEM
Setting status of 2a56d07ca916db998ec4dbac85c057c2497ab284 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1736/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/825/*:refs/remotes/origin/pr/825/* # timeout=10
 > git rev-parse 2a56d07ca916db998ec4dbac85c057c2497ab284^{commit} # timeout=10
Checking out Revision 2a56d07ca916db998ec4dbac85c057c2497ab284 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2a56d07ca916db998ec4dbac85c057c2497ab284 # timeout=10
Commit message: "Merge branch 'main' into tf/horovod-1"
 > git rev-list --no-walk 2049ce7c429abc1e30911ef45e34716d84eb8bf8 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins13032746952146454717.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: traitlets>=5.2.2 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (5.4.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
GLOB sdist-make: /var/jenkins_home/workspace/merlin_models/models/setup.py
test-gpu inst-nodeps: /var/jenkins_home/workspace/merlin_models/models/.tox/.tmp/package/2/merlin-models-0.9.0+57.g2a56d07c.zip
test-gpu installed: absl-py==1.2.0,aiohttp==3.8.1,aiosignal==1.2.0,alabaster==0.7.12,alembic==1.8.1,anyio==3.6.1,appdirs==1.4.4,argon2-cffi==21.3.0,argon2-cffi-bindings==21.2.0,astroid==2.5.6,asttokens==2.0.8,astunparse==1.6.3,asv==0.5.1,asvdb==0.4.2,async-timeout==4.0.2,attrs==22.1.0,autopage==0.5.1,awscli==1.27.1,Babel==2.10.3,backcall==0.2.0,beautifulsoup4==4.11.1,betterproto==1.2.5,black==20.8b1,bleach==5.0.1,boto3==1.24.75,botocore==1.29.1,Brotli==1.0.9,build==0.9.0,cachetools==5.2.0,certifi==2019.11.28,cffi==1.15.1,chardet==3.0.4,charset-normalizer==2.1.1,check-manifest==0.48,clang==5.0,click==8.1.3,cliff==4.0.0,cloudpickle==2.2.0,cmaes==0.8.2,cmake==3.24.1.1,cmd2==2.4.2,colorama==0.4.4,colorlog==6.7.0,contourpy==1.0.5,coverage==6.5.0,cuda-python==11.7.1,cupy-cuda117==10.6.0,cycler==0.11.0,Cython==0.29.32,dask==2022.1.1,dbus-python==1.2.16,debugpy==1.6.3,decorator==5.1.1,defusedxml==0.7.1,dill==0.3.5.1,distlib==0.3.6,distributed==2022.5.1,distro==1.7.0,dm-tree==0.1.6,docker-pycreds==0.4.0,docutils==0.16,emoji==1.7.0,entrypoints==0.4,execnet==1.9.0,executing==1.0.0,faiss==1.7.2,faiss-gpu==1.7.2,fastai==2.7.9,fastapi==0.85.0,fastavro==1.6.1,fastcore==1.5.27,fastdownload==0.0.7,fastjsonschema==2.16.1,fastprogress==1.0.3,fastrlock==0.8,feast==0.19.4,fiddle==0.2.2,filelock==3.8.0,flake8==5.0.4,flatbuffers==1.12,fonttools==4.37.3,frozenlist==1.3.1,fsspec==2022.5.0,gast==0.4.0,gevent==21.12.0,geventhttpclient==2.0.2,gitdb==4.0.9,GitPython==3.1.27,google==3.0.0,google-api-core==2.10.1,google-auth==2.11.1,google-auth-oauthlib==0.4.6,google-pasta==0.2.0,googleapis-common-protos==1.52.0,graphviz==0.20.1,greenlet==1.1.3,grpcio==1.41.0,grpcio-channelz==1.49.0,grpcio-reflection==1.48.1,grpclib==0.4.3,h11==0.13.0,h2==4.1.0,h5py==3.7.0,HeapDict==1.0.1,horovod==0.26.1,hpack==4.0.0,httptools==0.5.0,hugectr2onnx==0.0.0,huggingface-hub==0.9.1,hyperframe==6.0.1,idna==2.8,imagesize==1.4.1,implicit==0.6.1,importlib-metadata==4.12.0,importlib-resources==5.9.0,iniconfig==1.1.1,interrogate==1.5.0,ipykernel==6.15.3,ipython==8.5.0,ipython-genutils==0.2.0,ipywidgets==7.7.0,isort==5.10.1,jedi==0.18.1,Jinja2==3.1.2,jmespath==1.0.1,joblib==1.2.0,json5==0.9.10,jsonschema==4.16.0,jupyter==1.0.0,jupyter-cache==0.4.3,jupyter-console==6.4.4,jupyter-core==4.11.1,jupyter-server==1.18.1,jupyter-server-mathjax==0.2.5,jupyter-sphinx==0.3.2,jupyter_client==7.3.5,jupyterlab==3.4.7,jupyterlab-pygments==0.2.2,jupyterlab-widgets==1.1.0,jupyterlab_server==2.15.1,keras==2.9.0,Keras-Preprocessing==1.1.2,kiwisolver==1.4.4,lazy-object-proxy==1.8.0,libclang==14.0.6,libcst==0.4.7,lightfm==1.16,lightgbm==3.3.2,linkify-it-py==1.0.3,llvmlite==0.39.1,locket==1.0.0,lxml==4.9.1,Mako==1.2.3,Markdown==3.4.1,markdown-it-py==1.1.0,MarkupSafe==2.1.1,matplotlib==3.6.0,matplotlib-inline==0.1.6,mccabe==0.7.0,mdit-py-plugins==0.2.8,merlin-core==0.6.0+1.g5926fcf,merlin-dataloader @ git+https://github.com/NVIDIA-Merlin/dataloader.git@61ca2edae832da4eb2c6b93390c24920e68de1ae,merlin-models==0.9.0+57.g2a56d07c,merlin-systems==0.5.0+4.g15074ad,mistune==2.0.4,mmh3==3.0.0,mpi4py==3.1.3,msgpack==1.0.4,multidict==6.0.2,mypy==0.982,mypy-extensions==0.4.3,myst-nb==0.13.2,myst-parser==0.15.2,natsort==8.1.0,nbclassic==0.4.3,nbclient==0.6.8,nbconvert==7.0.0,nbdime==3.1.1,nbformat==5.5.0,nest-asyncio==1.5.5,ninja==1.10.2.3,notebook==6.4.12,notebook-shim==0.1.0,numba==0.56.2,numpy==1.22.4,nvidia-pyindex==1.0.9,# Editable install with no version control (nvtabular==1.4.0+8.g95e12d347),-e 
/usr/local/lib/python3.8/dist-packages,nvtx==0.2.5,oauthlib==3.2.1,oldest-supported-numpy==2022.8.16,onnx==1.12.0,onnxruntime==1.11.1,opt-einsum==3.3.0,optuna==3.0.3,packaging==21.3,pandas==1.3.5,pandavro==1.5.2,pandocfilters==1.5.0,parso==0.8.3,partd==1.3.0,pathspec==0.10.1,pathtools==0.1.2,pbr==5.11.0,pep517==0.13.0,pexpect==4.8.0,pickleshare==0.7.5,Pillow==9.2.0,pkgutil_resolve_name==1.3.10,platformdirs==2.5.2,plotly==5.11.0,pluggy==1.0.0,prettytable==3.5.0,prometheus-client==0.14.1,promise==2.3,prompt-toolkit==3.0.31,proto-plus==1.19.6,protobuf==3.19.5,psutil==5.9.2,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyarrow==7.0.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pybind11==2.10.0,pycodestyle==2.9.1,pycparser==2.21,pydantic==1.10.2,pydot==1.4.2,pyflakes==2.5.0,Pygments==2.13.0,PyGObject==3.36.0,pynvml==11.4.1,pyparsing==3.0.9,pyperclip==1.8.2,pyrsistent==0.18.1,pytest==7.1.3,pytest-cov==4.0.0,pytest-xdist==3.0.2,python-apt==2.0.0+ubuntu0.20.4.8,python-dateutil==2.8.2,python-dotenv==0.21.0,python-rapidjson==1.8,pytz==2022.2.1,PyYAML==5.4.1,pyzmq==24.0.0,qtconsole==5.3.2,QtPy==2.2.1,regex==2022.9.13,requests==2.22.0,requests-oauthlib==1.3.1,requests-unixsocket==0.2.0,rsa==4.7.2,s3fs==2022.2.0,s3transfer==0.6.0,sacremoses==0.0.53,scikit-build==0.15.0,scikit-learn==1.1.2,scipy==1.8.1,seedir==0.3.0,Send2Trash==1.8.0,sentry-sdk==1.9.8,setproctitle==1.3.2,setuptools-scm==7.0.5,shortuuid==1.0.9,six==1.15.0,sklearn==0.0,smmap==5.0.0,sniffio==1.3.0,snowballstemmer==2.2.0,sortedcontainers==2.4.0,soupsieve==2.3.2.post1,Sphinx==5.3.0,sphinx-multiversion==0.2.4,sphinx-togglebutton==0.3.1,sphinx_external_toc==0.3.0,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-copydirs @ git+https://github.com/mikemckiernan/sphinxcontrib-copydirs.git@bd8c5d79b3f91cf5f1bb0d6995aeca3fe84b670e,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==2.0.0,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.5,SQLAlchemy==1.4.42,stack-data==0.5.0,starlette==0.20.4,stevedore==4.1.0,stringcase==1.2.0,supervisor==4.1.0,tabulate==0.8.10,tblib==1.7.0,tdqm==0.0.1,tenacity==8.0.1,tensorboard==2.9.1,tensorboard-data-server==0.6.1,tensorboard-plugin-wit==1.8.1,tensorflow==2.9.2,tensorflow-estimator==2.9.0,tensorflow-gpu==2.9.2,tensorflow-io-gcs-filesystem==0.27.0,tensorflow-metadata==1.10.0,tensorflow-ranking==0.5.1,tensorflow-serving-api==2.9.2,termcolor==2.0.1,terminado==0.15.0,testbook==0.4.2,threadpoolctl==3.1.0,tinycss2==1.1.1,tokenizers==0.10.3,toml==0.10.2,tomli==2.0.1,toolz==0.12.0,torch==1.12.1+cu113,torchmetrics==0.3.2,tornado==6.2,tox==3.26.0,tqdm==4.64.1,traitlets==5.4.0,transformers==4.12.0,transformers4rec==0.1.12+2.gbcc939255,treelite==2.3.0,treelite-runtime==2.3.0,tritonclient==2.25.0,typed-ast==1.5.4,typing-inspect==0.8.0,typing_extensions==4.3.0,uc-micro-py==1.0.1,urllib3==1.26.12,uvicorn==0.18.3,uvloop==0.17.0,versioneer==0.20,virtualenv==20.16.5,wandb==0.13.3,watchfiles==0.17.0,wcwidth==0.2.5,webencodings==0.5.1,websocket-client==1.4.1,websockets==10.3,Werkzeug==2.2.2,widgetsnbextension==3.6.0,wrapt==1.12.1,xgboost==1.6.2,yarl==1.8.1,zict==2.2.0,zipp==3.8.1,zope.event==4.5.0,zope.interface==5.4.0
test-gpu run-test-pre: PYTHONHASHSEED='2052535938'
test-gpu run-test: commands[0] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/core.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/core.git
  Cloning https://github.com/NVIDIA-Merlin/core.git to /tmp/pip-req-build-g8m61j_f
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/core.git /tmp/pip-req-build-g8m61j_f
  Resolved https://github.com/NVIDIA-Merlin/core.git to commit 563be4bf5ef675940d5fff2b5e4666424a7f7947
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.10.0)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.0)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (7.0.0)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (0.56.2)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (1.3.5)
Requirement already satisfied: betterproto=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core==0.8.0+5.g563be4b) (3.19.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 9.2 MB/s eta 0:00:00
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.4.1)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (8.1.3)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 114.4 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (6.2)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.4.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (5.9.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.26.12)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.2.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core==0.8.0+5.g563be4b) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (4.12.0)
Requirement already satisfied: numpy=1.18 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core==0.8.0+5.g563be4b) (1.22.4)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core==0.8.0+5.g563be4b) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core==0.8.0+5.g563be4b) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core==0.8.0+5.g563be4b) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core==0.8.0+5.g563be4b) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core==0.8.0+5.g563be4b) (6.0.1)
Building wheels for collected packages: merlin-core
  Building wheel for merlin-core (pyproject.toml): started
  Building wheel for merlin-core (pyproject.toml): finished with status 'done'
  Created wheel for merlin-core: filename=merlin_core-0.8.0+5.g563be4b-py3-none-any.whl size=118346 sha256=a1c79a2a7360a76b125492bee1c82422306451abf2c1c897808c871cd1b3bb86
  Stored in directory: /tmp/pip-ephem-wheel-cache-ng93thqz/wheels/c8/38/16/a6968787eafcec5fa772148af8408b089562f71af0752e8e84
Successfully built merlin-core
Installing collected packages: dask, merlin-core
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: merlin-core
    Found existing installation: merlin-core 0.6.0+1.g5926fcf
    Not uninstalling merlin-core at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'merlin-core'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 merlin-core-0.8.0+5.g563be4b
test-gpu run-test: commands[1] | python -m pip install --upgrade git+https://github.com/NVIDIA-Merlin/nvtabular.git
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/NVIDIA-Merlin/nvtabular.git
  Cloning https://github.com/NVIDIA-Merlin/nvtabular.git to /tmp/pip-req-build-gzuv1m9l
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA-Merlin/nvtabular.git /tmp/pip-req-build-gzuv1m9l
  Resolved https://github.com/NVIDIA-Merlin/nvtabular.git to commit 8e7edbafd3006f56e73efdc0c01c4445ab57d028
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (1.8.1)
Requirement already satisfied: merlin-core>=0.2.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.6.0+1.g5926fcf)
Requirement already satisfied: merlin-dataloader>=0.0.2 in ./.tox/test-gpu/lib/python3.8/site-packages (from nvtabular==1.6.0+3.g8e7edbaf) (0.0.2+2.g61ca2ed)
Requirement already satisfied: tensorflow-metadata>=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.10.0)
Requirement already satisfied: fsspec==2022.5.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.0)
Requirement already satisfied: pyarrow>=5.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (7.0.0)
Requirement already satisfied: tqdm>=4.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.64.1)
Requirement already satisfied: distributed>=2022.3.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.5.1)
Requirement already satisfied: numba>=0.54 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.56.2)
Requirement already satisfied: pandas=1.2.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.5)
Requirement already satisfied: betterproto=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (21.3)
Requirement already satisfied: protobuf>=3.0.0 in /usr/local/lib/python3.8/dist-packages (from merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.19.5)
Collecting dask>=2022.3.0
  Downloading dask-2022.10.2-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 10.2 MB/s eta 0:00:00
Requirement already satisfied: numpy=1.17.3 in /usr/local/lib/python3.8/dist-packages (from scipy->nvtabular==1.6.0+3.g8e7edbaf) (1.22.4)
Requirement already satisfied: grpclib in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.4.3)
Requirement already satisfied: stringcase in /usr/local/lib/python3.8/dist-packages (from betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.12.0)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.3.0)
Requirement already satisfied: pyyaml>=5.3.1 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.4.1)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.8/dist-packages (from dask>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (8.1.3)
  Downloading dask-2022.5.1-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 99.1 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.1.2)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.2)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.4.0)
Requirement already satisfied: locket>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (5.9.2)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.26.12)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.2.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.4)
Requirement already satisfied: llvmlite=0.39.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (0.39.1)
Requirement already satisfied: setuptools=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (59.8.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.8/dist-packages (from numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.12.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2022.2.1)
Requirement already satisfied: absl-py=0.9 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.2.0)
Requirement already satisfied: googleapis-common-protos=1.52.0 in /usr/local/lib/python3.8/dist-packages (from tensorflow-metadata>=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.52.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas=1.2.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (1.0.1)
Requirement already satisfied: h2=3.1.0 in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.1.0)
Requirement already satisfied: multidict in /usr/local/lib/python3.8/dist-packages (from grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata->numba>=0.54->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (3.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->distributed>=2022.3.0->merlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (2.1.1)
Requirement already satisfied: hpack=4.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (4.0.0)
Requirement already satisfied: hyperframe=6.0 in /usr/local/lib/python3.8/dist-packages (from h2=3.1.0->grpclib->betterprotomerlin-core>=0.2.0->nvtabular==1.6.0+3.g8e7edbaf) (6.0.1)
Building wheels for collected packages: nvtabular
  Building wheel for nvtabular (pyproject.toml): started
  Building wheel for nvtabular (pyproject.toml): finished with status 'done'
  Created wheel for nvtabular: filename=nvtabular-1.6.0+3.g8e7edbaf-cp38-cp38-linux_x86_64.whl size=257595 sha256=5e4f93c4fcc27e1f67643a6e5968cf7da4cdcab1d8220e36f32e89363ea4f325
  Stored in directory: /tmp/pip-ephem-wheel-cache-7acg39qg/wheels/8f/d9/f9/30f2cdc5bf8787fae6fdfe55afd6e1b493e619ec32c32ec40b
Successfully built nvtabular
Installing collected packages: dask, nvtabular
  Attempting uninstall: dask
    Found existing installation: dask 2022.1.1
    Not uninstalling dask at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'dask'. No files were found to uninstall.
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 1.4.0+8.g95e12d347
    Not uninstalling nvtabular at /usr/local/lib/python3.8/dist-packages, outside environment /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu
    Can't uninstall 'nvtabular'. No files were found to uninstall.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask<2022.02.0,>=2021.*, but you have dask 2022.5.1 which is incompatible.
Successfully installed dask-2022.5.1 nvtabular-1.6.0+3.g8e7edbaf
test-gpu run-test: commands[2] | python -m pytest --cov-report term --cov merlin -rxs tests/unit
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
cachedir: .tox/test-gpu/.pytest_cache
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-3.0.2, cov-4.0.0
collected 839 items

tests/unit/config/test_schema.py .... [ 0%] tests/unit/datasets/test_advertising.py .s [ 0%] tests/unit/datasets/test_ecommerce.py ..sss [ 1%] tests/unit/datasets/test_entertainment.py ....sss. [ 2%] tests/unit/datasets/test_social.py . [ 2%] tests/unit/datasets/test_synthetic.py ...... [ 3%] tests/unit/implicit/test_implicit.py . [ 3%] tests/unit/lightfm/test_lightfm.py . [ 3%] tests/unit/tf/test_core.py ...... [ 4%] tests/unit/tf/test_loader.py ................ [ 5%] tests/unit/tf/test_public_api.py . [ 6%] tests/unit/tf/blocks/test_cross.py ........... [ 7%] tests/unit/tf/blocks/test_dlrm.py .......... [ 8%] tests/unit/tf/blocks/test_interactions.py ... [ 8%] tests/unit/tf/blocks/test_mlp.py ................................. [ 12%] tests/unit/tf/blocks/test_optimizer.py s................................ [ 16%] ..................... [ 19%] tests/unit/tf/blocks/retrieval/test_base.py . [ 19%] tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 19%] tests/unit/tf/blocks/retrieval/test_two_tower.py ............ [ 21%] tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 21%] tests/unit/tf/blocks/sampling/test_in_batch.py . [ 21%] tests/unit/tf/core/test_aggregation.py ......... [ 22%] tests/unit/tf/core/test_base.py .. [ 22%] tests/unit/tf/core/test_combinators.py s..................... [ 25%] tests/unit/tf/core/test_encoder.py .. [ 25%] tests/unit/tf/core/test_index.py ... [ 25%] tests/unit/tf/core/test_prediction.py .. [ 26%] tests/unit/tf/core/test_tabular.py ...... [ 26%] tests/unit/tf/examples/test_01_getting_started.py . [ 26%] tests/unit/tf/examples/test_02_dataschema.py . [ 27%] tests/unit/tf/examples/test_03_exploring_different_models.py . [ 27%] tests/unit/tf/examples/test_04_export_ranking_models.py . [ 27%] tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 27%] tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 27%] tests/unit/tf/examples/test_07_train_traditional_models.py . [ 27%] tests/unit/tf/examples/test_usecase_accelerate_training_by_lazyadam.py . [ 27%] [ 27%] tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 27%] tests/unit/tf/examples/test_usecase_incremental_training_layer_freezing.py . [ 28%] [ 28%] tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 28%] tests/unit/tf/examples/test_usecase_retrieval_with_hpo.py . [ 28%] tests/unit/tf/horovod/test_horovod.py ... [ 28%] tests/unit/tf/inputs/test_continuous.py ........ [ 29%] tests/unit/tf/inputs/test_embedding.py ................................. [ 33%] ........ [ 34%] tests/unit/tf/inputs/test_tabular.py .................. [ 36%] tests/unit/tf/layers/test_queue.py .............. [ 38%] tests/unit/tf/losses/test_losses.py ....................... [ 41%] tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 41%] tests/unit/tf/metrics/test_metrics_topk.py ......................... [ 44%] tests/unit/tf/models/test_base.py s........................ [ 47%] tests/unit/tf/models/test_benchmark.py .. [ 47%] tests/unit/tf/models/test_ranking.py .................................. [ 51%] tests/unit/tf/models/test_retrieval.py F................................ [ 55%] ........................................... [ 60%] tests/unit/tf/outputs/test_base.py ...... [ 61%] tests/unit/tf/outputs/test_classification.py ...... [ 62%] tests/unit/tf/outputs/test_contrastive.py .............. [ 64%] tests/unit/tf/outputs/test_regression.py .. [ 64%] tests/unit/tf/outputs/test_sampling.py .... [ 64%] tests/unit/tf/outputs/test_topk.py . 
[ 64%] tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%] tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 66%] tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 67%] tests/unit/tf/prediction_tasks/test_regression.py ..... [ 68%] tests/unit/tf/prediction_tasks/test_retrieval.py . [ 68%] tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 69%] tests/unit/tf/transformers/test_block.py ..................... [ 71%] tests/unit/tf/transformers/test_transforms.py .......... [ 72%] tests/unit/tf/transforms/test_bias.py .. [ 72%] tests/unit/tf/transforms/test_features.py s............................. [ 76%] .......................s...... [ 80%] tests/unit/tf/transforms/test_negative_sampling.py ......... [ 81%] tests/unit/tf/transforms/test_noise.py ..... [ 81%] tests/unit/tf/transforms/test_sequence.py .................... [ 84%] tests/unit/tf/transforms/test_tensor.py ... [ 84%] tests/unit/tf/utils/test_batch.py .... [ 84%] tests/unit/tf/utils/test_dataset.py .. [ 85%] tests/unit/tf/utils/test_tf_utils.py ..... [ 85%] tests/unit/torch/test_dataset.py ......... [ 86%] tests/unit/torch/test_public_api.py . [ 87%] tests/unit/torch/block/test_base.py .... [ 87%] tests/unit/torch/block/test_mlp.py . [ 87%] tests/unit/torch/features/test_continuous.py .. [ 87%] tests/unit/torch/features/test_embedding.py .............. [ 89%] tests/unit/torch/features/test_tabular.py .... [ 89%] tests/unit/torch/model/test_head.py ............ [ 91%] tests/unit/torch/model/test_model.py .. [ 91%] tests/unit/torch/tabular/test_aggregation.py ........ [ 92%] tests/unit/torch/tabular/test_tabular.py ... [ 92%] tests/unit/torch/tabular/test_transformations.py ....... [ 93%] tests/unit/utils/test_schema_utils.py ................................ [ 97%] tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
_______________________ test_two_tower_shared_embeddings _______________________

def test_two_tower_shared_embeddings():
    train = make_df(
        {
            "user_id": [1, 3, 3, 4, 3, 1, 2, 4, 6, 7, 8, 9] * 100,
            "item_id": [1, 2, 3, 4, 11, 12, 5, 1, 1, 3, 5, 11] * 100,
            "item_id_hist": [
                [1, 3, 10],
                [1, 5],
                [4, 2, 1],
                [1, 2, 3],
                [1],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [3, 4],
                [1, 3, 10],
                [11, 3, 10],
                [1, 11],
            ]
            * 100,
        }
    )

    user_id = ["user_id"] >> nvt.ops.Categorify() >> nvt.ops.TagAsUserID()

    joint_feats = [["item_id_hist", "item_id"]] >> nvt.ops.Categorify()

    item_id = joint_feats["item_id"] >> nvt.ops.TagAsItemID()
    user_feat = joint_feats["item_id_hist"] >> nvt.ops.TagAsUserFeatures()
    outputs = user_id + item_id + user_feat

    train_dataset = Dataset(train)

    workflow = nvt.Workflow(outputs)
  workflow.fit(train_dataset)

tests/unit/tf/models/test_retrieval.py:59:


/usr/local/lib/python3.8/dist-packages/nvtabular/workflow/workflow.py:198: in fit
    self.executor.fit(ddf, current_phase)
/usr/local/lib/python3.8/dist-packages/merlin/dag/executors.py:243: in fit
    results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:571: in compute
    results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:553: in get_sync
    return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:496: in get_async
    for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
    return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
    raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:538: in submit
    fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in batch_execute_tasks
    return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:234: in
    return [execute_task(a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:225: in execute_task
    result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:220: in execute_task
    result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:119: in _execute_task
    return func((_execute_task(a, cache) for a in args))
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/nvtabular/ops/categorify.py:760: in _top_level_groupby
    df_gb[cat_col_selector_str] = _concat(
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:119: in _concat
    else methods.concat(args2, uniform=True, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/dask/dataframe/dispatch.py:61: in concat
    return func(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask_cudf/backends.py:274: in concat_cudf
    return cudf.concat(dfs, axis=axis, ignore_index=ignore_index)
/usr/local/lib/python3.8/dist-packages/cudf/core/reshape.py:410: in concat
    return cudf.Series._concat(
/usr/local/lib/python3.8/dist-packages/nvtx/nvtx.py:101: in inner
    result = func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/cudf/core/series.py:1270: in _concat
    common_dtype = find_common_type([obj.dtype for obj in objs])
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:575: in find_common_type
    td_dtypes = set(
/usr/local/lib/python3.8/dist-packages/cudf/utils/dtypes.py:576: in
    filter(lambda t: pd.api.types.is_timedelta64_dtype(t), dtypes)
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:419: in is_timedelta64_dtype
    return _is_dtype_type(arr_or_dtype, classes(np.timedelta64))
/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:1619: in _is_dtype_type
    return condition(tipo)


tipo =

    return lambda tipo: issubclass(tipo, klasses)
E   TypeError: issubclass() arg 1 must be a class

/usr/local/lib/python3.8/dist-packages/pandas/core/dtypes/common.py:146: TypeError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/init.py:89
  /usr/lib/python3/dist-packages/requests/init.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
    warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41 /usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 6 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 6 warnings tests/unit/tf/core/test_index.py: 8 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 38 warnings tests/unit/tf/models/test_retrieval.py: 120 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning tests/unit/tf/transformers/test_block.py: 15 warnings tests/unit/tf/transforms/test_bias.py: 2 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_noise.py: 1 warning tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 9 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 3 warnings tests/unit/xgb/test_xgboost.py: 18 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings tests/unit/datasets/test_entertainment.py: 4 warnings tests/unit/datasets/test_social.py: 1 warning tests/unit/datasets/test_synthetic.py: 5 warnings tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_core.py: 6 warnings tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/test_cross.py: 5 warnings tests/unit/tf/blocks/test_dlrm.py: 9 warnings tests/unit/tf/blocks/test_interactions.py: 2 warnings tests/unit/tf/blocks/test_mlp.py: 26 warnings tests/unit/tf/blocks/test_optimizer.py: 30 warnings tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 11 warnings tests/unit/tf/core/test_aggregation.py: 6 warnings tests/unit/tf/core/test_base.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 7 warnings tests/unit/tf/core/test_index.py: 3 warnings tests/unit/tf/core/test_prediction.py: 2 warnings tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 7 warnings tests/unit/tf/inputs/test_embedding.py: 20 warnings tests/unit/tf/inputs/test_tabular.py: 18 warnings tests/unit/tf/models/test_base.py: 27 warnings tests/unit/tf/models/test_benchmark.py: 2 warnings tests/unit/tf/models/test_ranking.py: 36 warnings tests/unit/tf/models/test_retrieval.py: 86 warnings tests/unit/tf/outputs/test_base.py: 6 warnings tests/unit/tf/outputs/test_classification.py: 6 warnings tests/unit/tf/outputs/test_contrastive.py: 19 warnings tests/unit/tf/outputs/test_regression.py: 2 warnings tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings tests/unit/tf/transformers/test_block.py: 9 warnings tests/unit/tf/transforms/test_features.py: 10 warnings tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings tests/unit/tf/transforms/test_sequence.py: 15 warnings tests/unit/tf/utils/test_batch.py: 7 warnings tests/unit/tf/utils/test_dataset.py: 2 warnings tests/unit/torch/block/test_base.py: 4 warnings tests/unit/torch/block/test_mlp.py: 1 warning tests/unit/torch/features/test_continuous.py: 1 warning tests/unit/torch/features/test_embedding.py: 4 warnings tests/unit/torch/features/test_tabular.py: 4 warnings tests/unit/torch/model/test_head.py: 12 warnings tests/unit/torch/model/test_model.py: 2 warnings tests/unit/torch/tabular/test_aggregation.py: 6 warnings tests/unit/torch/tabular/test_transformations.py: 2 warnings tests/unit/xgb/test_xgboost.py: 17 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning tests/unit/implicit/test_implicit.py: 1 warning tests/unit/lightfm/test_lightfm.py: 1 warning tests/unit/tf/test_loader.py: 1 warning tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings tests/unit/tf/core/test_combinators.py: 11 warnings tests/unit/tf/core/test_encoder.py: 2 warnings tests/unit/tf/core/test_prediction.py: 1 warning tests/unit/tf/horovod/test_horovod.py: 1 warning tests/unit/tf/inputs/test_continuous.py: 4 warnings tests/unit/tf/inputs/test_embedding.py: 9 warnings tests/unit/tf/inputs/test_tabular.py: 8 warnings tests/unit/tf/models/test_ranking.py: 20 warnings tests/unit/tf/models/test_retrieval.py: 10 warnings tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings tests/unit/xgb/test_xgboost.py: 12 warnings /usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>]. warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:968: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/blocks/retrieval/test_two_tower.py: 1 warning tests/unit/tf/core/test_index.py: 4 warnings tests/unit/tf/horovod/test_horovod.py: 3 warnings tests/unit/tf/models/test_retrieval.py: 54 warnings tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings tests/unit/tf/utils/test_batch.py: 2 warnings /tmp/autograph_generated_filesjr4kq9d.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead ag.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/core/test_combinators.py::test_parallel_block_select_by_tags /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/tabular.py:602: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working elif isinstance(self.feature_names, collections.Sequence):

tests/unit/tf/core/test_encoder.py: 1 warning tests/unit/tf/core/test_index.py: 5 warnings tests/unit/tf/models/test_retrieval.py: 30 warnings tests/unit/tf/utils/test_batch.py: 4 warnings tests/unit/tf/utils/test_dataset.py: 1 warning /var/jenkins_home/workspace/merlin_models/models/merlin/models/utils/dataset.py:75: DeprecationWarning: unique_rows_by_features is deprecated and will be removed in a future version. Please use unique_by_tag instead. warnings.warn(

tests/unit/tf/models/test_base.py::test_model_pre_post[True] tests/unit/tf/models/test_base.py::test_model_pre_post[False] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5] tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7] /usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead. return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True] tests/unit/tf/models/test_base.py::test_freeze_sequential_block tests/unit/tf/models/test_base.py::test_freeze_unfreeze tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks /usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead. super(SGD, self).init(name, **kwargs)

tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_base.py::test_retrieval_model_query tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_v2_export_embeddings tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[True] tests/unit/tf/models/test_retrieval.py::test_youtube_dnn_topk_evaluation[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/utils/tf_utils.py:298: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack tensor_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(tensor)))

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False] tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True] tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:569: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_as_classfication_model[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/bert_block/prepare_transformer_inputs_1/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_causal_language_modeling[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None, 48), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/prepare_transformer_inputs_5/RaggedToTensor/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_3:0", shape=(None,), dtype=int64), values=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Reshape_2:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_1:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling[False] tests/unit/tf/transformers/test_block.py::test_transformer_with_masked_language_modeling_check_eval_masked[False] /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/gpt2_block/replace_masked_embeddings/RaggedWhere/RaggedTile_2/Reshape_3:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/concat_features/RaggedConcat/Slice_3:0", shape=(None, None), dtype=float32), dense_shape=Tensor("gradient_tape/model/concat_features/RaggedConcat/Shape_1:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory. warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block /var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.) return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix] tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix] tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple tests/unit/xgb/test_xgboost.py::TestEvals::test_default tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data /var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/init.py:344: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres']. warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective /usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings /usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited client.wait_for_workers(n_workers) Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings /usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning. mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name    Stmts   Miss  Cover

merlin/datasets/init.py 2 0 100% merlin/datasets/advertising/init.py 2 0 100% merlin/datasets/advertising/criteo/init.py 0 0 100% merlin/datasets/advertising/criteo/dataset.py 79 49 38% merlin/datasets/advertising/criteo/transformed/init.py 0 0 100% merlin/datasets/ecommerce/init.py 3 0 100% merlin/datasets/ecommerce/aliccp/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/dataset.py 141 87 38% merlin/datasets/ecommerce/aliccp/raw/init.py 0 0 100% merlin/datasets/ecommerce/aliccp/transformed/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/init.py 0 0 100% merlin/datasets/ecommerce/dressipi/dataset.py 45 37 18% merlin/datasets/ecommerce/dressipi/preprocessed/init.py 0 0 100% merlin/datasets/ecommerce/large/init.py 0 0 100% merlin/datasets/ecommerce/small/init.py 0 0 100% merlin/datasets/ecommerce/transactions/init.py 0 0 100% merlin/datasets/entertainment/init.py 2 0 100% merlin/datasets/entertainment/movielens/1m-raw/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m-raw/ratings/init.py 0 0 100% merlin/datasets/entertainment/movielens/1m/init.py 0 0 100% merlin/datasets/entertainment/movielens/25m/init.py 0 0 100% merlin/datasets/entertainment/movielens/100k/init.py 0 0 100% merlin/datasets/entertainment/movielens/init.py 0 0 100% merlin/datasets/entertainment/movielens/dataset.py 152 122 20% merlin/datasets/entertainment/music_streaming/init.py 0 0 100% merlin/datasets/social/init.py 0 0 100% merlin/datasets/synthetic.py 147 15 90% merlin/datasets/testing/init.py 0 0 100% merlin/datasets/testing/sequence_testing/init.py 0 0 100% merlin/models/init.py 2 0 100% merlin/models/_version.py 354 205 42% merlin/models/api.py 14 5 64% merlin/models/config/init.py 0 0 100% merlin/models/config/schema.py 62 0 100% merlin/models/implicit/init.py 27 4 85% merlin/models/io.py 15 0 100% merlin/models/lightfm/init.py 23 0 100% merlin/models/loader/init.py 0 0 100% merlin/models/loader/backend.py 379 41 89% merlin/models/loader/dataframe_iter.py 21 17 19% merlin/models/loader/tf_utils.py 57 27 53% merlin/models/loader/utils.py 40 15 62% merlin/models/tf/init.py 70 0 100% merlin/models/tf/blocks/init.py 0 0 100% merlin/models/tf/blocks/cross.py 44 0 100% merlin/models/tf/blocks/dlrm.py 49 2 96% merlin/models/tf/blocks/experts.py 99 17 83% merlin/models/tf/blocks/interaction.py 108 40 63% merlin/models/tf/blocks/mlp.py 114 10 91% merlin/models/tf/blocks/optimizer.py 173 12 93% merlin/models/tf/blocks/retrieval/init.py 0 0 100% merlin/models/tf/blocks/retrieval/base.py 169 62 63% merlin/models/tf/blocks/retrieval/matrix_factorization.py 35 1 97% merlin/models/tf/blocks/retrieval/two_tower.py 30 0 100% merlin/models/tf/blocks/sampling/init.py 0 0 100% merlin/models/tf/blocks/sampling/base.py 29 2 93% merlin/models/tf/blocks/sampling/cross_batch.py 46 2 96% merlin/models/tf/blocks/sampling/in_batch.py 35 0 100% merlin/models/tf/blocks/sampling/queue.py 115 12 90% merlin/models/tf/core/init.py 0 0 100% merlin/models/tf/core/aggregation.py 241 49 80% merlin/models/tf/core/base.py 244 55 77% merlin/models/tf/core/combinators.py 426 53 88% merlin/models/tf/core/encoder.py 170 29 83% merlin/models/tf/core/index.py 104 16 85% merlin/models/tf/core/prediction.py 50 1 98% merlin/models/tf/core/tabular.py 280 29 90% merlin/models/tf/distributed/init.py 0 0 100% merlin/models/tf/distributed/backend.py 9 2 78% merlin/models/tf/inputs/init.py 0 0 100% merlin/models/tf/inputs/base.py 60 10 83% merlin/models/tf/inputs/continuous.py 39 3 92% merlin/models/tf/inputs/embedding.py 458 31 93% 
merlin/models/tf/loader.py 268 94 65% merlin/models/tf/losses/init.py 4 0 100% merlin/models/tf/losses/base.py 9 0 100% merlin/models/tf/losses/listwise.py 13 0 100% merlin/models/tf/losses/pairwise.py 115 1 99% merlin/models/tf/metrics/init.py 2 0 100% merlin/models/tf/metrics/evaluation.py 105 48 54% merlin/models/tf/metrics/topk.py 198 48 76% merlin/models/tf/models/init.py 0 0 100% merlin/models/tf/models/base.py 751 102 86% merlin/models/tf/models/benchmark.py 16 0 100% merlin/models/tf/models/ranking.py 67 3 96% merlin/models/tf/models/retrieval.py 78 4 95% merlin/models/tf/models/utils.py 10 1 90% merlin/models/tf/outputs/init.py 0 0 100% merlin/models/tf/outputs/base.py 122 17 86% merlin/models/tf/outputs/classification.py 91 1 99% merlin/models/tf/outputs/contrastive.py 147 10 93% merlin/models/tf/outputs/regression.py 9 0 100% merlin/models/tf/outputs/sampling/init.py 0 0 100% merlin/models/tf/outputs/sampling/base.py 78 21 73% merlin/models/tf/outputs/sampling/in_batch.py 37 1 97% merlin/models/tf/outputs/sampling/popularity.py 27 1 96% merlin/models/tf/outputs/topk.py 97 6 94% merlin/models/tf/prediction_tasks/init.py 0 0 100% merlin/models/tf/prediction_tasks/base.py 207 37 82% merlin/models/tf/prediction_tasks/classification.py 68 17 75% merlin/models/tf/prediction_tasks/multi.py 7 0 100% merlin/models/tf/prediction_tasks/next_item.py 59 6 90% merlin/models/tf/prediction_tasks/regression.py 35 2 94% merlin/models/tf/prediction_tasks/retrieval.py 73 3 96% merlin/models/tf/transformers/init.py 0 0 100% merlin/models/tf/transformers/block.py 101 2 98% merlin/models/tf/transformers/transforms.py 63 0 100% merlin/models/tf/transforms/init.py 0 0 100% merlin/models/tf/transforms/bias.py 111 9 92% merlin/models/tf/transforms/features.py 435 36 92% merlin/models/tf/transforms/negative_sampling.py 77 3 96% merlin/models/tf/transforms/noise.py 43 1 98% merlin/models/tf/transforms/regularization.py 17 1 94% merlin/models/tf/transforms/sequence.py 282 42 85% merlin/models/tf/transforms/tensor.py 158 13 92% merlin/models/tf/typing.py 7 0 100% merlin/models/tf/utils/init.py 0 0 100% merlin/models/tf/utils/batch_utils.py 85 4 95% merlin/models/tf/utils/repr_utils.py 69 4 94% merlin/models/tf/utils/search_utils.py 34 22 35% merlin/models/tf/utils/testing_utils.py 206 36 83% merlin/models/tf/utils/tf_utils.py 209 42 80% merlin/models/torch/init.py 12 0 100% merlin/models/torch/block/init.py 0 0 100% merlin/models/torch/block/base.py 167 32 81% merlin/models/torch/block/mlp.py 38 5 87% merlin/models/torch/dataset.py 68 5 93% merlin/models/torch/features/init.py 0 0 100% merlin/models/torch/features/base.py 4 0 100% merlin/models/torch/features/continuous.py 22 0 100% merlin/models/torch/features/embedding.py 165 12 93% merlin/models/torch/features/tabular.py 65 8 88% merlin/models/torch/losses.py 28 28 0% merlin/models/torch/model/init.py 0 0 100% merlin/models/torch/model/base.py 286 65 77% merlin/models/torch/model/prediction_task.py 24 0 100% merlin/models/torch/tabular/init.py 0 0 100% merlin/models/torch/tabular/aggregation.py 75 0 100% merlin/models/torch/tabular/base.py 247 39 84% merlin/models/torch/tabular/transformations.py 67 3 96% merlin/models/torch/typing.py 6 0 100% merlin/models/torch/utils/init.py 0 0 100% merlin/models/torch/utils/data_utils.py 117 117 0% merlin/models/torch/utils/examples_utils.py 39 39 0% merlin/models/torch/utils/torch_utils.py 80 22 72% merlin/models/utils/init.py 0 0 100% merlin/models/utils/constants.py 3 0 100% merlin/models/utils/dataset.py 38 4 89% 
merlin/models/utils/dependencies.py 26 19 27% merlin/models/utils/doc_utils.py 10 0 100% merlin/models/utils/example_utils.py 31 2 94% merlin/models/utils/misc_utils.py 118 90 24% merlin/models/utils/nvt_utils.py 27 24 11% merlin/models/utils/registry.py 101 29 71% merlin/models/utils/schema_utils.py 90 19 79% merlin/models/xgb/init.py 124 4 97%

TOTAL 11432 2266 80%

=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [5] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
==== 1 failed, 826 passed, 12 skipped, 1358 warnings in 2525.44s (0:42:05) =====
ERROR: InvocationError for command /var/jenkins_home/workspace/merlin_models/models/.tox/test-gpu/bin/python -m pytest --cov-report term --cov merlin -rxs tests/unit (exited with code 1)
___________________________________ summary ____________________________________
ERROR: test-gpu: commands failed
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins10049102532422491528.sh

nvidia-merlin-bot avatar Nov 04 '22 01:11 nvidia-merlin-bot

One test on Jenkins, which was added very recently in https://github.com/NVIDIA-Merlin/models/pull/841, is failing. It looks like introducing the horovod dependency (or some other change in this PR) caused a dependency conflict with dask, which triggers an error in nvtabular + dask that is unrelated to horovod itself. I see:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
feast 0.19.4 requires dask<2022.02.0,>=2021.*, but you have dask 2022.5.1 which is incompatible.

which I don't see in other CI logs. I need to resolve the dependency conflict somehow, but I can't seem to reproduce the error locally.
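For debugging, here is a rough sketch of what I plan to run in the CI environment to narrow it down (standard pip commands only; the package names are the ones from the install log above):

```bash
# List installed packages whose declared requirements are not satisfied;
# this should reproduce the feast/dask complaint from the install log.
python -m pip check

# Show the version and declared requirements of the packages on each side of
# the conflict, to see exactly which one pins dask where.
python -m pip show feast dask distributed merlin-core nvtabular \
  | grep -E "^(Name|Version|Requires):"
```

If feast 0.19.4 really caps dask below 2022.02.0 while merlin-core pulls in distributed>=2022.3.0 (as the install log above suggests), then both pins can't be satisfied in the same environment, and the conflict would have to be resolved on the feast side (updating or removing it from the test image) rather than by downgrading dask.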

edknv avatar Nov 04 '22 04:11 edknv