Issue while running inference/recommendation/dlrm/pytorch/ with the CPU docker
Hi,
I was running DLRM PyTorch in the CPU docker using fake data and am seeing the error below:
/root/mlcommons/recommendation/dlrm/pytorch/python/dlrm_data_pytorch.py:328: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at /opt/conda/conda-bld/pytorch_1670525496686/work/torch/csrc/utils/tensor_new.cpp:230.)
X_int = torch.log(torch.tensor(transposed_data[0], dtype=torch.float) + 1)
Traceback (most recent call last):
File "python/main.py", line 619, in
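For what it's worth, the UserWarning itself points at the fix: stack the list of numpy arrays into a single ndarray before calling torch.tensor. A minimal sketch of the pattern (the arrays here are placeholder data, not the actual contents of transposed_data[0] in dlrm_data_pytorch.py):

```python
import numpy as np
import torch

# Placeholder stand-in for transposed_data[0]: a list of numpy arrays.
arrays = [np.ones(4, dtype=np.float32) for _ in range(3)]

# Slow path that triggers the UserWarning on recent PyTorch:
#   X_int = torch.log(torch.tensor(arrays, dtype=torch.float) + 1)

# Suggested path: convert the list to one ndarray first, then to a tensor.
X_int = torch.log(torch.tensor(np.array(arrays), dtype=torch.float) + 1)
print(X_int.shape)
```

That only silences the warning, though; the actual traceback from main.py is truncated above, so the root cause of the failure is not visible from the warning alone.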
Command I ran inside the CPU docker: ./run_local.sh terabyte cpu --max-ind-range=10000000
Machine used: x86_64 GNU/Linux
**Steps followed:**
cd $HOME/mlcommons/inference/recommendation/dlrm/pytorch/docker_cpu
./build_docker_cpu.sh
cd $HOME/mlcommons/inference/recommendation/dlrm/pytorch/docker_cpu
./run_docker_cpu.sh
cd mlcommons/recommendation/dlrm/pytorch
./run_local.sh terabyte cpu --max-ind-range=10000000
Could you please advise how to resolve this?
Thanks, Siva