Performance issue in /models/recommendation/tensorflow (by P3)
Hello! I've found a performance issue in /wide_deep/inference/fp32/wide_deep_inference.py: dataset.batch(batch_size) (line 192) should be called before dataset.map(parse_csv, num_parallel_calls=5) (line 187), which could make your program more efficient.
Here is the TensorFlow documentation to support it.
Besides, you need to check whether the function parse_csv called in dataset.map(parse_csv, num_parallel_calls=5) is affected, so that the reordered code still works properly. For example, if parse_csv expects input of shape (x, y, z) before the fix, it will receive input of shape (batch_size, x, y, z) after the fix.
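For illustration, here is a minimal sketch of the suggested reordering. The column names, defaults, and input_fn signature below are placeholders, not the actual code in wide_deep_inference.py:

```python
import tensorflow as tf

# Hypothetical CSV schema; the real one lives in wide_deep_inference.py.
_CSV_COLUMNS = ['feature_a', 'feature_b', 'label']
_CSV_COLUMN_DEFAULTS = [[0.0], [0.0], [0]]

def parse_csv(value):
    # After the reordering, `value` is a vector of CSV lines (shape [batch_size])
    # instead of a single scalar string; tf.io.decode_csv decodes element-wise,
    # so the returned columns keep that batch dimension.
    columns = tf.io.decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS)
    features = dict(zip(_CSV_COLUMNS, columns))
    labels = features.pop('label')
    return features, labels

def input_fn(data_file, batch_size):
    dataset = tf.data.TextLineDataset(data_file)
    # Original order: map each line individually, then batch.
    #   dataset = dataset.map(parse_csv, num_parallel_calls=5)
    #   dataset = dataset.batch(batch_size)
    # Suggested order: batch first, so parse_csv runs once per batch (vectorized map).
    dataset = dataset.batch(batch_size)
    dataset = dataset.map(parse_csv, num_parallel_calls=5)
    return dataset
```

This works because decoding a whole batch of CSV lines in one op amortizes the per-call overhead of the map function, which is why batching before mapping is generally faster when the map function can be vectorized.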
Looking forward to your reply. By the way, I'd be glad to create a PR to fix it if you are too busy.
@DLPerf Thanks for bringing up the issue. If you'd like to create a PR, that would be great.
@DLPerf do you still need assistance with this issue?