pytorch-lr-scheduler

Multiprocess utilization

Open mirth opened this issue 6 years ago • 7 comments

It seems that adding more cores doesn't improve speed proportionally. Why is that?

mirth avatar Jan 15 '19 16:01 mirth

Are you sure you are adding cores rather than threads? What is the size of the graph you are trying to process? Most likely it's a memory issue.

xgfs avatar Jan 15 '19 17:01 xgfs

I increase the -threads parameter while running on a 64-core machine.

mirth avatar Jan 16 '19 21:01 mirth

If the size of the graph is really large, there might still be memory issues, especially on NUMA architectures. Can you (roughly) specify the size of the graph and the memory system used?
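If NUMA is suspected, one quick way to inspect the topology and test a mitigation is the standard numactl tool (these are stock numactl invocations, nothing shipped with this repository; the trailing "..." stands for your usual arguments):

```
numactl --hardware                        # list NUMA nodes and per-node memory
numactl --interleave=all ./deepwalk ...   # spread allocations across all nodes
```

If interleaving changes the timings noticeably, remote-node memory access is part of the story.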

xgfs avatar Jan 17 '19 07:01 xgfs

Sorry for not leading with the numbers.

./deepwalk -input in.bcsr -output out.model -threads 64 -dim 300 -nwalks 1000 -walklen 5 -window 3 -seed 4242 -verbose 2
nv: 747556, ne: 861549483
PR estimate complete
Using vectorized operations
Constructing HSM tree... Done! Average code size: 20.7494
lr 0.000002, Progress 100.00%
Calculations took 8225.28 s to run

./deepwalk -input in.bcsr -output out.model -threads 8 -dim 300 -nwalks 1000 -walklen 5 -window 3 -seed 4242 -verbose 2
nv: 747556, ne: 861549483
PR estimate complete
Using vectorized operations
Constructing HSM tree... Done! Average code size: 20.7604
lr 0.000002, Progress 100.00%
Calculations took 12681.46 s to run

I ran it on a 64-core, 256 GB EC2 instance. The graph bcsr file is 3.3 GB. The process consumes 5.5 GB of memory while running. So going from 8 to 64 threads (8x the threads) gives only a 12681.46 s / 8225.28 s ≈ 1.54x speedup.

mirth avatar Jan 18 '19 18:01 mirth

From this limited information, my best guess at the cause is memory access time. A single DRAM controller is fetching graph (and embedding) parts from memory, causing a bottleneck. This is probably not solvable unless you change the algorithm. If you are okay with higher memory consumption, you could consider an implementation that caches the random walks in memory, roughly as sketched below.
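A minimal C++ sketch of that idea, assuming a plain adjacency-list graph; the names and structure here are illustrative, not taken from this repository's code:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Pre-generate every random walk once and keep the result in a single
// contiguous buffer, trading memory for locality: training threads can
// then stream over the buffer sequentially instead of chasing adjacency
// pointers through DRAM on every walk step.
std::vector<uint32_t> cache_walks(const std::vector<std::vector<uint32_t>>& adj,
                                  int nwalks, int walklen, uint64_t seed) {
  std::vector<uint32_t> walks;
  walks.reserve(adj.size() * static_cast<size_t>(nwalks) * walklen);
  std::mt19937_64 rng(seed);
  for (uint32_t v = 0; v < adj.size(); ++v) {
    for (int w = 0; w < nwalks; ++w) {
      uint32_t cur = v;
      for (int step = 0; step < walklen; ++step) {
        walks.push_back(cur);
        const auto& nbrs = adj[cur];
        if (!nbrs.empty()) {
          std::uniform_int_distribution<size_t> pick(0, nbrs.size() - 1);
          cur = nbrs[pick(rng)];
        }  // at a dead end, stay put so every walk keeps a fixed length
      }
    }
  }
  return walks;
}
```

At the settings above this is far from free: 747556 vertices × 1000 walks × 5 steps × 4 bytes is roughly 15 GB of walk data, so it only pays off when memory is plentiful.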

On a completely different note, the parameters you use seem quite a bit off. Why are you changing the defaults?

xgfs avatar Jan 20 '19 17:01 xgfs

You mean -nwalks 1000 -walklen 5 -window 3? I'm just playing with the parameters to see how it behaves.

mirth avatar Jan 21 '19 10:01 mirth

If you want to see where the problem comes from, I would recommend running the process under Linux perf (tutorial here: http://www.brendangregg.com/perf.html). I would expect that the running time is dominated by memory access.
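For example (standard perf subcommands; exact hardware event names vary by CPU, and the "..." stands for your remaining arguments):

```
# attach to a running process and count memory-related events
perf stat -e cycles,instructions,cache-references,cache-misses -p <pid>

# or profile a full run and inspect where the time goes
perf record -g ./deepwalk -input in.bcsr -output out.model -threads 64 ...
perf report
```

A high cache-miss rate and low instructions-per-cycle would point at memory access as the bottleneck rather than compute.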

xgfs avatar Jan 23 '19 12:01 xgfs