pytorch-lr-scheduler
Multiprocess utilization
It seems that adding more cores doesn't improve speed proportionally. Why is that?
Are you sure you are adding cores rather than threads? What is the size of the graph you are trying to process? Most likely it's a memory issue.
I increase the threads parameter while running on a 64-core machine.
If the graph is really large, there might still be memory issues, especially on NUMA architectures. Can you (roughly) specify the size of the graph and the memory system used?
Sorry for not starting with numbers first.
./deepwalk -input in.bcsr -output out.model -threads 64 -dim 300 -nwalks 1000 -walklen 5 -window 3 -seed 4242 -verbose 2
nv: 747556, ne: 861549483
PR estimate complete
Using vectorized operations
Constructing HSM tree... Done! Average code size: 20.7494
lr 0.000002, Progress 100.00%
Calculations took 8225.28 s to run

./deepwalk -input in.bcsr -output out.model -threads 8 -dim 300 -nwalks 1000 -walklen 5 -window 3 -seed 4242 -verbose 2
nv: 747556, ne: 861549483
PR estimate complete
Using vectorized operations
Constructing HSM tree... Done! Average code size: 20.7604
lr 0.000002, Progress 100.00%
Calculations took 12681.46 s to run
I ran it on a 64-core, 256 GB memory EC2 instance. The graph bcsr file is 3.3 GB, and the process consumes 5.5 GB of memory while running. So going from 8 to 64 threads only gives about a 1.5x speedup (12681 s vs 8225 s).
From this limited information, the most probable cause is memory access time. There is a single DRAM controller fetching graph (and embedding) parts from memory, which becomes a bottleneck. This is probably not solvable unless you change the algorithm. If you are okay with higher memory consumption, you could consider an implementation that caches the random walks in memory.
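To make the "cache the walks" idea concrete, here is a minimal sketch, assuming an OpenMP-style implementation with a plain CSR adjacency; the Csr struct, cache_walks name, and sampling logic are illustrative placeholders, not the actual deepwalk code:

```cpp
// Sketch (not the actual deepwalk code): pre-generate all random walks into one
// contiguous buffer, then let the training phase stream that buffer sequentially.
#include <cstdint>
#include <omp.h>
#include <random>
#include <vector>

struct Csr {                        // minimal CSR adjacency (stand-in for the BCSR file)
    std::vector<int64_t>  offsets;  // size nv + 1
    std::vector<uint32_t> adj;      // concatenated neighbour lists
};

// Pre-compute nwalks walks of length walklen per vertex and cache them in memory.
std::vector<uint32_t> cache_walks(const Csr& g, int nwalks, int walklen, uint64_t seed) {
    const int64_t nv = int64_t(g.offsets.size()) - 1;
    std::vector<uint32_t> walks(size_t(nv) * nwalks * walklen);

    #pragma omp parallel
    {
        std::mt19937_64 rng(seed + omp_get_thread_num());    // per-thread RNG
        #pragma omp for schedule(dynamic, 64)
        for (int64_t i = 0; i < nv * nwalks; ++i) {
            uint32_t v = uint32_t(i % nv);                    // start vertex
            size_t off = size_t(i) * walklen;
            walks[off] = v;
            for (int s = 1; s < walklen; ++s) {
                int64_t deg = g.offsets[v + 1] - g.offsets[v];
                if (deg == 0) { walks[off + s] = v; continue; }   // dangling vertex
                int64_t pick = g.offsets[v] + int64_t(rng() % uint64_t(deg));
                v = g.adj[pick];                              // uniform random step
                walks[off + s] = v;
            }
        }
    }
    // The training loop would then iterate over `walks` sequentially, never touching
    // the graph structure again and hitting the embeddings with better locality.
    return walks;
}
```

With your numbers (nv = 747556, -nwalks 1000, -walklen 5, 4-byte vertex ids) the cached buffer would be roughly 747556 × 1000 × 5 × 4 B ≈ 15 GB, which fits comfortably in 256 GB.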
On a completely different note, the parameters you use seem quite a bit off. Why are you changing the defaults?
You mean -nwalks 1000 -walklen 5 -window 3? I am just playing with the parameters to see how it behaves.
If you want to see where the problem comes from, I would recommend running the process under Linux perf (tutorial here: http://www.brendangregg.com/perf.html). I would expect that the running time is dominated by memory access.
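For example, something along these lines (the counter names are generic hardware events, not anything specific to this project, and availability varies by CPU and kernel):

perf stat -e cycles,instructions,cache-references,cache-misses ./deepwalk -input in.bcsr -output out.model -threads 64 -dim 300 -nwalks 1000 -walklen 5 -window 3 -seed 4242 -verbose 2
perf record -g ./deepwalk -input in.bcsr -output out.model -threads 64 -dim 300 -nwalks 1000 -walklen 5 -window 3 -seed 4242 -verbose 2
perf report

A high cache-miss ratio that stays high as you add threads would support the memory-bottleneck explanation above.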