Tom
So here we can see how the threads are running. It appears some task gets started on the main thread and then runs on the other threads, but there is a...
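This hand-off from the main thread to worker threads can be made visible with a plain `std::thread` sketch (this is not sim2h's scheduler, just a minimal illustration of logging which OS thread each piece of work lands on):

```rust
use std::thread::{self, ThreadId};

/// Spawn `n` tasks and return (main thread id, worker thread ids).
/// Hypothetical illustration, not sim2h code: it only makes the
/// main-thread / worker-thread hand-off visible in the output.
fn spawn_and_trace(n: usize) -> (ThreadId, Vec<ThreadId>) {
    let main_id = thread::current().id();
    println!("main thread: {:?}", main_id);
    let workers = (0..n)
        .map(|task| {
            thread::spawn(move || {
                let id = thread::current().id();
                println!("task {} running on {:?}", task, id);
                id
            })
            .join()
            .unwrap()
        })
        .collect();
    (main_id, workers)
}

fn main() {
    let (main_id, workers) = spawn_and_trace(4);
    // In this sketch every task lands off the main thread.
    assert!(workers.iter().all(|id| *id != main_id));
}
```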
I just ran #2072 on the fast computer after two runs:

```
all not held missing: 948, retrying after delay
all not held missing: 948, retrying after delay
```

Half...
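For context, "retrying after delay" output like this typically comes from a poll-and-back-off loop. A minimal sketch of such a loop (not sim2h's actual code; `poll_missing` is a stand-in for whatever check produces the missing count):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Poll a "still missing" count until it reaches zero, logging and
/// sleeping between attempts. Gives up after `max_retries`, returning
/// the count that was stuck (as 948 appears to be in the runs above).
fn wait_until_held(
    mut poll_missing: impl FnMut() -> usize,
    max_retries: usize,
) -> Result<(), usize> {
    for _ in 0..max_retries {
        let missing = poll_missing();
        if missing == 0 {
            return Ok(());
        }
        println!("all not held missing: {}, retrying after delay", missing);
        sleep(Duration::from_millis(100));
    }
    Err(poll_missing())
}

fn main() {
    // Simulated check whose missing count drains to zero.
    let mut missing = 3usize;
    let res = wait_until_held(
        || {
            if missing > 0 {
                missing -= 1;
            }
            missing
        },
        10,
    );
    assert_eq!(res, Ok(()));
}
```

If the count never moves, the loop surfaces the stuck value instead of spinning forever, which is the behavior you'd want when the number freezes at 948 twice in a row.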
performance stats

```
 Performance counter stats for 'sim2h_server -p 9001 -s 20':

     1,046,895.25 msec task-clock        #     1.738 CPUs utilized
       23,404,787      context-switches  # 22356.384 M/sec
           77,490      cpu-migrations    #    74.019 M/sec
           84,520...
```
Here's a second run. Both runs panic at the end.

```
 Performance counter stats for 'sim2h_server -p 9001 -s 20':

     1,124,394.82 msec task-clock        #     1.816 CPUs utilized
       24,123,768      context-switches  #...
```
## Network tests

So the current branch seems to be stopping at exactly 948 not held (it happened twice). I thought it would be good to check out the network traffic...
The changes on sim2h-futures4 make much better use of the threads, but there's still a long way to go. I'll dig into this more tomorrow, but I want to figure...
This is from a direct message stress test: 
Run 99 nodes 20 instances

```
 Performance counter stats for process id '45757':

     8,305,189.12 msec task-clock        #     6.929 CPUs utilized
      475,991,192      context-switches  # 57312.506 M/sec
       19,424,846      cpu-migrations    #  2338.881 M/sec...
```
Run 99 nodes 30 instances

```
 Performance counter stats for process id '9047':

     7,323,919.52 msec task-clock        #     7.475 CPUs utilized
      427,504,936      context-switches  # 58371.063 M/sec
       18,495,039      cpu-migrations    #  2525.293 M/sec...
```
Direct messages are timing out at about 500 nodes. This is what sim2h outputs:

```
[2020-01-30T05:19:18Z ERROR sim2h] VERY SLOW metric - sim2h-state-build_handle_new_entry_data - 1082 ms
[2020-01-30T05:19:18Z ERROR sim2h] VERY...
```
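Log lines in that "VERY SLOW metric - name - N ms" shape usually come from a timing guard wrapped around each state-handling step. A minimal sketch of such a guard (the helper name and the 1000 ms budget are assumptions for illustration, not sim2h's actual code):

```rust
use std::time::Instant;

/// Run `f`, and if it takes longer than `budget_ms`, emit a log line
/// in the same shape as the sim2h output above.
fn timed<T>(name: &str, budget_ms: u128, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let out = f();
    let elapsed = start.elapsed().as_millis();
    if elapsed > budget_ms {
        // Mirrors the format of the log lines above.
        eprintln!("VERY SLOW metric - {} - {} ms", name, elapsed);
    }
    out
}

fn main() {
    // Fast work stays silent; only over-budget calls get logged.
    let v = timed("sim2h-state-build_handle_new_entry_data", 1000, || {
        (0u64..1000).sum::<u64>()
    });
    assert_eq!(v, 499_500);
}
```

The nice property of this pattern is that the happy path costs only one `Instant::now()` call, so it can stay enabled in production and still pinpoint which step blew past its budget when things slow down at ~500 nodes.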