Random number quality impacts performance
I haven't checked all the solutions, but for C and C++ at least, random number initialization probably dominates their otherwise should-be-identical speed.
The C solution uses `rand()`, with an internal state of roughly 40 bytes. The C++ solution uses the Mersenne Twister (`std::mt19937`), with an internal state of about 5000 bytes.
While the MT engine is fine for getting some fantastic randomness, the up-front cost of buying that randomness probably cascades through the rest of the perf tree; I bet the same holds for many other languages that would be faster if they didn't have to initialize a random engine first.
(Checking now while I'm writing this: yeah, C# uses Random.Shared.Next, which IIRC does not call rand() but instead uses a different, slower-but-better engine.)
- I assume the purpose of the (single) call to `rand()` is just to prevent the compiler from optimizing a bunch of stuff away.
- I assume the purpose of the (single) integer read from `argc` is also to prevent the compiler from optimizing a bunch of stuff away.
I suggest an alternative: ditch rand from every implementation and replace it with e.g. the square root of argc, a second number read from argv, or some other value that isn't available to the compiler but should be reasonably consistent across implementations. (Probably not argc+1, which looks like it's optimizable.)
Replacing this with an argv input is a good idea 👍