Use Melissa E. O'Neill's pcg for rng and Lemire's unbiased bounded rand
Mersenne Twister's state size is huge (KBs of state), so we swap it for a PCG. The PCG's state is 128 bits and it has an equidistributed 2^64 period (plenty for a quick RNG). To replicate the behaviour of the standard distribution we use Lemire's unbiased bounded random method.
Additionally we seed using the address of the RNG struct; in theory this is allocated somewhere random, but since this is a tiny device without much address space, it likely only flips the lower bits. It still uses the motion accelerometer's 3-axis data as before, but combines the three uint32_t results into one uint64_t. The RNG then shuffles itself a bit further by replacing its state using the initial seeding.
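For reviewers unfamiliar with either piece, they look roughly like this (a sketch based on O'Neill's reference pcg32 and Lemire's bounded rejection method, not necessarily the exact code in this diff):

```cpp
#include <cstdint>

// pcg32: 128 bits of state (64-bit state word + 64-bit odd increment), 2^64 period.
struct Pcg32 {
  uint64_t state;
  uint64_t inc; // must be odd
};

uint32_t Pcg32Next(Pcg32& rng) {
  uint64_t oldState = rng.state;
  rng.state = oldState * 6364136223846793005ULL + rng.inc;
  // Output permutation: xorshift-high, then a data-dependent rotation.
  uint32_t xorshifted = static_cast<uint32_t>(((oldState >> 18u) ^ oldState) >> 27u);
  uint32_t rot = static_cast<uint32_t>(oldState >> 59u);
  return (xorshifted >> rot) | (xorshifted << ((-rot) & 31u));
}

// Lemire's unbiased bounded draw: multiply into 64 bits and reject the small
// sliver of values that would bias the result, instead of using modulo.
uint32_t BoundedRand(Pcg32& rng, uint32_t range) {
  uint64_t product = static_cast<uint64_t>(Pcg32Next(rng)) * range;
  uint32_t low = static_cast<uint32_t>(product);
  if (low < range) {
    uint32_t threshold = (-range) % range; // 2^32 mod range
    while (low < threshold) {
      product = static_cast<uint64_t>(Pcg32Next(rng)) * range;
      low = static_cast<uint32_t>(product);
    }
  }
  return static_cast<uint32_t>(product >> 32); // uniform in [0, range)
}
```

The rejection path is only hit for a tiny fraction of draws, so in the common case this costs one multiply per bounded random number.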
https://www.youtube.com/watch?v=ia8Q51ouA_s&t=78
Seriously though, this looks great :) Really good spot that the Mersenne Twister is way too big for this application (and that's probably the cause of a crash I've been seeing recently).
I'm not too experienced with random number generation techniques so I need to read over them properly to review, but at a quick glance this looks very promising
Yeah, I'm personally not a fan of ever using MT given that there are PRNG functions like this floating around which are plenty good enough for purposes like these and tend to be faster as well. I'm surprised the size diff isn't larger.
@mark9064 the alternative RNG I would use is romu, but that is slightly larger and from memory it technically has a potential bad 0 state*. I've liked using PCG ever since I learned about it years ago; it's comparably fast to MT but doesn't have the huge state cost and setup time, so it's nice to be able to create one on the fly for RNG purposes.
There's also a potentially similar issue with the Two game: rand(), depending on the implementation, may create a massive table.
Currently rewriting. After some thought I realized that I could create a "controller" type object inside SystemTask which would allow any screen or app to retrieve some PRNG-like data. This way we can seed a global RNG object at system boot, and apps can then seed their own RNGs from that master RNG object. We could potentially make use of any form of system / sensor noise, or even the included Bluetooth RNG, for a really well seeded PRNG object that is set up once at boot and then left alone.
Since the cost of a reference is equivalent to a pointer and the overall prng size is not much larger, even if the global prng is not well seeded we can generate a better seeded prng object for only the cost of an additional 64 bits per screen / application.
The benefit is that no single app needs to write its own RNG object, and the initialization cost is extremely low.
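Roughly the shape I have in mind (a sketch only; the class and method names here are made up for illustration, and the engine type stands in for whatever PRNG we settle on):

```cpp
#include <cstdint>

// Hypothetical controller owned by SystemTask: one master generator seeded once
// at boot, which hands out 64-bit seeds for per-screen/app generators.
// Engine: any PRNG with a uint64_t-seed constructor and operator() -> uint32_t.
template <typename Engine>
class RngController {
public:
  explicit RngController(uint64_t bootEntropy) : master {bootEntropy} {}

  // 64 bits for a screen/app to seed its own local generator. The app only pays
  // for a reference to this controller plus its own local generator state.
  uint64_t NextSeed() {
    uint64_t hi = master();
    uint64_t lo = master();
    return (hi << 32) | lo;
  }

private:
  Engine master;
};

// In a screen/app (hypothetical accessor name):
//   LocalEngine rng {systemTask.GetRngController().NextSeed()};
```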
The nRF52 has RNG hardware, why not use that?
@Avamander that's the bluetooth device right? I did a quick search and spotted that there's an RNG interrupt, so I assume there's some form of secure rng available. I would use it, if I knew how.
But, if that wouldn't work for whatever reason, so long as there's some noisy data available we should be able to pack the rng with more than enough state that no one would ever notice.
OK I've looked into it a bit. Firstly for cryptographically secure RNG we have ble_ll_rand(), so that base is covered. This RNG is slow though so we only want to use it where needed.
Regarding general PRNG:
While I think PCGs look great, I think the C++ standard LCG will be more than enough for the Dice app.
So I think this can be resolved with swapping to std::minstd_rand.
The main reason why I think this is better is the simplicity of this solution - we don't need to carry and maintain our own RNG implementation.
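For context, the swap in the app would look roughly like this (a sketch; the seed source is a placeholder, and only the engine type changes while the distribution code stays the same):

```cpp
#include <random>

// Sketch: roll a die with a standard library engine + distribution.
unsigned RollD6(unsigned seed) {
  std::minstd_rand engine {seed};                       // standard LCG, nothing to maintain
  std::uniform_int_distribution<unsigned> faces {1, 6}; // unchanged from today
  return faces(engine);
}
```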
However I'm not against including PCG, but I think if we choose to implement it we should implement the proper interface (https://en.cppreference.com/w/cpp/named_req/RandomNumberEngine.html). That way we can continue to use the standard library components around it.
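For what it's worth, the standard distributions only strictly require the UniformRandomBitGenerator shape (result_type, min, max, operator()); the full RandomNumberEngine requirement adds seeding, discard, comparison and stream operators on top. A minimal sketch of the pcg32 step wrapped in that shape (my own illustration, not O'Neill's packaged engine):

```cpp
#include <cstdint>
#include <limits>
#include <random>

// Minimal UniformRandomBitGenerator wrapper around the pcg32 step, so
// std::uniform_int_distribution and friends can consume it directly.
class Pcg32Engine {
public:
  using result_type = uint32_t;

  explicit Pcg32Engine(uint64_t seed, uint64_t sequence = 0) : state {0}, inc {(sequence << 1u) | 1u} {
    (*this)();
    state += seed;
    (*this)();
  }

  static constexpr result_type min() { return 0; }
  static constexpr result_type max() { return std::numeric_limits<result_type>::max(); }

  result_type operator()() {
    uint64_t oldState = state;
    state = oldState * 6364136223846793005ULL + inc;
    uint32_t xorshifted = static_cast<uint32_t>(((oldState >> 18u) ^ oldState) >> 27u);
    uint32_t rot = static_cast<uint32_t>(oldState >> 59u);
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31u));
  }

private:
  uint64_t state;
  uint64_t inc;
};

// Usage with the standard distribution machinery:
//   Pcg32Engine rng {seed};
//   std::uniform_int_distribution<unsigned> faces {1, 6};
//   unsigned roll = faces(rng);
```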
Regarding having a central RNG object that's shared, I think that could make sense. This would mean that apps avoid having to seed their own randomness, and there are potentially minor memory savings if we ever reach a point where we have many RNGs.
@mark9064 Melissa has an implementation that follows the standard pattern. I personally just don't see the need to bring in all the template mess, since PRNG functions are fundamentally state machines where part of their state is hidden and the rest transformed. std::minstd_rand is really outdated from what I remember; if I remember correctly it is sometimes used as rand(). The small tweaks that the PCG makes to an LCG work wonders in terms of how random the data is.
However, that said, I'm pretty sure that the C++ library for PCG has smaller versions which would probably work just fine. That might be worthwhile since I don't imagine the hardware is 64-bit, so that could be a plus.
Well actually this will be an easy switch, since the constants for each size are in the C++ header here: https://github.com/imneme/pcg-cpp/blob/master/include/pcg_random.hpp
So I'll take a minute to make that edit.
@mark9064 I believe, looking at ble_ll_rand, that realistically we're grabbing random data using ble_ll_rand_data_get((uint8_t *)xsubi, sizeof(xsubi));? It seems like jrand48 is just a transform that takes 48 bits of random data and outputs 32. We could use ble_ll_rand_data_get to fill the pcg fairly easily.
It looks like we could take a similar approach to ble_ll_rand, where instead of 48 bits we grab however many we need to instantiate our own desired RNG.
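Something like this sketch is what I'm imagining (the declaration and the 0-on-success return convention for ble_ll_rand_data_get are assumptions based on the snippet above, to be checked against the NimBLE headers):

```cpp
#include <cstdint>
#include <cstring>

// Assumed declaration from NimBLE's link layer (as used inside ble_ll_rand).
extern "C" int ble_ll_rand_data_get(uint8_t* buf, uint8_t len);

struct Pcg32 {
  uint64_t state;
  uint64_t inc; // must be odd
};

// Pull 128 bits from the hardware-backed pool instead of the 48 bits jrand48 uses,
// and drop them straight into the pcg state.
bool SeedFromBleRng(Pcg32& rng) {
  uint8_t buffer[16];
  if (ble_ll_rand_data_get(buffer, sizeof(buffer)) != 0) { // assuming 0 means success
    return false;
  }
  std::memcpy(&rng.state, buffer, sizeof(rng.state));
  std::memcpy(&rng.inc, buffer + sizeof(rng.state), sizeof(rng.inc));
  rng.inc |= 1; // keep the increment odd
  return true;
}
```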
On second look, I absolutely would not bother using minstd: its LCG function is literally just previous_state * 48271 (mod 2^31 - 1). That would be a horrible idea; it has a catastrophic failure should the state ever be 0.
Update: struggling to get access to ble_ll_rand or ble_ll_rand_data_get, I'll figure it out tomorrow...
> This RNG is slow though so we only want to use it where needed.
The relative slowness does not matter in these scenarios, especially for a dice app, and it avoids the significant memory and storage cost of shipping an entire PRNG.
@Avamander the cost of the PCG should be at most 128 bits of memory to hold the state (depending on the size choice of the implementation) for any instance in use (or double that in my rough conceptual model, where there were two at work: one for the screen, one to generate new seeds), plus only a few bytes of instructions for the two functions.
@mark9064 I'm not sure why, but when you brought up LCGs I thought minstd might be a slight improvement on one, or some form of variation. I remembered the history that a lot of libraries used minstd as rand(), but I didn't remember that it's one of the first examples of an LCG with a good multiplier. The issue with LCGs is how statistically "random" they behave. It's not unique to minstd: since an LCG is literally just a fused multiply-add operation, they'll exhibit two predictable behaviors: 1) their outputs will always be odd, or 2) their outputs will toggle between even and odd; and if they're missing an increment (which I'm surprised minstd doesn't have) you'll have a looping 0 state.

Now this wouldn't be so bad given a function to unbias the results (which is a must for dice rolling or some measure of "fairness"), but the reality is you'd be putting the load-bearing part of the RNG results on the unbiasing function and not the PRNG itself, and I wouldn't be too sure of how "random" the results would actually be. However there's a dangerous potential here: if the LCG gets seeded to 0, the unbiasing function loops infinitely. (That's the "horrible idea" part I didn't explain the other day.)
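To make the failure mode concrete, here's a sketch (my own illustration, not code from this PR) of why a multiplicative generator stuck at 0 plus a rejection-based unbiasing step never returns:

```cpp
#include <cstdint>

// minstd-style multiplicative LCG: no increment, so state 0 is a fixed point.
uint32_t MinstdStep(uint32_t& state) {
  state = static_cast<uint32_t>((static_cast<uint64_t>(state) * 48271u) % 2147483647u);
  return state; // once state == 0, every output is 0
}

// Rejection-based unbiased bounded draw. With the generator stuck at 0, the low
// half of the product is always 0, which is below the threshold for a die-sized
// range, so the loop never terminates.
uint32_t BoundedDraw(uint32_t& state, uint32_t range) {
  uint32_t threshold = (-range) % range; // 2^32 mod range (4 when range == 6)
  while (true) {
    uint64_t product = static_cast<uint64_t>(MinstdStep(state)) * range;
    if (static_cast<uint32_t>(product) >= threshold) {
      return static_cast<uint32_t>(product >> 32);
    }
  }
}
```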
Poking around the Bluetooth specification, I tried to find out what jrand48's implementation is. It may be https://www.ibm.com/docs/en/zos/3.2.0?topic=functions-jrand48-pseudo-random-number-generator
If so, it's a similar issue to minstd, except the state size is slightly larger.