Matt Levine
Do you have a recommendation for a cleaner approach? Currently we set the random seed just before using it to generate the random features, so it is reproducible (unless re-order/insert...
The basic approach of random feature models is to generate a random basis of functions, then fit their coefficients. So, in training, we need to first generate random parameters that...
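As a minimal sketch of that approach (the names `sample_rf_params`, `rf_map`, and `fit_rf` and the cosine feature map are illustrative assumptions, not the exact implementation in the repo): first draw the frozen random parameters of the feature map, then fit the linear coefficients by regularized least squares.

```julia
using LinearAlgebra, Random

# Draw the random (frozen) parameters of the feature map: weights W and phases b.
function sample_rf_params(rng::AbstractRNG, n_features::Int, d::Int)
    W = randn(rng, n_features, d)      # random projection directions
    b = 2π .* rand(rng, n_features)    # random phases
    return W, b
end

# Random Fourier feature map: phi(x) = cos.(W x + b)
rf_map(x, W, b) = cos.(W * x .+ b)

# Fit the linear coefficients by (regularized) least squares.
function fit_rf(rng::AbstractRNG, X, y; n_features = 100, λ = 1e-6)
    d, n = size(X)
    W, b = sample_rf_params(rng, n_features, d)
    Φ = reduce(hcat, (rf_map(X[:, i], W, b) for i in 1:n))  # n_features × n
    β = (Φ * Φ' + λ * I) \ (Φ * y)
    return W, b, β
end
```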
It does get saved in the namelist as `rf_fix_ent_params`
In the Random Features configuration, we make two calls to `randn` and one call to `rand`. When we do Orthogonal Random Features, we build a matrix `G` which is based...
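For reference, a hedged sketch of the generic Orthogonal Random Features recipe (Yu et al., 2016), not necessarily the exact construction in this repo: sample a Gaussian matrix `G`, orthogonalize it via QR, then rescale the rows so their norms match those of Gaussian rows.

```julia
using LinearAlgebra, Random

# Orthogonal random features: Gaussian base matrix, QR orthogonalization,
# then chi(d)-distributed row norms to match the Gaussian case.
function orf_weights(rng::AbstractRNG, m::Int, d::Int)
    @assert m <= d "one orthogonal block shown; stack blocks for m > d"
    G = randn(rng, d, d)                    # base Gaussian matrix
    Q = Matrix(qr(G).Q)                     # orthonormal rows/columns
    s = [norm(randn(rng, d)) for _ in 1:m]  # chi(d) row norms
    return Diagonal(s) * Q[1:m, :]          # rescaled orthogonal rows
end
```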
Not sure; what is the question/plan at this point?
Yeah, so Haakon's use of passing a local RNG for a specific task (e.g., RF construction) looks like a good way to go. Basically it just makes the config more...
Haakon's is much nicer, because it allows the RNG state to evolve. We don't want to call `my_randn(seed=1234)` multiple times sequentially, because it will return the same...
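To illustrate the difference (using `my_randn` as the hypothetical name from this thread): reseeding inside each call returns identical draws, while a passed RNG object consumes state between calls and stays reproducible end to end.

```julia
using Random

# Reseeding inside each call (the `my_randn(seed=1234)` pattern) returns the
# same draw every time:
my_randn(; seed = 1234, n = 3) = randn(MersenneTwister(seed), n)
my_randn() == my_randn()   # true: identical draws

# Passing an RNG object lets its state evolve between calls:
rng = MersenneTwister(1234)
a = randn(rng, 3)
b = randn(rng, 3)
a == b                     # false: fresh draws, still reproducible overall
```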
So I'm a bit lost in this convo, but I do think Haakon's original suggestion to pass around an RNG state is a good one. It allows us to distinguish random...
No, my point above is that we may need to make multiple calls to these seeded functions. Say I need to call them 5-10 times; then I have to pick a different...
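A sketch of how a single threaded-through RNG avoids that seed bookkeeping (`build_rf_basis` is an illustrative name, not the repo's API): repeated calls each consume fresh state, so one seed in the config covers all of them.

```julia
using Random

# One local RNG, seeded once in the config; each call draws fresh state.
build_rf_basis(rng::AbstractRNG, m, d) = (randn(rng, m, d), 2π .* rand(rng, m))

rng = MersenneTwister(2022)                          # one seed in the config
bases = [build_rf_basis(rng, 50, 4) for _ in 1:10]   # 10 distinct, reproducible bases
```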
cc @nickhnelsen