Eric Hunsberger
Should also test using higher-dimensional comm channels, since with 1-D it's possible to achieve the desired sparsity of weights just by having that same sparsity in the decoders.
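A small NumPy sketch of why the 1-D case is degenerate, assuming a factored connection where the full weight matrix is the product of the post-population's encoders and the pre-population's decoders (all names here are illustrative, not Nengo internals): with `dims == 1`, zeroing a decoder entry zeroes an entire column of the weights, so decoder sparsity translates directly into weight sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, dims = 40, 30, 1  # 1-D communication channel

# Hypothetical factored connection: full weights W = E @ D
E = rng.standard_normal((n_post, dims))  # encoders of the post ensemble
D = rng.standard_normal((dims, n_pre))   # decoders of the pre ensemble

# Make half the decoder entries zero (50% decoder sparsity)
D[:, rng.choice(n_pre, size=n_pre // 2, replace=False)] = 0

W = E @ D
zero_cols = np.all(W == 0, axis=0)
# With dims == 1, each zeroed decoder entry zeroes a whole column of W,
# so the weight matrix inherits exactly the decoder sparsity.
assert zero_cols.sum() == n_pre // 2
```

With `dims > 1`, a weight column is zero only if all `dims` decoder components for that neuron are zero, so weight sparsity no longer comes for free from sparse decoders.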
Thanks @IgnacioRubioScola, this is a good suggestion to add these details to the documentation. We've added this issue to our backlog, and plan to do this as part of the...
The reason we can't do arrays is that we want the `NeuronType` to be able to work for different ensembles, which could have different numbers of neurons so we don't...
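A toy illustration of the constraint (not Nengo's actual classes): because a single neuron-type instance can be attached to ensembles of different sizes, its parameters have to be scalars that apply to any number of neurons, rather than per-neuron arrays tied to one particular size.

```python
class NeuronType:
    """Toy stand-in: parameters are scalars, so one instance is
    valid for an ensemble of any size."""
    def __init__(self, tau_rc=0.02):
        self.tau_rc = tau_rc  # scalar: applies to any n_neurons


class Ensemble:
    """Toy stand-in for an ensemble referencing a shared neuron type."""
    def __init__(self, n_neurons, neuron_type):
        self.n_neurons = n_neurons
        self.neuron_type = neuron_type


shared = NeuronType()
a = Ensemble(50, shared)
b = Ensemble(200, shared)  # same type instance, different size: a
                           # per-neuron array of length 50 could not
                           # also serve this ensemble
```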
> I'm thinking that the `LIFRateSafe()` option (proposal 3) is more of a band-aid solution than `LIFRateUnbiased()` (instance of proposal 2). While the safe option might not be as extreme...
I did come up with a `LIFRateNorm` neuron type that bridges `LIFRateSafe` and `LIFRateUnbiased`. For small values of `max_x` (e.g. 1.1), it's equivalent to `LIFRateSafe`, but as `max_x` grows, it...
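A rough NumPy sketch of the interpolation idea, assuming the standard LIF rate equation `r = 1 / (tau_ref + tau_rc * ln(1 + 1/(J - 1)))` for `J > 1`; `lif_rate_norm` here is an illustrative stand-in for how a `max_x` parameter could work, not the actual `LIFRateNorm` implementation:

```python
import numpy as np


def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    """Standard LIF rate curve; its slope diverges as j -> 1 from above."""
    j = np.atleast_1d(np.asarray(j, dtype=float))
    out = np.zeros_like(j)
    above = j > 1
    out[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[above] - 1.0)))
    return out


def lif_rate_norm(j, max_x=1.1, **kwargs):
    """Hypothetical sketch: use the true curve above max_x, but replace
    the steep onset between threshold and max_x with a linear ramp from
    zero up to the true rate at max_x. Small max_x keeps this close to
    a clipped 'safe' curve; larger max_x linearizes more of the onset."""
    j = np.atleast_1d(np.asarray(j, dtype=float))
    r_max = lif_rate(max_x, **kwargs)
    out = lif_rate(j, **kwargs)
    low = (j > 1) & (j < max_x)
    out[low] = r_max * (j[low] - 1.0) / (max_x - 1.0)
    return out
```

The ramp meets the true curve continuously at `j == max_x`, which is the sense in which the variant interpolates between the two behaviors.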
Would we also not generate encoders for such ensembles, and fail if someone tries to do e.g. an ensemble->ensemble connection with them? Things might look a bit different if we...
> Again I don't disagree, and hopefully I haven't come off as disrespectful when making style choices. I do think a lot about it and am not making changes arbitrarily...
My NumPy was already quite new (I think it upgraded from 13.0 to 13.1), so I don't think it was that. Could have been something else that upgraded, though. If...
No, I don't believe we ever found the exact package that is responsible. Is it possible for you to upgrade your matplotlib? If you can't, then unfortunately I don't think...
I think this is because of how we do the intercepts. We assume that the intercept sets the middle of the curve, so any rates below half of `1 /...