hnn-core
Moving towards tests independent of legacy GUI output
This is a follow-up to #221 and related to #233, edited as we move towards a PR.
In order to write good tests for the new external drives API, we should decide on gold-standard datasets. The current implementation uses HNN GUI output based on a `params`-file that generates a fixed sequence of `_ArtificialCell`s (and thus corresponding `gid`s; the `_legacy_mode` flag in `Network` is needed to match this behaviour).
Some questions to answer here, in no particular order:
- [ ] At what level should seeding occur? Would it be reasonable for each drive to have a unique seed (e.g. based on its unique `name`)? This would allow adding drives in any order, yet retaining the event times. Alternatively, there could be a global seed for each `Network` (not the current `gid`-based seeds). A rough sketch of the per-drive option follows below this list.
- [x] What do we consider proof-positive that `hnn-core` can replicate GUI results? Are the current (#221) examples sufficient? Do we need more cases before we are satisfied that the new API can replace the old?
- [ ] Should we have a single `dpl.txt`-file generated using all the possible drives (and biases) turned on? Or should we rather create several test datasets based on realistic use cases, such as one for `evoked`, one for `poisson` (PING gamma example, includes tonic bias), one for `bursty` (possibly a new beta-example), etc.?
- [ ] ???
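To make the per-drive seeding option concrete, here is a minimal sketch (an illustration of the idea, not the hnn-core implementation) of deriving a deterministic seed from a drive's unique name, so event times are unaffected by the order in which drives are added. The helper name `_seed_from_name` and the hashing scheme are assumptions for illustration only.

```python
import hashlib

import numpy as np


def _seed_from_name(name, base_seed=0):
    """Derive a deterministic per-drive seed from its name (hypothetical helper)."""
    digest = hashlib.sha256(name.encode('utf-8')).digest()
    # fold the first 4 bytes into a 32-bit integer, offset by an optional base seed
    return (int.from_bytes(digest[:4], 'little') + base_seed) % (2 ** 32)


# Event times now depend only on the drive's own name, not on the order in
# which drives were added to the network.
rng = np.random.default_rng(_seed_from_name('evdist1'))
event_times = rng.normal(loc=63.5, scale=3.85, size=10)
```

A global `Network`-level seed could be layered on top via `base_seed`, which would keep reproducibility while still decoupling drives from each other.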
> Or rather create several test datasets based on realistic use cases

I think this would be great to have! Probably a collaboration with @rythorpe?
> Do we need more cases before we are satisfied that the new API can replace the old?

I would let folks use it for a bit. It would be great if we could use it in a class for teaching or something and see what issues people face. One thing we definitely need to fix is #239 before even talking about any replacements ;-) We can change the default to use the new API but still leave the option for the old behaviour for a couple of months at least.
> Would it be reasonable for each drive to have a unique seed

I like this option better because then the order of adding the drives will not matter ...
I like the idea of having multiple test datasets. Here's a list of possible ground-truth test datasets based off of drives/biases explored in the tutorials that we could create. Obviously, we don't need to test every combination of drive types so I've marked the examples I think we should use as the minimal number of necessary test datasets with *. Feel free to modify or add to this list.
- evoked*
- poisson (gamma via random excitation and intrinsic network E/I interactions)
- poisson + tonic bias (gamma modulated by tonic bias)*
- single bursty (gamma via rhythmic drive)
- single bursty (alpha via prox. rhythmic drive)
- multiple bursty (beta via coincident prox. and dist. rhythmic drive)*
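To sketch how one of these gold-standard datasets could be used, below is a rough regression-test outline using hnn-core's top-level API. The drive parameters, the file name `evoked_gold_standard.txt`, the tolerance, and the `event_seed` keyword (whose exact name may differ between versions) are placeholders, not the actual test-suite values; analogous tests could cover the poisson + tonic-bias and bursty cases.

```python
from numpy.testing import assert_allclose

from hnn_core import jones_2009_model, read_dipole, simulate_dipole


def test_evoked_matches_gold_standard():
    """Compare a freshly simulated dipole against a stored gold-standard file."""
    net = jones_2009_model()
    # illustrative proximal evoked drive; a real test would mirror the
    # parameters used to generate the committed dataset
    weights_ampa = {'L2_basket': 0.004, 'L2_pyramidal': 0.006,
                    'L5_basket': 0.004, 'L5_pyramidal': 0.006}
    net.add_evoked_drive('evprox1', mu=20., sigma=3., numspikes=1,
                         location='proximal', weights_ampa=weights_ampa,
                         event_seed=2)
    dpl = simulate_dipole(net, tstop=170.)[0]

    # placeholder path for the committed gold-standard dipole file
    dpl_gold = read_dipole('evoked_gold_standard.txt')
    assert_allclose(dpl.data['agg'], dpl_gold.data['agg'], rtol=1e-5)
```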
To what extent is this issue still relevant? If it is, should we modify the issue title and move it to 0.4?
Maybe we can consolidate all of these seed + legacy mode related issues under one issue?
Agree!