Unified Connection Process
Objective of issue: A unified Connection Process should hide HW implementation details and support dense and sparse connectivity, delays, learning, ...
Lava version:
- [x] 0.4.0 (current version)
I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] documentation request
Current behavior:
- There is only a Dense Process, with no support for delays or learning.
- The Dense Process creates synapses even for zero weights, which needlessly consumes resources.
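For illustration, a minimal sketch of the current situation, assuming the `Dense` process and its `weights` keyword as shown in the Lava tutorials: every entry of the matrix becomes a synapse, even though ~99% of the weights here are zero.

```python
import numpy as np
from lava.proc.dense.process import Dense

# A 1000x1000 connectivity matrix with ~1% fill-in: 990,000 of the
# 1,000,000 entries are zero.
rng = np.random.default_rng(seed=0)
weights = np.zeros((1000, 1000))
idx = rng.choice(1000 * 1000, size=10_000, replace=False)
weights.flat[idx] = rng.normal(size=10_000)

dense = Dense(weights=weights)  # all 10^6 entries become synapses
```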
Expected behavior:
- The internal representation of connectivity should be able to switch between sparse and dense, depending on the size of the matrix and its fill-in ratio. This should be transparent to the user.
- An initially-zero weight that should be learned can be represented by an "exists" value. A sparse representation can hold an explicit zero, as opposed to the implied zeros of elements that aren't defined. For a dense matrix, we can either assume 100% fill-in or represent non-existent entries with an illegal value like NaN or Inf.
- There needs to be an access method that allows individual elements to be set (see the sketch after this list). Requiring the user to pass a dense matrix in all cases is onerous, especially if the matrix is large (large source and destination sets) and in fact sparse.
- That same access method could allow the user to specify a delay for each connection. One "synapse" is a tuple (source neuron, destination neuron, weight, delay). You can think of there being two matrices, one for weights and one for delays.
- Fixed-point configurations such as bit precision and exponent are implementation details that burden the user. (I don't care how numbers are represented. I just want my model to work.) These can be determined automatically by analyzing the computational graph as a whole. You could give the user the ability to specify a general optimization hint, like many compilers do, e.g. favor precision or favor low energy usage. Fitting into the available hardware of course trumps either of those. The user could specify a minimum acceptable precision, such that the simulation aborts with an error rather than sacrificing too much precision to fit.
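A rough sketch of what such an access method could look like. All names here (`Connection`, `connect`, `exists`, the dictionary-of-keys backing store) are hypothetical, not existing Lava API:

```python
import numpy as np
from scipy.sparse import dok_matrix


class Connection:
    """Hypothetical unified connection process (sketch only).

    Weights and delays are kept in sparse dictionary-of-keys matrices;
    a backend could transparently densify once fill-in is high enough.
    """

    def __init__(self, src_size: int, dst_size: int):
        self.weights = dok_matrix((src_size, dst_size), dtype=float)
        self.delays = dok_matrix((src_size, dst_size), dtype=int)

    def connect(self, src: int, dst: int, weight: float, delay: int = 1) -> None:
        # One "synapse" is the tuple (src, dst, weight, delay).
        # delay >= 1 marks the connection as existing, so the weight
        # may legitimately be 0 for a not-yet-learned synapse.
        self.weights[src, dst] = weight
        self.delays[src, dst] = delay

    def exists(self, src: int, dst: int) -> bool:
        return self.delays[src, dst] >= 1


conn = Connection(src_size=1000, dst_size=1000)
conn.connect(3, 42, weight=0.0, delay=2)  # zero weight, but the synapse exists
assert conn.exists(3, 42) and not conn.exists(3, 43)
```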
SciPy sparse matrices could function as a drop-in replacement for NumPy dense matrices. Passing a weight matrix to the constructor should be optional: if one is passed, it is used as-is; otherwise, the user could call connect(from, to, weight, delay), which either updates the existing matrix or creates a new sparse matrix.
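A quick demonstration of the drop-in idea, in plain NumPy/SciPy with no Lava-specific API: code that relies only on the shared matrix interface works unchanged for both representations.

```python
import numpy as np
from scipy.sparse import csr_matrix


def propagate(weights, spikes):
    """Weighted fan-out; works for dense ndarray and SciPy sparse alike."""
    return weights @ spikes  # both representations support matmul


w_dense = np.zeros((100, 100))
w_dense[0, 1] = 0.5
w_sparse = csr_matrix(w_dense)  # drop-in replacement, ~0.01% fill-in

spikes = np.zeros(100)
spikes[1] = 1.0
assert propagate(w_dense, spikes)[0] == propagate(w_sparse, spikes)[0] == 0.5
```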
The constructor should also accept a delay matrix, which must have the same shape as the weight matrix. If the matrices are dense, any delay less than 1 can mark the associated connection as non-existent. This allows the weights to always be legitimate values, including 0.
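A small sketch of that dense encoding, assuming the convention proposed above (delay < 1 means "no connection"):

```python
import numpy as np

weights = np.array([[0.0, 0.3],
                    [1.2, 0.0]])
delays = np.array([[1, 0],    # delay < 1 => connection does not exist
                   [2, 3]])

exists = delays >= 1
# (0, 0) exists with weight 0.0 (learnable); (0, 1) is absent despite
# its nonzero weight, because its delay is 0.
for src, dst in zip(*np.nonzero(exists)):
    print(f"synapse ({src}, {dst}): w={weights[src, dst]}, d={delays[src, dst]}")
```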
This issue could be split. An initial implementation of a dedicated Sparse Process would already be valuable for users.