nupic-legacy
Bug in adding synapses in temporal_memory
The mechanism for adding synapses to a segment in temporal_memory has a subtle bug. When learning on a segment, `self.maxNewSynapseCount - len(activeSynapses)` new synapses are added.
This logic doesn't allow you to learn unions. Instead, it should always try to add `self.maxNewSynapseCount` new synapses, as long as you don't go over `maxSynapsesPerSegment`. The logic should be something like `min(self.maxNewSynapseCount, maxSynapsesPerSegment - numSynapsesOnSegment)`.
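As a rough sketch of the difference (plain Python with made-up names, not the actual TM code; `active_synapses` stands for the synapses on the segment whose presynaptic cells were active):

```python
# Illustrative sketch only, not the actual nupic implementation.

def n_new_synapses_current(max_new_synapse_count, active_synapses):
    # Current behavior: maxNewSynapseCount is effectively a *desired active
    # synapse count*, so the more synapses are already active, the fewer
    # new ones get grown -- which prevents a segment from learning a union.
    return max(0, max_new_synapse_count - len(active_synapses))


def n_new_synapses_proposed(max_new_synapse_count, max_synapses_per_segment,
                            n_synapses_on_segment):
    # Proposed behavior: always try to grow maxNewSynapseCount synapses,
    # limited only by the room left on the segment.
    return min(max_new_synapse_count,
               max_synapses_per_segment - n_synapses_on_segment)
```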
A slightly different alternative is to add `self.maxNewSynapseCount` new synapses and then, if you are over `maxSynapsesPerSegment`, remove the least-used or lowest-permanence synapses to make room.
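A minimal sketch of that alternative, assuming synapses are represented as `(presynaptic_cell, permanence)` tuples and an initial permanence of 0.21 (these details are made up for illustration, not taken from the real implementation):

```python
import random

def grow_then_prune(segment, candidate_cells, max_new_synapse_count,
                    max_synapses_per_segment, initial_permanence=0.21,
                    rng=random):
    # Illustrative sketch only. Grow up to maxNewSynapseCount synapses to
    # randomly chosen candidate cells, then, if the segment exceeds
    # maxSynapsesPerSegment, drop the lowest-permanence synapses to make room.
    # (A real implementation might protect the just-grown synapses or prune
    # the least recently used ones instead.)
    n_grow = min(max_new_synapse_count, len(candidate_cells))
    for cell in rng.sample(candidate_cells, n_grow):
        segment.append((cell, initial_permanence))
    overflow = len(segment) - max_synapses_per_segment
    if overflow > 0:
        segment.sort(key=lambda synapse: synapse[1])  # sort by permanence
        del segment[:overflow]
    return segment
```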
Making this change requires some testing to make sure we don't unwittingly lower performance on existing benchmarks like hotgym, taxi, and NAB. The change would need to be made in both the C++ and Python versions.
I agree this change would make sense. This `newSynapseCount` (TP) / `maxNewSynapseCount` (TM) parameter has always seemed a little goofy. It's actually treated as a "desired active synapse count", so the TM / TP behave very oddly if this value is less than the `activationThreshold`.
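A toy illustration of the oddity (the numbers are made up):

```python
# Made-up numbers, for illustration only.
activation_threshold = 13
max_new_synapse_count = 10  # treated as a "desired active synapse count"

# A segment grown on a bursting column receives at most
# max_new_synapse_count synapses, so even if every one of them later
# matches active cells, the segment can never reach activation_threshold
# and can never produce a prediction.
print(max_new_synapse_count >= activation_threshold)  # False
```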
It looks like the TP also does it this way: https://github.com/numenta/nupic/blob/589c209ac29a5fec691493418ed182e3951442ed/src/nupic/research/TP.py#L2928
Comportex also does it this way (link; you can see that the segment's excitation `exc` is being subtracted from the count `new-syns`), and it chose your first proposal for hitting `maxSynapsesPerSegment` (not doing cleanup).
With this philosophy of "Segments should keep adding synapses", there's other Temporal Memory behavior that now seems wrong.
In the "activate predicted column" path, synapses are never grown. We only ever grow synapses in bursting columns. We reinforce / punish existing synapses in both cases, but growth only happens with bursting.
@subutai and I discussed offline, and the correct behavior is that a learning segment should always grow new synapses if possible.
These will be pretty major changes to the algorithm. (EDIT: but the code change will be small)
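In outline, the change amounts to something like the following in the predicted-column learning path (helper names such as `adapt_synapses` and `grow_synapses` are hypothetical, used only to show where growth would be added):

```python
def learn_on_predicted_segment(segment, active_synapses, prev_winner_cells,
                               max_new_synapse_count, max_synapses_per_segment,
                               adapt_synapses, grow_synapses):
    # Illustrative sketch only. Before: the predicted-column path only
    # reinforced/punished existing synapses. After: it also grows new
    # synapses, just as the bursting path does, capped by segment capacity.
    adapt_synapses(segment, active_synapses)
    n_desired = min(max_new_synapse_count,
                    max_synapses_per_segment - len(segment))
    if n_desired > 0:
        grow_synapses(segment, prev_winner_cells, n_desired)
```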
@subutai @mrcslws Sounds like the "major changes to the algorithm" are just a side-effect of investigating this bug. Seems to me like it should be reported in a new issue and tracked separately from this bug.
I'm not sure "major change to the algorithm" is the appropriate phrase here 😄 It's a relatively small bug fix that should improve robustness to noise in some cases, though performance improvements are not clear yet.
I'll trust your judgements on this. Let me know if you think this should be tracked in another issue.
I think they should be handled in the same PR - to me the two are intricately tied together. The two things mentioned are related to the same underlying problem of not adding synapses correctly. I'll rephrase the issue to make this clearer.
Watching this...
@subutai Status update: I've made both of these changes, and I've found that:

- Enabling synapse growth in `activatePredictedColumn` doesn't break anything, aside from some unit tests that were verifying that this doesn't happen.
- Growing more synapses (`min(self.maxNewSynapseCount, maxSynapsesPerSegment - numSynapsesOnSegment)`, as you mention) causes a lot more predicted inactive columns. This causes 6 failures in the extensive TM tests (with my change in the new implementation), and 18 failures in the similar etm algorithm tests (with your change in the phases implementation).
So, to clarify: the above is an attempt to follow what @rhyolight said and split out the "growthOnPredictions effort" from the bug reported by @subutai? What is left in this bug after splitting the above out is:

- Instead of adding this amount of new synapses: `self.maxNewSynapseCount - len(activeSynapses)`;
- what should be added is: `numSynapsesToBeAdded = min(self.maxNewSynapseCount, maxSynapsesPerSegment - numSynapsesOnSegment)`.

Is this correct?
@cogmission Yes. I should've listened to @rhyolight in the first place 😄.