Fix cc
@Dradeliomecus: here is a first draft for fixing CC. At the moment no tests are passing: all implementations fail the tests.
@alejoe91 @Dradeliomecus: could you have a look at this?
I fixed some stuff in both numpy and numba. They now give very similar results, but not exactly the same, even though they should!
I think I have a pretty good idea of where the problem is:
I made a random spike train for 3 units and plotted their auto and cross-correlograms, here is the result:

As you can see, the result is the same for the auto-correlograms, and half of the cross-correlograms (basically corr[i, j] matches between numpy and numba if i <= j).
For i > j, numba and numpy disagree: the numba result is the exact reflection of corr[j, i], while the numpy result is not.
Hope that helps!
And now I know why! In numba we just flip [i, j, :] to get [j, i, :], but this is theoretically wrong: the rule is to include the left bin border and exclude the right one, and by reversing we break that for half of the matrix!
So I think I will only test half of the matrix.
Maybe I will do more refactoring to enable both odd and even numbers of bins.
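The bin-edge issue can be reproduced in a few lines of numpy (an illustrative sketch, not the spikeinterface code): with half-open bins [left, right), flipping corr[i, j, :] is not the same as histogramming the negated time differences when a difference lands exactly on a bin edge.

```python
import numpy as np

# Illustrative sketch (not the spikeinterface code): half-open bins [left, right)
bins = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Fake time differences from unit i to unit j; -1.0 sits exactly on a bin edge
diffs = np.array([-1.5, 0.2, -1.0])

hist_ij, _ = np.histogram(diffs, bins)    # corr[i, j, :]
hist_ji, _ = np.histogram(-diffs, bins)   # corr[j, i, :], computed directly

print(hist_ij)        # counts per bin for i -> j
print(hist_ij[::-1])  # what the flip trick produces for j -> i
print(hist_ji)        # the correct j -> i counts: differs at the edge bin
```

The edge-landing difference (-1.0 here) ends up in a different bin depending on whether you flip the histogram or histogram the negated differences, which is exactly the mismatch seen in the plots.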
Ah good catch!
I think we can easily remedy the situation. We need to replace range(i, ...) by range(...) and remove the flip here:
https://github.com/SpikeInterface/spikeinterface/blob/99003fc1cf99ce322ad7eaff5847a9ab2f4477f9/spikeinterface/postprocessing/correlograms.py#L403
This will make the computation twice as slow, though.
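Schematically, the change amounts to looping over all (i, j) pairs instead of i <= j plus a flip. Here is a hedged numpy sketch of the two strategies (names and signatures are illustrative, not the actual correlograms.py code):

```python
import numpy as np

def corr_half_plus_flip(times, units, n_units, bins):
    """Current strategy: compute corr[i, j] for i <= j, flip to get [j, i]."""
    corr = np.zeros((n_units, n_units, len(bins) - 1), dtype=np.int64)
    for i in range(n_units):
        for j in range(i, n_units):  # the range(i, ...) mentioned above
            d = (times[units == j][None, :] - times[units == i][:, None]).ravel()
            corr[i, j], _ = np.histogram(d, bins)
            if i != j:
                corr[j, i] = corr[i, j][::-1]  # wrong for lags on a bin edge
    return corr

def corr_full(times, units, n_units, bins):
    """Proposed fix: compute every pair directly (roughly twice as slow)."""
    corr = np.zeros((n_units, n_units, len(bins) - 1), dtype=np.int64)
    for i in range(n_units):
        for j in range(n_units):
            d = (times[units == j][None, :] - times[units == i][:, None]).ravel()
            corr[i, j], _ = np.histogram(d, bins)
    return corr
```

With two spikes one bin-width apart, the two versions agree on corr[i, j] for i <= j but disagree on the flipped half, which is the asymmetry discussed above.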
I think this is ready to merge.
Shouldn't method="numpy" and method="numba" have the exact same output?
I think it can be fixed with what I wrote a week ago, no?
> Shouldn't method="numpy" and method="numba" have the exact same output?

No, because numba just flips half of the matrix, and there are still some border effects.

> I think it can be fixed with what I wrote a week ago, no?

And doubling the numba run time just for bin edges makes no sense, I think. Let's keep this, which is a better implementation than the one we have.
I guess then the only fix would be to compute all the correlograms using the spike vector, but that would need an entire rewrite and we would lose parallelization.
Probably not worth it!
ok, should I merge as is?
> ok, should I merge as is?

I'm good with it!