Paul Brodersen
> Add argument `normalized`

Could you expand a bit on the motivation, or provide some references and/or applications?

> Fix floating point issue.

The current implementation fails to calculate the...
> > > Speed-up of MVN entropy estimate for 1D variables by using the variance instead of the covariance matrix calculation.
> > >
> > > Did you time...
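For context, the closed-form entropy of a d-dimensional Gaussian is H = 1/2 log((2πe)^d det Σ), which for d = 1 reduces to 1/2 log(2πe σ²), so the determinant and the matrix machinery can be skipped. A minimal sketch of the equivalence (my illustration, not the PR's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=10_000)

# General multivariate-normal entropy: 0.5 * log((2*pi*e)^d * det(cov)).
cov = np.atleast_2d(np.cov(x))
d = cov.shape[0]
h_cov = 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

# 1D shortcut: the covariance "matrix" is just the variance.
h_var = 0.5 * np.log(2 * np.pi * np.e * np.var(x, ddof=1))

print(np.isclose(h_cov, h_var))  # True
```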
When you have time, could you expand a bit on the motivation for the normalization, or provide some references and/or applications? I don't want to support something even I don't...
Hi, thanks for raising the issue. Both packages implement the Kozachenko-Leonenko estimator for entropy and the Kraskov et al. estimator for mutual information. At first I thought the difference might...
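For reference, here is a minimal sketch of the Kozachenko-Leonenko estimator (my own condensed version, not the code of either package):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def kozachenko_leonenko_entropy(x, k=1):
    """k-nearest-neighbour (Kozachenko-Leonenko) estimate of the
    differential entropy of samples x with shape (n, d), in nats.
    Note: duplicate points yield zero neighbour distances and hence
    log(0); real implementations add noise or handle ties."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    n, d = x.shape
    tree = cKDTree(x)
    # distance to the k-th nearest neighbour; the query returns the
    # point itself at index 0, hence k + 1 neighbours are requested
    r = tree.query(x, k=k + 1)[0][:, k]
    # volume of the d-dimensional unit ball
    v = np.pi ** (d / 2) / gamma(d / 2 + 1)
    # H ≈ ψ(n) - ψ(k) + log(V_d) + (d/n) * Σ log(ε_i), with ε_i = 2 r_i
    return digamma(n) - digamma(k) + np.log(v) + d * np.mean(np.log(2 * r))

# sanity check against the analytic entropy of a standard normal
samples = np.random.default_rng(0).normal(size=(10_000, 1))
print(kozachenko_leonenko_entropy(samples, k=3))   # ~0.5 * log(2*pi*e) ≈ 1.419
```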
```python
np.mean(digamma(nx + 1) + digamma(ny + 1)) != np.mean(digamma(nx + 1)) + np.mean(digamma(ny + 1))
```

The expression in the paper includes the term on the left-hand side; the code in scikit-learn uses the term on the right.
Only if nx and ny are uncorrelated. Which they are not.
I might be having a bit of a brain fart, though. I have a cold, and every thought takes ages.
I think you are right. Had to run some numbers on the IPython prompt to help my reduced mental capacities understand basic math again. In that case, I don't know...
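What such a quick check might look like (a sketch of my own, not the thread's actual session): by linearity of the mean, the two sides of the digamma expression above coincide for any paired samples, correlated or not.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(42)
# strongly correlated integer neighbour counts, as in the kNN estimators
nx = rng.integers(1, 20, size=1_000)
ny = nx + rng.integers(0, 3, size=1_000)

lhs = np.mean(digamma(nx + 1) + digamma(ny + 1))
rhs = np.mean(digamma(nx + 1)) + np.mean(digamma(ny + 1))
print(np.isclose(lhs, rhs))  # True: the mean is linear, so the sum can be split
```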
Actually, I don't think there is a difference at all. The definitional or so-called "naive" estimator of the mutual information is:

I(X; Y) = H(X) + H(Y) - H(X, Y)

If we...
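To make the identity concrete, here is a sketch (my illustration, not from the thread) checking H(X) + H(Y) - H(X, Y) against the known closed form -1/2 log(1 - ρ²) for a bivariate Gaussian:

```python
import numpy as np

rho = 0.6
cov = np.array([[1.0, rho],
                [rho, 1.0]])

# closed-form Gaussian entropies (in nats)
h_x = 0.5 * np.log(2 * np.pi * np.e * cov[0, 0])
h_y = 0.5 * np.log(2 * np.pi * np.e * cov[1, 1])
h_xy = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))

mi_naive = h_x + h_y - h_xy               # H(X) + H(Y) - H(X, Y)
mi_closed = -0.5 * np.log(1 - rho ** 2)   # known result for bivariate normals

print(np.isclose(mi_naive, mi_closed))  # True
```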
That continues to be a strong point. However, I am by now fully convinced that the entropy computations are fine:

```python
import scipy.stats as st
from entropy_estimators import continuous

distribution...
```
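The snippet above is truncated, but the kind of sanity check it describes can be reproduced with SciPy alone. A sketch using `scipy.stats.differential_entropy` (SciPy >= 1.6) as a stand-in for the package's `continuous` routines: estimate the entropy nonparametrically from samples and compare against the analytic value of a known distribution.

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(0)
distribution = st.norm(loc=0.0, scale=2.0)
samples = distribution.rvs(size=100_000, random_state=rng)

h_analytic = distribution.entropy()             # closed-form Gaussian entropy
h_estimated = st.differential_entropy(samples)  # nonparametric estimate

print(h_analytic, h_estimated)  # should agree to roughly two decimal places
```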