neural-redis
Maybe wrong matrix index calculation?
Hello! I'm refactoring nn.c/h a bit, and I think I found a mistake in the matrix index calculation for weight, gradient, sgradient, pgradient, and delta:
#define WEIGHT(net,l,i,j) (net)->layer[l].weight[((j)*(net)->layer[l].units)+(i)]
#define GRADIENT(net,l,i,j) (net)->layer[l].gradient[((j)*(net)->layer[l].units)+(i)]
#define SGRADIENT(net,l,i,j) (net)->layer[l].sgradient[((j)*(net)->layer[l].units)+(i)]
#define PGRADIENT(net,l,i,j) (net)->layer[l].pgradient[((j)*(net)->layer[l].units)+(i)]
#define DELTA(net,l,i,j) (net)->layer[l].delta[((j)*(net)->layer[l].units)+(i)]
Looking at the allocation code for them, it seems the correct way to index is to multiply "units" by "i" and then add "j":
#define WEIGHT(net,l,i,j) (net)->layer[l].weight[((i)*(net)->layer[l].units)+(j)]
...
Attached are the refactored nn.c/h so far (they compile but don't seem to work properly). The idea is to round the number of units up to a multiple of SIMDF for alignment, so the full calculations can be done directly with SIMD, without scalar leftovers.
Looking at the comments on struct AnnLayer, it seems I'm right and the matrix index calculation is indeed inconsistent:
/* Data structures.
* Nets are not so 'dynamic', but enough to support
* an arbitrary number of layers, with arbitrary units for layer.
* Only fully connected feed-forward networks are supported. */
struct AnnLayer {
int units;
float *output; /* output[i], output of i-th unit */
float *error; /* error[i], output error of i-th unit*/
float *weight; /* weight[(i*units)+j] */ ///////<<<<<< (i*units)+j but later on using (j*units)+i
/* weight between unit i-th and next j-th */
float *gradient; /* gradient[(i*units)+j] gradient */
float *sgradient; /* gradient for the full training set */
/* only used for RPROP */
float *pgradient; /* pastgradient[(i*units)+j] t-1 gradient */
/* (t-1 sgradient for resilient BP) */
float *delta; /* delta[(i*units)+j] cumulative update */
/* (per-weight delta for RPROP) */
};