Andrew Landgraf
See table here: [http://en.wikipedia.org/wiki/Generalized_linear_model#Link_function](http://en.wikipedia.org/wiki/Generalized_linear_model#Link_function). The canonical links for the unchecked families are sketched below.

- [ ] Exponential/Gamma distribution
- [ ] Inverse Gaussian
- [x] Categorical/Multinomial
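For reference, base R's family objects already expose the canonical links and variance functions for the two unchecked families (there is no base R family object for the categorical/multinomial case, so it would need separate treatment). This is only an illustration of the link functions involved, not code from the package:

```R
# Canonical links for the unchecked families, via base R's family objects
gamma_fam    = Gamma(link = "inverse")            # canonical link: g(mu) = 1/mu
invgauss_fam = inverse.gaussian(link = "1/mu^2")  # canonical link: g(mu) = 1/mu^2

mu = c(0.5, 1, 2)
gamma_fam$linkfun(mu)                     # 2.0 1.0 0.5
gamma_fam$linkinv(gamma_fam$linkfun(mu))  # recovers mu
invgauss_fam$variance(mu)                 # variance function mu^3: 0.125 1 8
```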
Modify the convergence criterion to use the percent change in the loss. Need to determine what a good percent cutoff is.
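A minimal sketch of such a check; the helper name and the 1e-5 cutoff are placeholders, not values from the package:

```R
# Relative (percent) change in loss between successive iterations;
# the cutoff of 1e-5 is only a placeholder until a good value is chosen.
converged = function(loss_old, loss_new, tol = 1e-5) {
  abs(loss_old - loss_new) / abs(loss_old) < tol
}

converged(100, 99.9)     # FALSE: loss dropped by 0.1%
converged(100, 99.9999)  # TRUE: loss dropped by 0.0001%
```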
Initialization does not take weights into account (unless there is normalization). Need to think of a general way to account for weights in the initialization. Maybe scale values by square...
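One possibility, purely as a guess at the truncated note above: scale the centered data by the square root of the weights before the SVD used for initialization, so heavily weighted entries dominate the starting loadings. The function below is hypothetical and not part of the package:

```R
# Hypothetical weighted initialization: scale by sqrt(weights) before the SVD.
# This is one guess at the idea, not the package's actual initialization.
weighted_init = function(x, weights, k) {
  x_centered = scale(x, center = TRUE, scale = FALSE)
  svd_fit = svd(sqrt(weights) * x_centered, nu = k, nv = k)
  list(U = svd_fit$u, V = svd_fit$v)  # starting scores and loadings
}

set.seed(1)
x = matrix(rnorm(100 * 10), 100, 10)
w = matrix(runif(100 * 10), 100, 10)  # entry-wise weights
init = weighted_init(x, w, k = 2)
dim(init$V)  # 10 x 2 loading matrix
```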
```R
rows = 100
cols = 10
set.seed(1)
# low-rank natural parameter matrix and Poisson counts generated from it
mat_np = outer(rnorm(rows), rnorm(cols))
mat = matrix(rpois(rows * cols, c(exp(mat_np))), rows, cols)
# assumed completion: flag roughly 20% of entries as missing
missing_mat = matrix(runif(rows * cols) < 0.2, rows, cols)
```
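A possible continuation of this example, assuming missing entries would be coded as NA before fitting; the coding, and the commented-out call, are assumptions about the interface rather than documented behaviour:

```R
# code the flagged entries as NA (an assumption about how missing data would be passed in)
mat_missing = mat
mat_missing[missing_mat] = NA
mean(is.na(mat_missing))  # roughly 0.2

# the fit might then look like this (function and argument names assumed; check the docs):
# fit = generalizedPCA(mat_missing, k = 2, family = "poisson")
```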
Currently, the time per iteration is the same regardless of `majorizer`. When `majorizer = "all"`, it could be sped up to roughly `logisticPCA` speed.
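A rough way to benchmark this once the speedup is in place; the calls assume the fitting function is `generalizedPCA()` and that the other `majorizer` value is `"row"`, both of which should be checked against the actual interface:

```R
library(generalizedPCA)  # assumed package name

set.seed(1)
x = matrix(rpois(100 * 10, 2), 100, 10)

# compare per-fit timing across majorizer settings (argument values assumed)
system.time(generalizedPCA(x, k = 2, family = "poisson", majorizer = "row"))
system.time(generalizedPCA(x, k = 2, family = "poisson", majorizer = "all"))
```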
E.g. column 1 is continuous, column 2 is binary, column 3 is counts
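For concreteness, a small simulated matrix of that shape; the per-column family vector at the end is one guess at what the interface could look like, not an existing argument:

```R
set.seed(1)
n = 100
# one continuous, one binary, and one count column sharing a latent factor
z = rnorm(n)
x = cbind(
  continuous = z + rnorm(n),
  binary     = rbinom(n, 1, plogis(z)),
  counts     = rpois(n, exp(z))
)
head(x)

# a per-column family specification might look like (interface assumed):
# families = c("gaussian", "binomial", "poisson")
```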
Tipping, M. E. (1998). Probabilistic visualisation of high-dimensional binary data. NIPS 11, pp. 592-598.
I have written different method functions for lpca, lsvd, and clpca. I can probably combine many of them (a possible consolidation is sketched below). The methods to combine are:

- [ ] print
- [ ] ...
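One way to combine them: give all three fitted objects a shared parent class and write a single S3 method against it. The class names lpca, lsvd, and clpca come from the note above; the parent class name `gpca_fit` and the `$k` element are illustrative assumptions:

```R
# Hypothetical consolidation: assign a shared parent class when each object is
# created, e.g. class(fit) = c("lpca", "gpca_fit"), and dispatch on the parent.
print.gpca_fit = function(x, ...) {
  cat(class(x)[1], "object\n")    # lpca, lsvd, or clpca
  cat("components:", x$k, "\n")   # assumes a `k` element; adjust to the real fields
  invisible(x)
}

# usage sketch with a stand-in object
fit = structure(list(k = 2), class = c("lpca", "gpca_fit"))
print(fit)
```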