Yes, some of the docs are scarce and in need of more content, e.g. examples and explanations. If you have something in mind, please let us know or send updates.
You can try to replicate the PCA example with the Iris dataset to show the LDA dimensionality-reduction mode, see this [scikit-learn example](https://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_lda.html) or [this one](https://www.apsl.net/blog/2017/07/18/using-linear-discriminant-analysis-lda-data-explore-step-step/). If you want to show how...
Basically, your article does what I proposed above. However, the post's part on dimensionality reduction is convoluted and unrelated to LDA. The scikit-learn example is more straightforward about the reduction. I...
Use the `maxoutdim` keyword argument when calling `fit`.
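Something along these lines (an untested sketch using PCA as the model, since the thread doesn't say which one; `X` is just random placeholder data):

```julia
using MultivariateStats

# toy data: 5 features × 100 observations (columns are observations)
X = randn(5, 100)

# cap the number of output dimensions at 2
M = fit(PCA, X; maxoutdim=2)

size(transform(M, X))   # (2, 100)
```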
The current PCA implementation targets dimensionality reduction of the data rather than an orthogonal transformation of it. Thus, you need to disable the reduction step by setting the `pratio` parameter to its maximum value.
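Roughly like this (a minimal sketch with placeholder data; `pratio=1.0` means the full variance is retained, so no components are dropped):

```julia
using MultivariateStats

X = randn(5, 100)

# keep all components: no variance-based truncation, no dimension cap
M = fit(PCA, X; pratio=1.0, maxoutdim=size(X, 1))

P = projection(M)   # 5×5 loading matrix with orthonormal columns
```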
I hope so; until then you can use [TSne.jl](https://github.com/lejon/TSne.jl).
Let X and Y be two samples of dimension 5 and 6, respectively. The package accepts data in column-major order, so samples are columns.

```julia
julia> size(X)
(5, 1000)

julia> size(Y)
```
...
I'm not sure, but @lindahua definitely knows why. After some digging, this can be relevant: https://github.com/JuliaStats/Roadmap.jl/issues/4#issuecomment-32812291
My understanding is that `transform` is used in a dimensionality-reduction context and `predict` in other...
For some methods `transform` comes with an inverse operation, i.e. `reconstruct`. Not a perfect name, but it gives a hint at the appropriate action (in scikit-learn, it's called `inverse_transform`). It...
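For example, with PCA the round trip looks like this (untested sketch on placeholder data):

```julia
using MultivariateStats

X = randn(5, 100)
M = fit(PCA, X; maxoutdim=2)

Y    = transform(M, X)      # 5×100 -> 2×100 (forward, dimensionality reduction)
Xrec = reconstruct(M, Y)    # 2×100 -> 5×100 (approximate inverse)
```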
The optimization problem you've mentioned is solved by kernel ridge regression. See "3.1 Estimation of the Pre-Image Map" from ["Learning to find pre-images", Bakır, 2004](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.420.6617).
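If this is about `KernelPCA`, my understanding is that the pre-image map can be learned at fit time via the `inverse` and `β` keywords (a rough sketch; keyword names are as I recall them from the docstring and may differ between versions):

```julia
using MultivariateStats

X = randn(5, 100)

# learn the pre-image map (kernel ridge regression) along with Kernel PCA;
# β is the regularization parameter of that ridge regression
M = fit(KernelPCA, X; maxoutdim=2, inverse=true, β=1.0)

Y    = transform(M, X)
Xpre = reconstruct(M, Y)   # approximate pre-images in the input space
```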