pypls
Implementation of PLS-DA and OPLS-DA for high-dimensional data, such as MS data in metabolomics.
This package implements PLS-DA and OPLS-DA for the analysis of high-dimensional data derived from, for example, mass spectrometry in metabolomics. It also provides visualization of score plots, S-plots, jack-knife confidence intervals for loading profiles, and the number of mis-classifications in cross validation.
Prerequisites
This package was developed with Python 3.7 and requires the following packages:
numpy 1.17.2
scipy 1.3.1
matplotlib 3.1.3
tqdm 4.64.0
All of these packages, or newer versions, can be installed using pip.
Important
This package only supports binary classification; data with three or more classes cannot be handled directly. A workaround is pair-wise classification. As Prof. Richard G. Brereton pointed out in his paper [1], binary classification is recommended for PLS-related methods, which are not well suited to multi-class classification problems.
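The pair-wise workaround mentioned above can be sketched as follows; the subsetting helper is illustrative only and is not part of pypls:

```python
from itertools import combinations

import numpy as np

def pairwise_subsets(X, labels):
    """Split a multi-class data set into all pair-wise binary subsets."""
    labels = np.asarray(labels)
    for a, b in combinations(np.unique(labels), 2):
        mask = (labels == a) | (labels == b)
        # each subset contains exactly two classes, so it can be fed to pypls
        yield (a, b), X[mask], labels[mask]

X = np.arange(12).reshape(6, 2)
labels = np.array(["a", "a", "b", "b", "c", "c"])
subsets = list(pairwise_subsets(X, labels))
# three binary problems: (a, b), (a, c), (b, c)
```

Each binary subset can then be passed to a separate `CrossValidation` object.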
Install
The latest release can be downloaded here. Uncompress the package and set the Python working directory there. Since the current version is not packaged, all modules must be run from that working directory.
Running the code
```python
# import cross validation module
import cross_validation
# import plotting functions
import plotting
```
- Initialize the cross validation object, e.g. for 10-fold cross validation using OPLS-DA:

```python
cv = cross_validation.CrossValidation(kfold=10, estimator="opls")
```
Parameters:
- `kfold`: number of folds in cross validation. For leave-one-out cross validation, set it to `n`, where `n` is the number of samples.
- `estimator`: the classifier; valid values are `opls` and `pls`. Defaults to `opls`.
- `scaler`: scaling of the variable matrix.
  - `uv`: zero mean and unit variance scaling.
  - `pareto`: Pareto scaling. This is the default.
  - `minmax`: min-max scaling so that the range of each variable is between 0 and 1.
  - `mean`: zero mean scaling.
- Fit the model:

```python
cv.fit(X, labels)
```

`X` is the variable matrix of size `n` (rows) by `p` (columns), where `n` is the number of samples and `p` is the number of variables. `labels` can be numeric values or strings, with the number of elements equal to `n`.
- Permutation test [5, 6].

To check whether the constructed model is overfitting, a permutation test is commonly applied: the labels are randomly shuffled, and model construction and prediction are repeated on the shuffled labels many times. This package adopts the same strategy:

```python
cv.permutation_test()
```

Parameters:
- `num_perms`: number of permutations. Defaults to `10000`.
To get the p value, i.e. the significance of the constructed model, run

```python
cv.p(metric="q2")
```

Parameters:
- `"q2"`: Q2.
- `"error"`: mis-classification error rate.

IMPORTANT
The p value is calculated as [7]

p = (No. of permutation error rates <= non-permuted error rate + 1) / (n + 1)

if the mis-classification error rate (i.e., metric `"error"`) is used, or

p = (No. of permutation Q2 >= non-permuted Q2 + 1) / (n + 1)

if Q2 (i.e., metric `"q2"`) is used, where `n` is the number of permutations.
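The empirical p value formula of North et al. [7] can be written directly in numpy; the permutation metrics and observed metric below are made-up values for illustration:

```python
import numpy as np

def empirical_p(perm_metrics, observed, larger_is_better=True):
    """Empirical p value following North et al. [7]: (b + 1) / (n + 1),
    where b counts permutation metrics at least as good as the observed one."""
    perm_metrics = np.asarray(perm_metrics, dtype=float)
    if larger_is_better:     # e.g. Q2: larger values mean a better model
        b = np.sum(perm_metrics >= observed)
    else:                    # e.g. error rate: smaller values are better
        b = np.sum(perm_metrics <= observed)
    return (b + 1) / (perm_metrics.size + 1)

# 99 permutation Q2 values, none reaching the observed Q2 of 0.8
perm_q2 = np.linspace(-0.5, 0.5, 99)
p = empirical_p(perm_q2, 0.8)        # (0 + 1) / (99 + 1) = 0.01
```

Note that the "+1" in numerator and denominator keeps the p value away from zero even when no permutation beats the observed metric.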
- Visualization of results.

```python
# construct the plotting object
plots = plotting.Plots(cv)
```

- Number of mis-classifications at different numbers of components:

```python
plots.plot_cv_errors()
```

- Cross validated score plot:

```python
plots.plot_scores()
```
NOTE
For OPLS-DA, the predictive scores `tp` vs the first orthogonal scores `to` will be shown; for PLS, the first and second components will be shown.
- S-plot (only applicable to OPLS-DA):

```python
plots.splot()
```
- Loading profile with jack-knife confidence intervals (only applicable to OPLS-DA):

```python
means, intervals = plots.jackknife_loading_plot(alpha=0.05)
```

where `alpha` is the significance level, defaulting to `0.05`. `means` are the mean loadings, and `intervals` are the jack-knife confidence intervals.
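As a rough sketch of how a jack-knife confidence interval for a single loading can be computed from leave-one-out estimates (using a normal approximation; pypls may use a different distribution internally, and the names are illustrative):

```python
import numpy as np

def jackknife_ci(estimates, z=1.96):
    """Jack-knife mean and confidence interval from leave-one-out estimates.

    `estimates` holds the loading recomputed with each sample left out;
    z = 1.96 gives an approximate 95% interval (alpha = 0.05)."""
    estimates = np.asarray(estimates, dtype=float)
    n = estimates.size
    mean = estimates.mean()
    # jack-knife variance: (n - 1) / n * sum of squared deviations
    var = (n - 1) / n * np.sum((estimates - mean) ** 2)
    half_width = z * np.sqrt(var)
    return mean, (mean - half_width, mean + half_width)

mean, (lo, hi) = jackknife_ci([0.48, 0.52, 0.50, 0.49, 0.51])
```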
- Permutation test plot:

```python
plots.plot_permutation_test(metric="q2")
```

Parameters:
- `"q2"`: Q2.
- `"error"`: mis-classification error rate.

Two subplots will be generated to show the permutation test results:
- Correlation of permuted y to original y vs the model metric.
- Distribution of the permutation model metric, which is used to calculate the p value.
NOTE
For all plots above, setting `save_plot=True` and `file_name=some_string.png` saves the plot to `some_string.png` with `dpi=1200`.
- Model assessment.

```python
# R2X
cv.R2X_cum
# Q2
cv.q2
# R2y
cv.R2y_cum
# Number of mis-classifications
cv.min_nmc
```

To check the `R2X` and `R2y` at the optimal number of components, i.e., `cv.optimal_component_num`, call `cv.R2X` and `cv.R2y`.
- Access other metrics.
  - Cross validated predictive scores: `cv.scores`
  - Cross validated predictive loadings: `cv.loadings_cv`
  - Optimal number of components determined by cross validation: `cv.optimal_component_num`
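The Q2 reported by `cv.q2` is the cross validated analogue of R2y. A common definition (assumed here; the package may differ in detail) is 1 - PRESS/TSS computed on cross validated predictions:

```python
import numpy as np

def q2(y, y_pred_cv):
    """Q2 = 1 - PRESS / TSS, using cross validated predictions."""
    y = np.asarray(y, dtype=float)
    y_pred_cv = np.asarray(y_pred_cv, dtype=float)
    press = np.sum((y - y_pred_cv) ** 2)     # prediction error sum of squares
    tss = np.sum((y - y.mean()) ** 2)        # total sum of squares
    return 1.0 - press / tss

y = np.array([1.0, 1.0, -1.0, -1.0])
perfect = q2(y, y)                           # 1.0 for perfect predictions
```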
- Prediction of new data.

To predict the class, use

```python
predicted_scores = cv.predict(X, return_scores=False)
predicted_groups = (predicted_scores >= 0).astype(int)
```

This outputs values of `0` and `1` to indicate the groups of the samples submitted for prediction. The `cv` object has the attribute `groups`, which stores the group names assigned in the `labels` input for training. To access the group names after prediction, use

```python
print([cv.groups[g] for g in predicted_groups])
```

Setting `return_scores=True` returns the predictive scores for OPLS.
- Other methods.

`cv` provides the method `reset_optimal_num_component` to reset the optimal number of components manually, instead of defaulting to the number that minimizes mis-classifications:

```python
cv.reset_optimal_num_component(n)
```
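The score-to-group mapping used in prediction can be illustrated without a trained model; `groups` below stands in for the attribute stored by `cv`, and the scores are dummy values:

```python
import numpy as np

# dummy predictive scores, as would be returned by cv.predict
predicted_scores = np.array([-1.2, 0.3, 2.5, -0.1])
# maps 0/1 indices to the label names seen during training (illustrative)
groups = {0: "control", 1: "case"}

# scores >= 0 fall in group 1, scores < 0 in group 0
predicted_groups = (predicted_scores >= 0).astype(int)
names = [groups[g] for g in predicted_groups]
# names -> ["control", "case", "case", "control"]
```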
Author
Nai-ping Dong
Email: [email protected]
License
This project is licensed under the Apache 2.0 License - see the LICENSE for details.
References
[1] Brereton RG, Lloyd GR. Partial least squares discriminant analysis:
taking the magic away. J Chemometr. 2014, 18, 213-225.
[2] Trygg J, Wold S. Orthogonal Projections to Latent Structures (O-PLS). J
Chemometr. 2002, 16, 119-128.
[3] Trygg J, Wold S. O2-PLS, a two-block (X-Y) latent variable regression
(LVR) method with an integral OSC filter. J Chemometr. 2003, 17, 53-64.
[4] Wiklund S, et al. Visualization of GC/TOF-MS-Based Metabolomics
Data for Identification of Biochemically Interesting Compounds Using
OPLS Class Models. Anal Chem. 2008, 80, 115-122.
[5] Bijlsma S, et al. Large-Scale Human Metabolomics Studies: A Strategy for
Data (Pre-) Processing and Validation. Anal Chem. 2006, 78, 2, 567–574.
[6] Ojala M, et al. Permutation Tests for Studying Classifier Performance.
J Mach Learn Res. 2010, 11, 1833−1863.
[7] North BV, et al. A Note on the Calculation of Empirical P Values from
Monte Carlo Procedures. Am J Hum Genet. 2002, 71(2), 439–441.