A scikit-learn compatible Python package for multi-view dimensionality reduction, including multi-view canonical correlation analysis (MCCA) and angle-based joint and individual variation explained (AJIVE).
git clone https://github.com/idc9/mvdr.git
cd mvdr
python setup.py install
Note that mvdr.ajive assumes you have installed ya_pca,
which can be found at https://github.com/idc9/ya_pca.git.
from mvdr.mcca.mcca import MCCA
from mvdr.mcca.k_mcca import KMCCA
from mvdr.toy_data.joint_fact_model import sample_joint_factor_model
# sample data from a joint factor model with 3 components
# each data block is X_b = U diag(svals) W_b^T + E_b, where
# the joint scores matrix U and each block loadings matrix W_b are orthonormal, and E_b is a random noise matrix
Xs, U_true, Ws_true = sample_joint_factor_model()
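To make the sampling model concrete, here is a minimal NumPy sketch of the joint factor model described in the comments above. It is an illustration of the formula X_b = U diag(svals) W_b^T + E_b, not the package's actual sample_joint_factor_model implementation; the dimensions, singular values, and noise scale are arbitrary choices for the example.

```python
import numpy as np

# illustrative (hypothetical) settings: 3 blocks, 3 joint components
rng = np.random.default_rng(0)
n_samples, rank = 200, 3
block_dims = [10, 20, 30]
svals = np.array([30.0, 20.0, 10.0])

# orthonormal joint scores U (n_samples x rank) via QR of a Gaussian matrix
U, _ = np.linalg.qr(rng.normal(size=(n_samples, rank)))

Xs = []
for d in block_dims:
    # orthonormal block loadings W_b (d x rank)
    W, _ = np.linalg.qr(rng.normal(size=(d, rank)))
    # random noise matrix E_b
    E = rng.normal(size=(n_samples, d))
    # data block: X_b = U diag(svals) W_b^T + E_b
    Xs.append(U @ np.diag(svals) @ W.T + E)
```

Each block in Xs shares the same joint scores U but has its own loadings and noise, which is exactly the structure MCCA tries to recover.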
# fit MCCA (this is the SUMCORR-AVGVAR flavor of multi-CCA)
mcca = MCCA(n_components=3).fit(Xs)
# MCCA with regularization
mcca = MCCA(n_components=3, regs=0.1).fit(Xs)
# informative MCCA where we first apply PCA to each data matrix
mcca = MCCA(n_components=3, signal_ranks=[5, 5, 5]).fit(Xs)
# kernel-MCCA
kmcca = KMCCA(n_components=3, regs=0.1, kernel='linear')
kmcca.fit(Xs)
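For intuition about what the SUMCORR-AVGVAR flavor of MCCA computes, here is a rough NumPy sketch: take an orthonormal basis of each centered block and SVD the column-wise concatenation; the left singular vectors give common normalized scores. This is a simplified illustration under that formulation, not the package's exact algorithm, and mcca_sketch and its toy data are hypothetical names for this example.

```python
import numpy as np

def mcca_sketch(Xs, n_components):
    """Toy SUMCORR-AVGVAR MCCA via SVD of concatenated orthonormal bases."""
    Qs = []
    for X in Xs:
        Xc = X - X.mean(axis=0)          # center each block
        Q, _ = np.linalg.qr(Xc)          # orthonormal basis of the block
        Qs.append(Q)
    # SVD of the concatenated bases; squared singular values are bounded
    # by the number of blocks and are large along shared directions
    U, s, _ = np.linalg.svd(np.hstack(Qs), full_matrices=False)
    return U[:, :n_components], s[:n_components]

# toy data: 3 blocks driven by the same 2-dimensional shared signal
rng = np.random.default_rng(0)
shared = rng.normal(size=(100, 2))
Xs = [shared @ rng.normal(size=(2, d)) + 0.1 * rng.normal(size=(100, d))
      for d in (5, 6, 7)]
scores, svals = mcca_sketch(Xs, n_components=2)
```

Directions shared across all blocks push the leading singular values toward sqrt(n_blocks), which is why the top scores capture the common signal.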
Additional documentation, examples, and code revisions are coming soon. For questions, issues, or feature requests, please reach out to Iain: [email protected].
We welcome contributions that make this a stronger package: data examples, bug fixes, spelling corrections, new features, etc.