Plot PCA loadings and loading labels in a biplot in sklearn (like R's autoplot)

清酒与你 2020-12-07 21:59

I saw this tutorial in R with autoplot. They plotted the loadings and loading labels:

autoplot(prcomp(df), data = iris, colour = 'Species',
         loadings = TRUE, loadings.colour = 'blue',
         loadings.label = TRUE, loadings.label.size = 3)

How can I do the same with a PCA fitted in sklearn?

3 Answers
  • 2020-12-07 22:32

    Try the pca library. It will plot the explained variance and create a biplot.

    pip install pca
    
    from pca import pca
    from sklearn.datasets import load_iris
    
    # Example data: the iris measurements from the question.
    X = load_iris().data
    
    # Initialize to reduce the data up to the number of components that explains 95% of the variance.
    model = pca(n_components=0.95)
    
    # Or reduce the data to 2 PCs
    model = pca(n_components=2)
    
    # Fit and transform the data
    results = model.fit_transform(X)
    
    # Plot explained variance
    fig, ax = model.plot()
    
    # Scatter first 2 PCs
    fig, ax = model.scatter()
    
    # Make biplot with the number of features
    fig, ax = model.biplot(n_feat=4)
    
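    To get the original feature names on the biplot arrows, one option is to pass the data as a pandas DataFrame instead of a bare array. This is a minimal sketch, assuming fit_transform accepts a DataFrame and uses its column names as feature labels (check the pca library's documentation if it does not):
    
    import pandas as pd
    from pca import pca
    from sklearn.datasets import load_iris
    
    iris = load_iris()
    # Assumption: the pca library reads feature labels from the DataFrame columns.
    X_df = pd.DataFrame(iris.data, columns=iris.feature_names)
    
    model = pca(n_components=2)
    results = model.fit_transform(X_df)
    
    # Biplot with arrows labeled by the iris feature names (if the assumption holds).
    fig, ax = model.biplot(n_feat=4)
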
  • 2020-12-07 22:42

    I found the answer in @teddyroland's script here: https://github.com/teddyroland/python-biplot/blob/master/biplot.py

  • 2020-12-07 22:43

    You could do something like the following by creating a biplot function.

    Nice article here: https://towardsdatascience.com/pca-clearly-explained-how-when-why-to-use-it-and-feature-importance-a-guide-in-python-7c274582c37e?source=friends_link&sk=65bf5440e444c24aff192fedf9f8b64f

    In this example I am using the iris data:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn import datasets
    from sklearn.decomposition import PCA
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    
    iris = datasets.load_iris()
    X = iris.data
    y = iris.target
    
    # In general, it's a good idea to scale the data prior to PCA.
    scaler = StandardScaler()
    scaler.fit(X)
    X = scaler.transform(X)
    pca = PCA()
    x_new = pca.fit_transform(X)
    
    def myplot(score, coeff, labels=None):
        # Scores (the projected samples) on the first two PCs
        xs = score[:, 0]
        ys = score[:, 1]
        # One loading vector (arrow) per original feature
        n = coeff.shape[0]
        # Rescale the scores so they fit in the same [-1, 1] box as the loadings
        scalex = 1.0 / (xs.max() - xs.min())
        scaley = 1.0 / (ys.max() - ys.min())
        plt.scatter(xs * scalex, ys * scaley, c=y)
        for i in range(n):
            plt.arrow(0, 0, coeff[i, 0], coeff[i, 1], color='r', alpha=0.5)
            if labels is None:
                plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, "Var" + str(i + 1), color='g', ha='center', va='center')
            else:
                plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, labels[i], color='g', ha='center', va='center')
        plt.xlim(-1, 1)
        plt.ylim(-1, 1)
        plt.xlabel("PC1")
        plt.ylabel("PC2")
        plt.grid()
    
    # Call the function. Use only the first 2 PCs.
    myplot(x_new[:, 0:2], np.transpose(pca.components_[0:2, :]))
    plt.show()
    

    RESULT: [biplot figure: iris samples scattered on PC1 vs PC2, with red loading arrows and green variable labels Var1..Var4]
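
    If you want the arrows labeled with the actual iris feature names instead of the generic Var1..Var4, you can pass them through the function's labels argument, e.g.:
    
    # Same plot, but with the arrows labeled by the iris feature names.
    myplot(x_new[:, 0:2], np.transpose(pca.components_[0:2, :]), labels=iris.feature_names)
    plt.show()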

