How can an interaction design matrix be created from categorical variables?

[愿得一人] 2021-01-20 15:31

I'm coming from mainly working in R for statistical modeling / machine learning and am looking to improve my skills in Python. I am wondering about the best way to create a design matrix of interaction terms from categorical variables.

2 Answers
  • 2021-01-20 15:50

    Now being faced with a similar problem, wanting an easy way to integrate specific interactions from a baseline OLS model from the literature to compare against ML approaches, I came across patsy (http://patsy.readthedocs.io/en/latest/overview.html) and this scikit-learn integration, patsylearn (https://github.com/amueller/patsylearn).

    Below is how the interaction variables can be passed to the model:

    from sklearn.linear_model import LinearRegression
    from patsylearn import PatsyModel
    # Q('...') quotes the hyphenated column name so patsy does not read it as subtraction
    model = PatsyModel(LinearRegression(), "Q('Play-Tennis') ~ C(Outlook):C(Temperature) + C(Outlook):C(Humidity) + C(Outlook):C(Wind)")
    

    Note that in this formulation you don't need the OneHotEncoder(), as the C() in the formula tells the patsy interpreter that these are categorical variables and they are one-hot encoded for you. Read more about it in the documentation (http://patsy.readthedocs.io/en/latest/categorical-coding.html).

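    As a quick illustration (a minimal sketch on a made-up toy frame, not the data from the question), you can inspect what patsy builds for such a formula directly with dmatrix:

    import pandas as pd
    from patsy import dmatrix

    # toy frame invented for this example, just to show the categorical encoding
    toy = pd.DataFrame({"Outlook": ["Sunny", "Rain", "Overcast", "Sunny"],
                        "Wind": ["Weak", "Strong", "Weak", "Strong"]})
    design = dmatrix("C(Outlook):C(Wind)", toy, return_type="dataframe")
    print(design.columns.tolist())  # the interaction columns patsy constructed
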
    Alternatively, you could use the PatsyTransformer, which I prefer, as it allows easy integration into scikit-learn Pipelines (a sketch follows the snippet below):

    from patsylearn import PatsyTransformer
    transformer = PatsyTransformer("C(Outlook):C(Temperature) + C(Outlook):C(Humidity) + C(Outlook):C(Wind)")
    
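    For example, a minimal sketch of dropping the transformer into a Pipeline (assuming, as in the patsylearn README, that fit/transform accept a DataFrame with the columns named above):

    from sklearn.pipeline import Pipeline
    from sklearn.linear_model import LinearRegression
    from patsylearn import PatsyTransformer

    # the transformer turns the raw DataFrame into the interaction design matrix,
    # and the regressor then fits on that matrix
    pipeline = Pipeline([
        ("design", PatsyTransformer("C(Outlook):C(Temperature) + C(Outlook):C(Humidity) + C(Outlook):C(Wind)")),
        ("regression", LinearRegression()),
    ])
    # pipeline.fit(df, df["Play-Tennis"])  # df is the question's DataFrame (assumed)
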
  • 2021-01-20 15:51

    If you use the OneHotEncoder on your design matrix to obtain a one-hot design matrix, then interactions are nothing other than products of columns. If X_1hot is your one-hot design matrix, with samples as rows, then for 2nd-order interactions you can write

    # entry (i, j * d + k), with d = X_1hot.shape[1], is X_1hot[i, j] * X_1hot[i, k]
    X_2nd_order = (X_1hot[:, np.newaxis, :] * X_1hot[:, :, np.newaxis]).reshape(len(X_1hot), -1)
    

    There will be duplicate interactions (each pair of columns appears twice, once in each order), and it will contain the original features as well (each one-hot column multiplied by itself gives that column back).

    Going to arbitrary order will make your design matrix explode in size. If you really want to do that, you should look into kernelizing with a polynomial kernel, which lets you go to arbitrary degrees easily.
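
    As a hedged sketch of that kernel route (degree and coef0 are arbitrary choices for illustration, not values from the question):

    from sklearn.metrics.pairwise import polynomial_kernel

    # K[i, j] = (gamma * <x_i, x_j> + coef0) ** degree implicitly covers all
    # interaction terms up to `degree` without materialising them as columns
    K = polynomial_kernel(X_1hot, degree=3, coef0=1)
    # K can then be passed to a kernel method, e.g. sklearn.kernel_ridge.KernelRidge(kernel="precomputed")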

    Using the data frame you present, we can proceed as follows. First, a manual way to construct a one-hot design out of the data frame:

    import numpy as np

    indicators = []
    state_names = []
    for column_name in df.columns:
        column = df[column_name].values
        # compare each value to the column's sorted unique levels -> one indicator column per level
        one_hot = (column[:, np.newaxis] == np.unique(column)).astype(float)
        indicators.append(one_hot)
        state_names += ["%s__%s" % (column_name, state) for state in np.unique(column)]

    X_1hot = np.hstack(indicators)
    

    The column names are then stored in state_names and the indicator matrix in X_1hot. Then we calculate the second-order features:

    X_2nd_order = (X_1hot[:, np.newaxis, :] * X_1hot[:, :, np.newaxis]).reshape(len(X_1hot), -1)
    

    To get the names of the columns of the second-order matrix, we construct them like this:

    from itertools import product
    # product(state_names, state_names) matches the column order produced by the reshape above
    one_hot_interaction_names = ["%s___%s" % (column1, column2)
                                 for column1, column2 in product(state_names, state_names)]
    
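    As a small usage sketch (pandas assumed here, it is not used elsewhere in this answer), the names line up with the columns of X_2nd_order, so you can label the result for inspection:

    import pandas as pd

    # one name per column, in the same (slow, fast) order produced by the reshape above
    X_2nd_order_df = pd.DataFrame(X_2nd_order, columns=one_hot_interaction_names)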