eli5

Question about Permutation Importance on LSTM Keras

Submitted by 拟墨画扇 on 2020-07-19 11:01:29
Question:

from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
import eli5
from eli5.sklearn import PermutationImportance

model = Sequential()
model.add(LSTM(units=30, return_sequences=True, input_shape=(X.shape[1], 421)))
model.add(Dropout(rate=0.2))
model.add(LSTM(units=30, return_sequences=True))
model.add(LSTM(units=30))
model.add(Dense(units=1, activation='relu'))

perm = PermutationImportance(model, scoring='accuracy', random_state=1).fit(X, y, epochs=500, batch_size=8)
eli5.show
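Setting aside the Keras-wrapper details in the question above, the algorithm behind PermutationImportance is simple to state: shuffle one feature column, re-score the model, and record the drop in score. A minimal pure-Python sketch of that idea, with a toy scorer that only uses feature 0 (this is an illustration of the technique, not eli5's implementation):

```python
import random

def permutation_importance(score, X, y, n_repeats=5, seed=1):
    """Mean drop in score when each feature column is shuffled.

    score(X, y) -> float, higher is better; X is a list of rows.
    Returns one mean importance value per feature.
    """
    rng = random.Random(seed)
    base = score(X, y)
    n_features = len(X[0])
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            # Rebuild the dataset with column j permuted.
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - score(X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy scorer: negative squared error of a "model" that predicts y from feature 0 only.
def score(X, y):
    return -sum((row[0] - t) ** 2 for row, t in zip(X, y))

X = [[float(i), float(i % 3)] for i in range(20)]
y = [float(i) for i in range(20)]
imp = permutation_importance(score, X, y)
# Shuffling feature 0 hurts the score, so imp[0] > 0;
# the scorer ignores feature 1, so imp[1] is 0.
```

A feature the model never uses gets importance zero, which is exactly the diagnostic value of the method.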

How to get feature names from ELI5 when transformer includes an embedded pipeline

Submitted by [亡魂溺海] on 2020-07-07 05:53:43
Question: The ELI5 library provides the function transform_feature_names to retrieve the feature names for the output of an sklearn transformer. The documentation says that the function works out of the box when the transformer includes nested Pipelines. I'm trying to get the function to work on a simplified version of the example in the answer to SO 57528350. My simplified example doesn't need Pipeline, but in real life I will need it in order to add steps to categorical_transformer, and I will also
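For background, transform_feature_names propagates names by dispatching on the transformer's type and recursing into the steps of a Pipeline. The dispatch pattern can be sketched in plain Python with functools.singledispatch; the Scale and Pipeline classes below are hypothetical stand-ins, not eli5's or sklearn's actual code:

```python
from functools import singledispatch

class Scale:                      # stand-in for a simple transformer
    pass

class Pipeline:                   # stand-in for a pipeline of named steps
    def __init__(self, steps):
        self.steps = steps        # list of (name, transformer) pairs

@singledispatch
def transform_feature_names(transformer, in_names):
    # Default: assume the transformer keeps feature names unchanged.
    return in_names

@transform_feature_names.register(Scale)
def _scale_names(transformer, in_names):
    return [f"scale({n})" for n in in_names]

@transform_feature_names.register(Pipeline)
def _pipeline_names(transformer, in_names):
    # Thread the names through each step in order, recursing as needed.
    names = in_names
    for _, step in transformer.steps:
        names = transform_feature_names(step, names)
    return names

pipe = Pipeline([("s1", Scale()), ("s2", Scale())])
print(transform_feature_names(pipe, ["age", "income"]))
# ['scale(scale(age))', 'scale(scale(income))']
```

Because the Pipeline handler recurses, nesting one pipeline inside another works automatically, which is the "out of the box" behaviour the question refers to.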

`eli5.show_weights` displayed standard deviation does not agree with the values in `feature_importances_std_`

Submitted by 不问归期 on 2020-03-04 18:18:52
Question: The PermutationImportance object has some nice attributes such as feature_importances_ and feature_importances_std_. To visualize these attributes in an HTML style I used the eli5.show_weights function. However, I noticed that the displayed standard deviation does not agree with the values in feature_importances_std_. More specifically, I can see that the displayed HTML values are equal to feature_importances_std_ * 2. Why is that? Code:

from sklearn import datasets
import eli5
from eli5
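The factor of 2 the asker observed matches eli5's display convention: show_weights prints each weight as mean ± 2·std of the per-shuffle score drops, i.e. roughly a 95% interval if those drops were normally distributed. A small sketch of that arithmetic with toy numbers and the statistics stdlib module (not eli5 internals; pstdev mirrors the ddof=0 standard deviation that numpy uses by default):

```python
import statistics

# Score drops for one feature over 5 shuffles (toy numbers).
drops = [0.10, 0.12, 0.08, 0.11, 0.09]

weight = statistics.mean(drops)    # what feature_importances_ would hold
std = statistics.pstdev(drops)     # what feature_importances_std_ would hold
displayed = 2 * std                # the "±" value rendered by show_weights

print(f"{weight:.4f} ± {displayed:.4f}")
# 0.1000 ± 0.0283
```

So the attribute and the HTML table do agree; the table just shows twice the stored standard deviation.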