Question
I am trying to get the variable importance of all predictors (or variables, or features) of a tuned support vector machine (SVM) model, fitted with e1071::svm through the mlr package in R. But I am not sure whether I am doing the assessment right. First, the idea:
To get an honestly tuned SVM model, I am following the nested-resampling tutorial, using repeated spatial cross-validation (SpRepCV) in the outer loop and spatial cross-validation (SpCV) in the inner loop. The parameters gamma and cost are tuned via random search.
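A small base-R note on the search space used below: cost and gamma are sampled as exponents on a log2 scale, and the trafo function maps a sampled exponent back to the actual parameter value (the bounds -12/15 and -15/6 are the ones from the code below):

```r
# The tuning below samples cost and gamma as exponents; the trafo maps
# a sampled exponent back to the actual parameter value.
trafo <- function(x) 2^x
trafo(-12)  # smallest cost actually tried: 2^-12
trafo(15)   # largest cost actually tried: 2^15 = 32768
```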
For the variable-importance assessment of all predictors, I would like to use permutation.importance, which, according to its description, is basically the aggregated difference between predictions with a feature permuted and unpermuted.
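To make that description concrete, here is a minimal base-R sketch of the permutation-importance idea (not mlr's implementation; the toy lm model, the feature names x1/x2 and the helper perm_importance are made up for illustration): fit a model, then measure how much a performance score drops when a single feature's values are shuffled.

```r
# Minimal base-R sketch of permutation importance: the performance decrease
# after randomly shuffling one feature, averaged over nmc permutations.
set.seed(1)
n   <- 200
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- dat$x1 + 0.1 * dat$x2 + rnorm(n)  # x1 matters much more than x2
fit <- lm(y ~ x1 + x2, data = dat)

# R^2 of the model's predictions on a data set
r2 <- function(model, newdata) {
  p <- predict(model, newdata)
  1 - sum((newdata$y - p)^2) / sum((newdata$y - mean(newdata$y))^2)
}

# mean R^2 decrease over nmc random permutations of one feature
perm_importance <- function(feature, model, data, nmc = 25) {
  base <- r2(model, data)
  drops <- replicate(nmc, {
    d <- data
    d[[feature]] <- sample(d[[feature]])
    base - r2(model, d)
  })
  mean(drops)
}

imp <- sapply(c("x1", "x2"), perm_importance, model = fit, data = dat)
imp  # x1 shows a far larger mean R^2 decrease than x2
```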
In mlr there are some filter functions to get variable importance, but at the same time a feature subset is created before model fitting, based on user-specified selection input (a threshold or a number of variables). However, I would like to retrieve the variable importance of all variables of every fitted model. (I know that learners such as random forest have an importance assessment built in.)
Right now I am calling mlr::generateFeatureImportanceData inside the extract argument of the resampling, which looks really awkward. So I am asking myself: is there an easier way?
Here is an example using the mlr development version:
## initialize libraries
# devtools::install_github("mlr-org/mlr") # using the development version of mlr
if(!require("pacman")) install.packages("pacman")
pacman::p_load("mlr", "ParamHelpers", "e1071", "parallelMap")
## create tuning setting
svm.ps <- ParamHelpers::makeParamSet(
  ParamHelpers::makeNumericParam("cost", lower = -12, upper = 15,
                                 trafo = function(x) 2^x),
  ParamHelpers::makeNumericParam("gamma", lower = -15, upper = 6,
                                 trafo = function(x) 2^x)
)
## create random search grid, small iteration number for example
ctrl.tune <- mlr::makeTuneControlRandom(maxit = 8)
# inner resampling loop, "SpCV"
inner <- mlr::makeResampleDesc("SpCV", iters = 3, predict = "both")
# outer resampling loop, "SpRepCV"
outer <- mlr::makeResampleDesc("SpRepCV", folds = 5, reps = 2, predict = "both")
## create learner - Support Vector Machine of the e1071-package
lrn.svm <- mlr::makeLearner("classif.svm", predict.type = "prob")
# ... tuning in inner resampling
lrn.svm.tune <- mlr::makeTuneWrapper(learner = lrn.svm, resampling = inner,
                                     measures = list(auc),
                                     par.set = svm.ps, control = ctrl.tune,
                                     show.info = FALSE)
## create function that calculate variable importance based on permutation
extractVarImpFunction <- function(x) {
  list(mlr::generateFeatureImportanceData(
    task = mlr::makeClassifTask(
      id = x$task.desc$id,
      data = mlr::getTaskData(mlr::spatial.task, subset = x$subset),
      target = x$task.desc$target,
      positive = x$task.desc$positive,
      coordinates = mlr::spatial.task$coordinates[x$subset, ]),
    method = "permutation.importance",
    learner = mlr::makeLearner(cl = "classif.svm",
                               predict.type = "prob",
                               cost = x$learner.model$opt.result$x$cost,
                               gamma = x$learner.model$opt.result$x$gamma),
    measure = list(mlr::auc), nmc = 10
  )$res)
}
## start resampling for getting variable importance of tuned models (outer)
# parallelize tuning
parallelMap::parallelStart(mode = "multicore", level = "mlr.tuneParams", cpus = 8)
res.VarImpTuned <- mlr::resample(learner = lrn.svm.tune, task = mlr::spatial.task,
                                 extract = extractVarImpFunction,
                                 resampling = outer, measures = list(auc),
                                 models = TRUE, show.info = TRUE)
parallelMap::parallelStop() # stop parallelization
## get mean auroc decrease
var.imp <- do.call(rbind, lapply(res.VarImpTuned$extract, FUN = function(x) x[[1]]))
mean.decr <- colMeans(var.imp)
var.imp <- data.frame(AUC_DECR = mean.decr, Variable = names(mean.decr))
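The aggregation step above just row-binds one per-feature importance table per outer-resampling iteration and averages column-wise. A toy illustration of that step alone (the feature names elevation/slope and the AUC-decrease values are made up):

```r
# Toy version of the aggregation: one row of per-feature AUC decreases per
# outer fold, stacked with rbind and averaged column-wise with colMeans.
fold1 <- data.frame(elevation = 0.10, slope = 0.04)
fold2 <- data.frame(elevation = 0.08, slope = 0.06)
imp.all   <- do.call(rbind, list(fold1, fold2))
mean.decr <- colMeans(imp.all)
var.imp   <- data.frame(AUC_DECR = mean.decr, Variable = names(mean.decr))
var.imp   # one row per feature, mean AUC decrease across folds
```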
Source: https://stackoverflow.com/questions/48836469/r-mlr-is-there-a-easy-way-to-get-the-variable-importance-of-tuned-support-vec