hyperparameters

Hyperparameter tuning for TensorFlow

对着背影说爱祢 submitted on 2019-12-29 04:02:35
Question: I am searching for a hyperparameter tuning package for code written directly in TensorFlow (not Keras or TFLearn). Could you make some suggestions? Answer 1: Usually you don't need to couple your hyperparameter optimisation logic with the optimised model (unless your hyperparameter optimisation logic is specific to the kind of model you are training, in which case you would need to tell us a bit more). There are several tools and packages available for the task. Here is a good paper on the
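A minimal sketch of the decoupling the answer describes: a plain random-search driver in Python that treats the TensorFlow training code as an opaque function. The train_and_eval function, the search ranges, and the returned score are hypothetical placeholders, not part of the original question.

    import random

    def train_and_eval(learning_rate, batch_size):
        # Hypothetical stand-in: build the TensorFlow graph, train,
        # and return a validation score for this configuration.
        return 0.0  # replace with a real metric

    def random_search(n_trials=20, seed=0):
        rng = random.Random(seed)
        best_score, best_params = float("-inf"), None
        for _ in range(n_trials):
            params = {
                "learning_rate": 10 ** rng.uniform(-5, -1),  # log-uniform draw
                "batch_size": rng.choice([32, 64, 128, 256]),
            }
            score = train_and_eval(**params)
            if score > best_score:
                best_score, best_params = score, params
        return best_params, best_score

Because the search loop only ever calls train_and_eval, the same driver works for any model, which is exactly the separation of concerns the answer recommends.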

R-MLR: get tuned hyperparameters for a wrapped learner

走远了吗. submitted on 2019-12-24 06:17:13
Question: I'm building an xgboost classification task in R using the mlr package:

    # define task
    Task <- mlr::makeClassifTask(id = "classif.xgboost",
                                 data = df,
                                 target = "response",
                                 weights = NULL,
                                 positive = "yes",
                                 check.data = TRUE,
                                 blocking = folds)

    # make a base learner
    lrnBase <- makeLearner(cl = "classif.xgboost",
                           predict.type = "prob",  # "response" (= labels) or "prob" (= labels and probabilities)
                           predict.threshold = NULL)

I have to undersample one of my classes:

    lrnUnder <-
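For readers working in Python rather than mlr, the analogous construction, wrapping a learner with undersampling and later reading back the tuned hyperparameters, might look like the sketch below using scikit-learn, imbalanced-learn, and xgboost; the grid, scoring choice, and data are assumptions, not taken from the question.

    from imblearn.pipeline import Pipeline
    from imblearn.under_sampling import RandomUnderSampler
    from sklearn.model_selection import GridSearchCV
    from xgboost import XGBClassifier

    pipe = Pipeline([
        ("under", RandomUnderSampler(random_state=0)),  # undersampling wrapper
        ("clf", XGBClassifier()),                       # wrapped base learner
    ])

    param_grid = {"clf__max_depth": [3, 6], "clf__n_estimators": [100, 300]}
    search = GridSearchCV(pipe, param_grid, scoring="roc_auc", cv=5)
    # After search.fit(X, y), search.best_params_ holds the tuned settings of the
    # wrapped learner -- the counterpart of extracting them from the mlr wrapper.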

Can't pickle _thread.RLock objects when running Tune from the Ray package for Python (hyperparameter tuning)

不羁的心 submitted on 2019-12-24 00:36:29
Question: I am trying to do hyperparameter tuning with the Tune package of Ray. Shown below is my code:

    # Disable linter warnings to maintain consistency with tutorial.
    # pylint: disable=invalid-name
    # pylint: disable=g-bad-import-order
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    import tensorflow as tf
    import matplotlib as mplt
    mplt.use('agg')  # Must be before importing matplotlib.pyplot or pylab!
    import matplotlib.pyplot as plt
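The usual cause of this pickling error is that the trainable captures unpicklable state (for example a tf.Session, a graph, or a matplotlib figure) at module or constructor scope, and Ray must serialize the trainable to ship it to worker processes. Below is a minimal sketch of the common fix, creating all TensorFlow state inside a function-based trainable; the config values are made up, and the API shown assumes a reasonably recent Ray version.

    from ray import tune

    def trainable(config):
        # Build all TensorFlow/matplotlib state inside the function, so Ray
        # only pickles the function itself, never a live session or figure.
        import tensorflow as tf
        lr = config["lr"]
        # ... build and train the model with learning rate `lr` here ...
        accuracy = 0.0  # placeholder for a real validation metric
        tune.report(mean_accuracy=accuracy)

    analysis = tune.run(trainable, config={"lr": tune.grid_search([1e-3, 1e-2])})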

Specify scoring metric in GridSearch function with hypopt package in Python

岁酱吖の submitted on 2019-12-22 19:32:02
Question: I'm using the GridSearch function from the hypopt package to run my hyperparameter search against a specified validation set. The default metric for classification seems to be accuracy (I'm not sure). Here I want to use the F1 score as the metric, but I don't know where to specify it. I looked at the documentation but am still confused. Does anyone familiar with the hypopt package know how I can do this? Thanks a lot in advance.

    from hypopt import GridSearch
    log_reg_params = {"penalty": ['l1
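One dependable way to get the same behaviour, a grid search scored by F1 on a fixed validation set, is plain scikit-learn, where the metric is set explicitly: PredefinedSplit marks which rows form the validation fold, and scoring="f1" selects the F1 score. This is a sketch of that alternative with toy data, not hypopt's own API.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, PredefinedSplit

    # Toy data standing in for the asker's train/validation split.
    X, y = make_classification(n_samples=200, random_state=0)
    X_train, X_val, y_train, y_val = X[:150], X[150:], y[:150], y[150:]

    # -1 marks rows used only for training; 0 marks the validation fold.
    test_fold = np.concatenate([np.full(len(X_train), -1),
                                np.zeros(len(X_val), dtype=int)])

    search = GridSearchCV(
        LogisticRegression(solver="liblinear"),
        param_grid={"penalty": ["l1", "l2"], "C": [0.1, 1, 10]},
        scoring="f1",  # the metric is chosen explicitly here
        cv=PredefinedSplit(test_fold),
    )
    search.fit(np.vstack([X_train, X_val]), np.concatenate([y_train, y_val]))
    print(search.best_params_)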

Multidimensional hyperparameter search with vw-hypersearch in Vowpal Wabbit

戏子无情 submitted on 2019-12-22 10:35:21
Question: vw-hypersearch is the Vowpal Wabbit wrapper intended to optimize hyperparameters in vw models: regularization rates, learning rates and decays, minibatches, bootstrap sizes, etc. The tutorial for vw-hypersearch gives the following example:

    vw-hypersearch 1e-10 5e-4 vw --l1 % train.dat

Here % marks the parameter to be optimized, and 1e-10 and 5e-4 are the lower and upper bounds of the search interval. The library uses the golden-section search method to minimize the number of iterations.
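Since golden-section search optimises a single parameter (the one marked %), a multidimensional search usually means wrapping vw in your own loop. A rough Python sketch of that idea, assuming vw is on the PATH, train.dat exists, and the "average loss" line of vw's stderr output is a usable objective; the grids are hypothetical.

    import itertools
    import re
    import subprocess

    def vw_loss(l1, l2):
        # Run vw with the given regularisation and parse "average loss" from stderr.
        cmd = ["vw", "--l1", str(l1), "--l2", str(l2), "train.dat"]
        out = subprocess.run(cmd, capture_output=True, text=True).stderr
        match = re.search(r"average loss\s*=\s*([0-9.eE+-]+)", out)
        return float(match.group(1)) if match else float("inf")

    grid_l1 = [1e-10, 1e-8, 1e-6]
    grid_l2 = [1e-8, 1e-6, 1e-4]
    best = min(itertools.product(grid_l1, grid_l2), key=lambda p: vw_loss(*p))
    print("best (l1, l2):", best)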

TensorFlow Object Detection API - Hyperparameter Tuning & Grid Search

好久不见. submitted on 2019-12-13 02:48:34
Question: I am currently working with the TensorFlow Object Detection API and want to fine-tune a pre-trained model, so hyperparameter tuning is required. Does the API already provide some kind of hyperparameter tuning (like a grid search)? If there is nothing available, how can I implement a simple grid search to tune the most relevant hyperparameters? Furthermore, does the API provide some kind of early-stopping function that automatically aborts the training process if the accuracy
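A common pattern when a framework offers no built-in grid search is an outer driver that rewrites the training configuration and launches one run per combination. The sketch below is generic and hypothetical: train_and_eval stands in for whatever writes the parameters into a pipeline config copy, launches training, and returns a validation metric.

    import itertools

    def train_and_eval(params):
        # Hypothetical: write `params` into a copy of the pipeline config, launch
        # one training run, and return the resulting validation metric (e.g. mAP).
        return 0.0  # placeholder

    grid = {"learning_rate": [1e-4, 1e-3], "batch_size": [8, 16]}
    results = []
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid, values))
        results.append((train_and_eval(params), params))

    best_metric, best_params = max(results, key=lambda r: r[0])
    print(best_metric, best_params)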

ap_uniform_sampler() missing 1 required positional argument: 'high' in the Ray Tune package for Python

萝らか妹 submitted on 2019-12-12 13:07:11
Question: I am trying to use the Ray Tune package for hyperparameter tuning of an LSTM implemented in pure TensorFlow. I used the Hyperband scheduler and the HyperOptSearch algorithm, and I am also using the trainable class method. When I try to run it I get the following error:

    TypeError: ap_uniform_sampler() missing 1 required positional argument: 'high'

Shown below is the stack trace:

    FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In
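This TypeError typically means hp.uniform was called without its upper bound: hp.uniform takes a label, a low value, and a high value. A minimal sketch of a correctly formed hyperopt space handed to Ray's HyperOptSearch follows; the import path and constructor arguments vary across Ray versions, so treat those details as assumptions.

    from hyperopt import hp
    from ray.tune.suggest.hyperopt import HyperOptSearch  # import path in older Ray releases

    space = {
        "lr": hp.uniform("lr", 1e-4, 1e-1),            # label, low, high -- 'high' is required
        "hidden": hp.choice("hidden", [64, 128, 256]),
    }
    search_alg = HyperOptSearch(space, metric="mean_accuracy", mode="max")
    # tune.run(my_trainable, search_alg=search_alg, num_samples=20)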

How to do a random search within a specified grid with the caret package?

空扰寡人 submitted on 2019-12-12 01:26:20
Question: I wonder whether it is possible to use random search within a predefined grid. For example, my grid has alpha and lambda for the glmnet method; alpha is between 0 and 1, and lambda is between -10 and 10. I want random search to try 5 random points within these bounds. I wrote the following code for grid search and it works fine, but I cannot modify it for a random search within a bound:

    rand_ctrl <- trainControl(method = "repeatedcv", repeats = 5, search = "random")
    grid <- expand.grid(alpha=seq(0,1,0.1)
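The underlying idea, drawing a fixed number of random points from a predefined grid and evaluating only those, is easy to see with scikit-learn's ParameterSampler; this is offered only as a Python illustration of the concept, not as caret code.

    import numpy as np
    from sklearn.model_selection import ParameterSampler

    grid = {"alpha": np.arange(0.0, 1.01, 0.1), "lambda": np.arange(-10.0, 10.5, 0.5)}
    # Draw 5 random (alpha, lambda) combinations from the bounded grid.
    for params in ParameterSampler(grid, n_iter=5, random_state=1):
        print(params)

The caret counterpart of the same move is to build the full expand.grid data frame, sample 5 rows from it, and pass that subset as tuneGrid.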

GridSearch for a doc2vec model built using gensim

时间秒杀一切 submitted on 2019-12-11 08:44:02
Question: I am trying to find the best hyperparameters for my trained gensim doc2vec model, which takes a document as input and creates its document embedding. My training data consists of text documents, but it has no labels, i.e. I just have 'X' but not 'y'. I found some questions here related to what I am trying to do, but all of the proposed solutions are for supervised models, none for an unsupervised one like mine. Here is the code where I am training my doc2vec model:

    def train_doc2vec( self, X:
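Without labels, one sanity check described in the gensim documentation can serve as an unsupervised score: re-infer a vector for each training document and measure how often the most similar document in the model is the document itself. A rough sketch of a grid search built on that idea; the corpus, the grid, and the vector-store attribute (docvecs in gensim 3.x, dv in 4.x) are assumptions.

    import itertools
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    docs = ["some example text", "another short document", "a third one"]
    corpus = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]

    def self_similarity(model, corpus):
        # Fraction of documents whose re-inferred vector ranks itself first.
        hits = 0
        for doc in corpus:
            vec = model.infer_vector(doc.words)
            top_tag, _ = model.dv.most_similar([vec], topn=1)[0]  # model.docvecs in gensim 3.x
            hits += int(top_tag == doc.tags[0])
        return hits / len(corpus)

    grid = {"vector_size": [50, 100], "window": [2, 5], "epochs": [20, 40]}
    best_score, best_params = -1.0, None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid, values))
        model = Doc2Vec(corpus, min_count=1, **params)
        score = self_similarity(model, corpus)
        if score > best_score:
            best_score, best_params = score, params
    print(best_params, best_score)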