hyperparameters

Is it reasonable for L1/L2 regularization to cause all feature weights to be zero in Vowpal Wabbit?

江枫思渺然 submitted on 2019-12-04 17:17:38
I got a weird result from vw, which uses an online learning scheme for logistic regression: when I add --l1 or --l2 regularization, all predictions come out at 0.5 (meaning all feature weights are 0). Here's my command:

vw -d training_data.txt --loss_function logistic -f model_l1 --invert_hash model_readable_l1 --l1 0.05 --link logistic

...and here's the learning process info:

using l1 regularization = 0.05
final_regressor = model_l1
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
using no cache
Reading datafile = training_data.txt
num sources = 1
average since example example
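One thing worth checking first (a guess grounded in vw's documentation, not a certain diagnosis): vw applies --l1 as truncated-gradient shrinkage on every update, and 0.05 is a very aggressive setting; commonly used values are orders of magnitude smaller, around 1e-6 to 1e-8. A sketch of the same command with a weaker penalty:

```
vw -d training_data.txt --loss_function logistic --link logistic \
   --l1 1e-6 -f model_l1 --invert_hash model_readable_l1
```

If the readable model model_readable_l1 now shows nonzero weights, it was the penalty strength, not the regularization itself, that zeroed the model.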

GridSearchCV/RandomizedSearchCV with LSTM

邮差的信 submitted on 2019-12-04 05:37:44
Question: I am stuck trying to tune hyperparameters for an LSTM via RandomizedSearchCV. My code is below:

X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
X_test = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)

from imblearn.pipeline import Pipeline
from keras.initializers import RandomNormal

def create_model(activation_1='relu', activation_2='relu', neurons_input=1, neurons_hidden_1=1, optimizer='Adam',
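For reference, a minimal sketch of the wiring (layer sizes and parameter ranges are hypothetical; it assumes X_train/y_train exist with X_train already reshaped to (samples, 1, features) as above):

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV

def create_model(units=32, optimizer='Adam'):
    model = Sequential()
    # input_shape matches the (1, features) reshape done above
    model.add(LSTM(units, input_shape=(1, X_train.shape[2])))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=create_model, verbose=0)
param_dist = {'units': [16, 32, 64],
              'optimizer': ['Adam', 'RMSprop'],
              'batch_size': [32, 64],
              'epochs': [5, 10]}
search = RandomizedSearchCV(clf, param_dist, n_iter=5, cv=3)
search.fit(X_train, y_train)
print(search.best_params_)
```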

Optimize the kernel parameters of the RBF kernel for GPR in scikit-learn using internally supported optimizers

微笑、不失礼 submitted on 2019-12-03 21:41:20
The basic equation of the squared exponential (RBF) kernel is as follows:

k(x, x') = \sigma^2 \exp\left( -\frac{\| x - x' \|^2}{2 l^2} \right)

Here l is the length scale and sigma is the variance parameter. The length scale controls how far apart two points can be while still appearing similar, since it rescales the distance between x and x'; the variance parameter controls the overall amplitude of the function. I want to optimize/train these parameters (l and sigma) with my training data sets. My training data sets are in the following form:

X: 2-D Cartesian coordinates as the input data
y: radio signal strength (RSS) of a Wi-Fi device at the 2-D coordinate points as the observed output

According to
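In scikit-learn this optimization is built in: GaussianProcessRegressor maximizes the log-marginal likelihood over the kernel hyperparameters (with L-BFGS-B by default) during fit. A minimal sketch, assuming X holds the (n, 2) coordinates and y the RSS values:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# ConstantKernel plays the role of sigma^2, RBF supplies exp(-||x-x'||^2 / (2 l^2))
kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale=1.0,
                                   length_scale_bounds=(1e-2, 1e2))
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10,
                               normalize_y=True)
gpr.fit(X, y)
print(gpr.kernel_)                         # kernel with the fitted l and sigma
print(gpr.log_marginal_likelihood_value_)  # objective value at the optimum
```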

Pyspark - Get all parameters of models created with ParamGridBuilder

走远了吗. submitted on 2019-12-03 20:19:28
Question: I'm using PySpark 2.0 for a Kaggle competition. I'd like to know how a model (RandomForest) behaves depending on different parameters. ParamGridBuilder() allows you to specify different values for a single parameter, and then perform (I guess) a Cartesian product of the entire set of parameters. Assuming my DataFrame is already defined:

rdc = RandomForestClassifier()
pipeline = Pipeline(stages=STAGES + [rdc])
paramGrid = ParamGridBuilder().addGrid(rdc.maxDepth, [3, 10, 20]) \
    .addGrid(rdc
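To see how each combination performed, one option (a sketch; the evaluator choice and the DataFrame name df are assumptions) is to fit a CrossValidator over that grid and zip the parameter maps with the per-combination average metrics:

```python
from pyspark.ml.tuning import CrossValidator
from pyspark.ml.evaluation import BinaryClassificationEvaluator

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=paramGrid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)
cvModel = cv.fit(df)  # df: the labeled training DataFrame

# avgMetrics[i] is the cross-validated metric for parameter map i
for params, metric in zip(cv.getEstimatorParamMaps(), cvModel.avgMetrics):
    print({p.name: v for p, v in params.items()}, metric)
```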

Grid search the number of hidden layers with Keras

被刻印的时光 ゝ submitted on 2019-12-03 18:56:51
I am trying to optimize the hyperparameters of my NN using Keras and sklearn, wrapping the model with KerasClassifier (it's a classification problem). I am trying to optimize the number of hidden layers, but I can't figure out how to do it with Keras (specifically, I am wondering how to set up the create_model function so that the number of hidden layers can be tuned). Could anyone please help me? My code (just the important part):

## Import `Sequential` from `keras.models`
from keras.models import Sequential

# Import `Dense` from `keras.layers`
from keras.layers import Dense

def create_model(optimizer
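One common pattern (a sketch, not the asker's exact setup): make the number of hidden layers itself an argument of create_model and loop inside it; the names n_hidden and neurons are hypothetical.

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(n_hidden=1, neurons=16, optimizer='adam'):
    model = Sequential()
    # input_dim=20 is a placeholder; replace with your feature count
    model.add(Dense(neurons, activation='relu', input_dim=20))
    for _ in range(n_hidden):          # n_hidden extra hidden layers
        model.add(Dense(neurons, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=create_model, epochs=10, batch_size=32, verbose=0)
param_grid = {'n_hidden': [1, 2, 3], 'neurons': [16, 32]}
grid = GridSearchCV(clf, param_grid, cv=3)
# grid.fit(X_train, y_train)  # training data from the asker's setup
```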

How to elegantly pass sklearn's GridSearchCV's best parameters to another model?

感情迁移 submitted on 2019-12-03 05:52:49
I have found a set of best hyperparameters for my KNN estimator with grid search CV:

>>> knn_gridsearch_model.best_params_
{'algorithm': 'auto', 'metric': 'manhattan', 'n_neighbors': 3}

So far, so good. I want to train my final estimator with these newly found parameters. Is there a way to feed the above hyperparameter dict to it directly? I tried this:

>>> new_knn_model = KNeighborsClassifier(knn_gridsearch_model.best_params_)

but instead of the hoped-for result, new_knn_model just got the whole dict as the first parameter of the model and left the remaining ones at their defaults:

>>> knn_model
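The usual fix is plain Python keyword unpacking with **, which turns each dict entry into a named argument. Also note that when refit=True (the GridSearchCV default), knn_gridsearch_model.best_estimator_ is already a fitted estimator with these parameters.

```python
from sklearn.neighbors import KNeighborsClassifier

# ** unpacks the dict into keyword arguments instead of one positional argument
new_knn_model = KNeighborsClassifier(**knn_gridsearch_model.best_params_)
# new_knn_model.fit(X_train, y_train)  # training data from the asker's setup
```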

How to use grid search with fit_generator in Keras

百般思念 submitted on 2019-12-03 05:09:23
I want to grid search the parameters of a model that takes fit_generator as input in Keras. I found the code below on Stack Overflow and changed it.

1. I don't understand how to give the fit_generator or flow_from_directory to the fit function (last line in the code).
2. How can I add early stopping?

Thanks. My code:

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.wrappers.scikit_learn import KerasClassifier
from keras import backend as K
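One workaround, since keras.wrappers.scikit_learn.KerasClassifier calls model.fit() rather than fit_generator(), is to run the grid manually with sklearn's ParameterGrid. A hedged sketch (make_model, datagen, and the directory paths are assumptions standing in for the asker's code); the EarlyStopping callback addresses the second question:

```python
from sklearn.model_selection import ParameterGrid
from keras.callbacks import EarlyStopping

param_grid = {'batch_size': [32, 64], 'lr': [1e-3, 1e-4]}
early_stop = EarlyStopping(monitor='val_loss', patience=3)  # question 2

best_score, best_params = float('inf'), None
for params in ParameterGrid(param_grid):
    model = make_model(lr=params['lr'])  # hypothetical model-building function
    train_gen = datagen.flow_from_directory('train/',
                                            batch_size=params['batch_size'])
    val_gen = datagen.flow_from_directory('val/',
                                          batch_size=params['batch_size'])
    history = model.fit_generator(train_gen, epochs=20,
                                  validation_data=val_gen,
                                  callbacks=[early_stop])
    score = min(history.history['val_loss'])
    if score < best_score:
        best_score, best_params = score, params
print(best_params, best_score)
```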

Hyperparameter Tuning of a TensorFlow Model

有些话、适合烂在心里 submitted on 2019-12-03 03:54:29
Question: I've used scikit-learn's GridSearchCV before to optimize the hyperparameters of my models, but I'm wondering whether a similar tool exists to optimize hyperparameters for TensorFlow (for instance the number of epochs, learning rate, sliding window size, etc.). And if not, how can I implement a snippet that effectively runs all the different combinations?

Answer 1: Another viable (and documented) option for grid search with TensorFlow is Ray Tune. It's a scalable framework for hyperparameter tuning,
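A minimal sketch of what that looks like with Ray Tune's function API (assuming a recent Ray version; train_model and evaluate are hypothetical stand-ins for the asker's TensorFlow code):

```python
from ray import tune

def trainable(config):
    # train_model / evaluate stand in for the user's TF training and eval code
    model = train_model(lr=config['lr'],
                        window=config['window'],
                        epochs=config['epochs'])
    tune.report(accuracy=evaluate(model))

analysis = tune.run(
    trainable,
    config={
        'lr': tune.grid_search([1e-2, 1e-3, 1e-4]),
        'window': tune.grid_search([5, 10, 20]),
        'epochs': tune.grid_search([10, 30]),
    })
print(analysis.get_best_config(metric='accuracy', mode='max'))
```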