regularized

Python pandas time series interpolation and regularization

陌路散爱 submitted on 2019-12-02 22:56:03
I am using Python Pandas for the first time. I have 5-min lag traffic data in csv format:

    ...
    2015-01-04 08:29:05,271238
    2015-01-04 08:34:05,329285
    2015-01-04 08:39:05,-1
    2015-01-04 08:44:05,260260
    2015-01-04 08:49:05,263711
    ...

There are several issues:

- for some timestamps the data is missing (the value is -1)
- some entries are missing entirely (sometimes 2/3 consecutive hours)
- the frequency of the observations is not exactly 5 minutes; it loses some seconds once in a while

I would like to obtain a regular time series, i.e. one with entries every (exactly) 5 minutes and no missing values. I have successfully interpolated ...
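A minimal pandas sketch of one way to regularize such a series (the file name "traffic.csv" and the column names are assumptions, not from the post): treat -1 as missing, resample onto an exact 5-minute grid, then interpolate the gaps.

    import numpy as np
    import pandas as pd

    # read the two unnamed columns: timestamp and traffic count
    df = pd.read_csv("traffic.csv", header=None, names=["timestamp", "count"],
                     parse_dates=["timestamp"], index_col="timestamp")

    # treat the -1 sentinel as missing data
    ts = df["count"].replace(-1, np.nan)

    # snap the irregular timestamps onto an exact 5-minute grid,
    # then fill the gaps (including multi-hour holes) by time-based interpolation
    regular = ts.resample("5min").mean().interpolate(method="time")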

Is the Keras implementation of dropout correct?

僤鯓⒐⒋嵵緔 submitted on 2019-12-01 03:29:36
The Keras implementation of dropout references this paper. The following excerpt is from that paper:

    The idea is to use a single neural net at test time without dropout. The weights of this network are scaled-down versions of the trained weights. If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time as shown in Figure 2.

The Keras documentation mentions that dropout is only used at train time, and the following line from the Dropout implementation

    x = K.in_train_phase(K.dropout(x, level=self.p), x)

seems to indicate that ...
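For context, K.dropout follows the "inverted dropout" convention: the units that survive are scaled up by 1/(1-p) during training, so no scaling is needed at test time, which is mathematically equivalent to the paper's "multiply the weights by p at test time" scheme. The small numpy sketch below (not the Keras source, just an illustration) shows why the expected activation is unchanged:

    import numpy as np

    def inverted_dropout(x, rate, rng):
        """Drop units with probability `rate`, scaling the survivors by 1/(1-rate)."""
        keep_prob = 1.0 - rate
        mask = rng.random(x.shape) < keep_prob
        return x * mask / keep_prob

    rng = np.random.default_rng(0)
    x = np.ones((1000, 100))
    y = inverted_dropout(x, rate=0.5, rng=rng)
    print(y.mean())  # ~1.0: the expectation is preserved, so test time needs no rescaling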

Improving a badly conditioned matrix

风流意气都作罢 submitted on 2019-11-30 03:51:27
I have a badly conditioned matrix, whose rcond() is close to zero, and therefore the inverse of that matrix does not come out correct. I have tried using pinv() but that does not solve the problem. This is how I am taking the inverse:

    X = (A)\(b);

I looked for a solution to this problem and found this link (last solution) for improving the matrix. The solution there suggests using:

    A_new = A_old + c*eye(size(A_old));

where c > 0. So far, employing this technique works in making the matrix A better conditioned, and the resultant solution looks better. However, I investigated ...
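The original question is MATLAB, but the same diagonal-shift idea is easy to check with numpy; the sketch below uses synthetic data, and the value of c is an arbitrary assumption that trades a small bias for numerical stability:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.random((5, 5))
    M[:, 4] = M[:, 0] + 1e-10 * rng.random(5)   # nearly dependent columns
    A = M.T @ M                                  # symmetric and nearly singular
    b = rng.random(5)

    print("cond(A):", np.linalg.cond(A))         # huge, so rcond is near zero

    c = 1e-6
    A_new = A + c * np.eye(A.shape[0])           # A_new = A_old + c*eye(size(A_old)), c > 0
    x = np.linalg.solve(A_new, b)

    print("cond(A_new):", np.linalg.cond(A_new)) # much smaller than cond(A)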

TensorFlow - regularization with L2 loss: how to apply it to all weights, not just the last one?

社会主义新天地 submitted on 2019-11-29 18:56:36
I am playing with an ANN which is part of the Udacity DeepLearning course. I have an assignment which involves introducing regularization to a network with one hidden ReLU layer, using L2 loss. I wonder how to properly introduce it so that ALL weights are penalized, not only the weights of the output layer. Code for the network without regularization is at the bottom of the post (code to actually run the training is out of the scope of the question). The obvious way of introducing the L2 penalty is to replace the loss calculation with something like this (if beta is 0.01):

    loss = tf.reduce_mean( tf.nn.softmax_cross ...
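A minimal sketch of what penalizing all weight matrices looks like, written in TensorFlow 1.x style to match the era of the course; every name below (w1, b1, w2, b2, the layer sizes, beta) is illustrative rather than taken from the post:

    import tensorflow as tf  # TensorFlow 1.x API assumed

    num_features, hidden_size, num_labels = 784, 1024, 10
    beta = 0.01

    tf_x = tf.placeholder(tf.float32, shape=[None, num_features])
    tf_y = tf.placeholder(tf.float32, shape=[None, num_labels])

    # hidden ReLU layer and output layer
    w1 = tf.Variable(tf.truncated_normal([num_features, hidden_size], stddev=0.1))
    b1 = tf.Variable(tf.zeros([hidden_size]))
    w2 = tf.Variable(tf.truncated_normal([hidden_size, num_labels], stddev=0.1))
    b2 = tf.Variable(tf.zeros([num_labels]))

    hidden = tf.nn.relu(tf.matmul(tf_x, w1) + b1)
    logits = tf.matmul(hidden, w2) + b2

    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_y, logits=logits))

    # penalize BOTH weight matrices (biases are conventionally left unregularized)
    loss = cross_entropy + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))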