loss-function

Using SGD without using sklearn (LogLoss increasing with every epoch)

荒凉一梦 Submitted on 2020-08-08 05:16:08
Question:

    def train(X_train, y_train, X_test, y_test, epochs, alpha, eta0):
        w, b = initialize_weights(X_train[0])
        loss_test = []
        N = len(X_train)
        for i in range(0, epochs):
            print(i)
            for j in range(N - 1):
                grad_dw = gradient_dw(X_train[j], y_train[j], w, b, alpha, N)
                grad_db = gradient_db(X_train[j], y_train[j], w, b)
                w = np.array(w) + (alpha * (np.array(grad_dw)))
                b = b + (alpha * (grad_db))
            predict2 = []
            for m in range(len(y_test)):
                z = np.dot(w[0], X_test[m]) + b
                if sigmoid(z) == 0:  # sigmoid(w,x,b) returns 1/(1+exp(-(dot(x,w)+b)))
                    predict2
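The snippet above is cut off mid-function, but the symptom in the title (log loss rising every epoch) is consistent with the update step: the code adds `alpha` (the regularization strength) times the gradient instead of subtracting `eta0` (the learning rate) times it. A minimal self-contained sketch of per-sample SGD for L2-regularized logistic regression, using the standard gradients rather than the asker's unseen helpers (the names `alpha` and `eta0` mirror the question; everything else is an assumption for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs, alpha, eta0):
    """Per-sample SGD for L2-regularized logistic regression.
    alpha is the regularization strength, eta0 the learning rate."""
    N, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for j in range(N):
            p = sigmoid(np.dot(w, X[j]) + b)
            # Gradients of per-sample log loss plus (alpha/2N)*||w||^2.
            grad_dw = (p - y[j]) * X[j] + (alpha / N) * w
            grad_db = p - y[j]
            w -= eta0 * grad_dw  # descend: subtract eta0 * gradient
            b -= eta0 * grad_db
    return w, b

def log_loss(X, y, w, b, eps=1e-12):
    p = np.clip(sigmoid(X @ w + b), eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

With the sign and step size corrected this way, log loss on reasonable data should fall, not rise, as epochs accumulate.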

Use TensorFlow loss Global Objectives (recall_at_precision_loss) with Keras (not metrics)

☆樱花仙子☆ Submitted on 2020-07-21 03:31:05
Question: Background: I have a multi-label classification problem with 5 labels (e.g. [1 0 1 1 0]). Therefore, I want my model to improve on metrics such as recall at a fixed precision, precision-recall AUC, or ROC AUC. It doesn't make sense to use a loss function (e.g. binary_crossentropy) that is not directly related to the performance measure I want to optimize. Therefore, I want to use TensorFlow's global_objectives.recall_at_precision_loss() or similar as the loss function. Relevant GitHub: https://github.com
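For reference, the target quantity itself, recall at a fixed precision, can be computed directly by sweeping decision thresholds (a plain-NumPy sketch of the metric for a single label; this is an illustration, not global_objectives' differentiable surrogate, which replaces the non-differentiable counts with a hinge-based bound):

```python
import numpy as np

def recall_at_precision(y_true, scores, target_precision):
    """Best recall achievable at precision >= target_precision,
    scanning every score value as a decision threshold."""
    best_recall = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        if tp + fp == 0:
            continue  # no positive predictions at this threshold
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        if precision >= target_precision:
            best_recall = max(best_recall, recall)
    return best_recall
```

Because this quantity is piecewise constant in the model's scores, it cannot be used as a loss directly; that is exactly the gap the global_objectives surrogates are meant to fill.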

Channel-wise CrossEntropyLoss for image segmentation in PyTorch

倾然丶 夕夏残阳落幕 Submitted on 2020-07-05 12:11:32
Question: I am doing an image segmentation task. There are 7 classes in total, so the final output is a tensor of shape [batch, 7, height, width], which is a softmax output. Intuitively I wanted to use CrossEntropy loss, but the PyTorch implementation doesn't work on a channel-wise one-hot encoded vector, so I was planning to write a function of my own. With some help from Stack Overflow, my code so far looks like this:

    from torch.autograd import Variable
    import torch
    import torch.nn.functional as F

    def cross
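The reduction the asker is hand-rolling is just the mean, over all pixels, of the negative log-probability of each pixel's true class. A NumPy sketch of that math over a [batch, C, height, width] softmax output and a matching one-hot target (illustrative shapes and names, not the asker's code):

```python
import numpy as np

def channelwise_ce(probs, onehot, eps=1e-12):
    """probs: softmax output, shape [batch, C, H, W], sums to 1 over axis 1.
    onehot: same shape, exactly one 1 per pixel along the class axis.
    Returns the mean negative log-likelihood over all pixels."""
    nll = -np.sum(onehot * np.log(probs + eps), axis=1)  # -> [batch, H, W]
    return nll.mean()
```

For what it's worth, torch.nn.CrossEntropyLoss already supports this case: it accepts raw logits of shape [N, C, H, W] together with integer class-index targets of shape [N, H, W], so converting the targets to one-hot (and applying softmax beforehand) is usually unnecessary.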