Higher loss penalty for true non-zero predictions
I am building a deep regression network (CNN) to predict a (1000,1) target vector from images (7,11). The target usually consists of about 90% zeros and only 10% non-zero values. The distribution of (non-)zero values in the targets varies from sample to sample (i.e. there is no global class imbalance). Using mean squared error loss, this led to the network predicting only zeros, which I don't find surprising. My best guess is to write a custom loss function that penalizes errors on the non-zero target values more heavily than errors on the zeros.
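For illustration, a minimal sketch of such a weighted MSE, assuming a Keras/TensorFlow setup (the framework and the `nonzero_weight` hyperparameter are my assumptions, not part of the original question), could look like this:

```python
import tensorflow as tf

def weighted_mse(nonzero_weight=10.0):
    """MSE that up-weights errors at positions where the target is non-zero.

    nonzero_weight is a hypothetical hyperparameter: weight 1.0 is applied
    where y_true == 0, and nonzero_weight where y_true != 0.
    """
    def loss(y_true, y_pred):
        weights = tf.where(tf.equal(y_true, 0.0),
                           tf.ones_like(y_true),
                           tf.fill(tf.shape(y_true), nonzero_weight))
        return tf.reduce_mean(weights * tf.square(y_true - y_pred))
    return loss

# Example usage (assuming a compiled Keras model):
# model.compile(optimizer="adam", loss=weighted_mse(nonzero_weight=10.0))
```

The weight on the non-zero positions would then be something to tune: too small and the network still collapses to predicting zeros, too large and the zero positions are effectively ignored.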