"Logit" is an overloaded term that can mean several different things:
In math, the logit is a function that maps probabilities (0, 1) to the real line (-inf, +inf).
A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities below 0.5; positive logits to probabilities above 0.5.
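For intuition, here is a minimal Python sketch of the logit (log-odds) function; the `logit` helper is written out here for illustration rather than taken from any particular library:

```python
import math

def logit(p: float) -> float:
    """Map a probability in (0, 1) to its log-odds, a real number."""
    return math.log(p / (1 - p))

print(logit(0.5))   # 0.0    -> probability 0.5 maps to logit 0
print(logit(0.25))  # -1.099 -> below 0.5 gives a negative logit
print(logit(0.75))  # +1.099 -> above 0.5 gives a positive logit
```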
In ML, it can refer to the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, the logits typically become the input to the softmax function, which then produces a vector of (normalized) probabilities with one value for each possible class.
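A rough sketch of that pipeline, assuming NumPy (the logit values here are made up):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # raw, unnormalized scores, one per class

# Softmax: exponentiate (shifted by the max for numerical stability), then normalize.
exp_shifted = np.exp(logits - logits.max())
probs = exp_shifted / exp_shifted.sum()

print(probs)        # ~[0.659 0.242 0.099] -- one probability per class
print(probs.sum())  # 1.0
```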
"Logits" also sometimes refers to the element-wise inverse of the sigmoid function.
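SciPy, for example, exposes both functions: `scipy.special.expit` is the element-wise sigmoid and `scipy.special.logit` is its element-wise inverse:

```python
import numpy as np
from scipy.special import expit, logit  # sigmoid and its inverse

x = np.array([-2.0, 0.0, 1.5])
print(expit(x))         # element-wise sigmoid: [0.119 0.5 0.818]
print(logit(expit(x)))  # recovers x: logit undoes sigmoid element-wise
```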