I’ve been taught binary logistic regression using the sigmoid function, and multi-class logistic regression using a softmax. However, I have never quite understood how the two are related. In this post, I show exactly how multi-class logistic regression generalizes the binary case.

Background

Sigmoid

For a scalar real number $z$, the sigmoid function (a.k.a. standard logistic function) is defined as

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

It outputs values in the open range $(0, 1)$, endpoints not included. This property makes it very useful for interpreting a real-valued score as a probability.

The derivative of the sigmoid function is $\sigma'(z) = \sigma(z)\left(1 - \sigma(z)\right)$.
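For concreteness, here is a tiny NumPy sketch of the sigmoid and its derivative (my own illustration, not code from the original post):

```python
import numpy as np

def sigmoid(z):
    """Standard logistic function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    """Derivative of the sigmoid: sigma(z) * (1 - sigma(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))       # 0.5
print(sigmoid_grad(0.0))  # 0.25
```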

Softmax

For a vector $z \in \mathbb{R}^K$, the softmax function is defined as

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, \dots, K$$

Each element of $\mathrm{softmax}(z)$ is squashed to the range $(0, 1)$, and the sum of the elements is 1. Thus, the softmax function is useful for converting an arbitrary vector of real numbers into a discrete probability distribution.

Numerical Stability

Let $m$ be some scalar. Then,

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}} = \frac{e^{z_i - m}}{\sum_{j} e^{z_j - m}}$$

This property of the softmax function is often exploited to improve numerical stability. If we choose $m = \max_j z_j$, then $z_i - m \le 0$ for every $i$, so $e^{z_i - m} \le 1$. We also have $e^{z_i - m} > 0$ since the exponential function is always positive. Thus, we can always constrain each term of the denominator to the range $(0, 1]$.

Doing so helps avoid underflow when the $z_i$ are all very negative. In that case, each $e^{z_i}$ may underflow to 0, leading to division by 0. When we shift the $z_i$ by $m$, at least one of the terms in the denominator is $e^0 = 1$, thus avoiding the division-by-zero error.

Shifting the $z_i$ by $m$ also helps avoid overflow, since the exponential function grows very fast. For example, using 16-bit (half-precision) floating point numbers, $e^{12} \approx 1.6 \times 10^5$ is already greater than the maximum representable number ($65504$) and is rounded to +inf instead. When we shift the $z_i$ by $m$, we have $e^{z_i - m} \le 1$, so no single term will overflow. For any practical size of the vector $z$, the sum in the denominator will not overflow either.
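A minimal NumPy sketch of the max-shift trick described above (an illustration of mine, not code from the post):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift by max(z) before exponentiating."""
    shifted = z - np.max(z)      # every entry is now <= 0
    exps = np.exp(shifted)       # each term lies in (0, 1]
    return exps / np.sum(exps)   # at least one denominator term equals 1

# A naive softmax would overflow on these scores; the shifted version is fine.
z = np.array([1000.0, 1001.0, 1002.0])
print(softmax(z))  # [0.09003057 0.24472847 0.66524096]
```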

Cross-Entropy

For two probability distributions $p$ and $q$, the cross-entropy function is a measure of how different they are. It is defined as

$$H(p, q) = -\mathbb{E}_{x \sim p}\left[\log q(x)\right]$$

If $p$ and $q$ are discrete, then we have $H(p, q) = -\sum_{x} p(x) \log q(x)$.

Some key properties:

  1. For discrete distributions, $H(p, q) \ge 0$, since each $-\log q(x) \ge 0$ is weighted by $p(x) \ge 0$.
  2. $H(p, q) \ge H(p, p) = H(p)$ (the entropy of $p$), with equality if and only if $p = q$. In particular, for a fixed $p$, the cross-entropy is minimized when $q = p$.
  3. Cross-entropy is not symmetric: in general, $H(p, q) \ne H(q, p)$.

When used as a loss function, we set $p = y$ (the labels) and $q = \hat{y}$ (the predictions). In the classification setting where each example belongs to exactly 1 class, $y$ is a one-hot vector with 1 at the index of the true class, and $\hat{y}$ is a vector representing a discrete probability distribution over the possible classes.
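As a small illustration (mine, with an epsilon guard as an implementation choice), the discrete cross-entropy with a one-hot label looks like:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Discrete cross-entropy H(p, q) = -sum_x p(x) * log q(x)."""
    return -np.sum(p * np.log(q + eps))  # eps guards against log(0)

y = np.array([0.0, 1.0, 0.0])      # one-hot label: the true class is index 1
y_hat = np.array([0.2, 0.7, 0.1])  # predicted class distribution
print(cross_entropy(y, y_hat))     # -log(0.7) ~= 0.357
```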

Convex Functions

A function $f$ is convex if for all $x_1, x_2$ in its domain, and all $t$ with $0 \le t \le 1$, we have

$$f\left(t x_1 + (1 - t) x_2\right) \le t f(x_1) + (1 - t) f(x_2)$$

A function $f$ is strictly convex if strict inequality holds whenever $x_1 \ne x_2$ and $0 < t < 1$.

For this post, assume the following true statements about convex functions:

  1. Any local minimum of a convex function is also a global minimum, so gradient descent with a small enough step size converges to a globally optimal value.
  2. A strictly convex function has at most one global minimum.
  3. A non-negative weighted sum of convex functions is convex, and adding a strictly convex function (such as an $L_2$ penalty) to a convex function gives a strictly convex function.

Binary Logistic Regression

Data: $n$ pairs $(x^{(i)}, y^{(i)})$, where each $x^{(i)}$ is a feature vector of length $d$ and the label $y^{(i)}$ is either 0 or 1

Goal: predict $y$ for a given $x$

Model: For an example $x$, we calculate the score as $z = w^\top x + b$, where the vector $w$ and scalar $b$ are parameters to be learned from data. If we just want to predict the binary value of $y$, then we would set a threshold (typically 0) on the score: predict $\hat{y} = 1$ if $z > 0$ and $\hat{y} = 0$ otherwise. While 0 is the most commonly used threshold, we could actually choose any threshold, since we could always change the bias $b$ to compensate accordingly.

There are two issues with the model which uses a simple threshold on the score. First, it is difficult to define a differentiable loss function when both $y$ and $\hat{y}$ are discrete values 0 or 1. Second, we frequently want a probabilistic interpretation for the score. Thus, we introduce the sigmoid function, which maps all scores into the range $(0, 1)$:

$$\hat{y} = P(y = 1 \mid x) = \sigma(z) = \frac{1}{1 + e^{-(w^\top x + b)}}$$

For a given $x$, if $\sigma(z) > 0.5$, then we predict $\hat{y} = 1$; otherwise we predict $\hat{y} = 0$. Note that this setup is identical to setting a 0-threshold on the score, since $\sigma(z) > 0.5$ exactly when $z > 0$.

In the equation above, if we solve for the score $z$, we get $z = \log\frac{\hat{y}}{1 - \hat{y}}$, so we can interpret the score as the log-odds of $y = 1$ (a.k.a. the logit).
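For completeness, the short derivation (my own addition) is

$$\hat{y} = \frac{1}{1 + e^{-z}} \;\Longrightarrow\; e^{-z} = \frac{1 - \hat{y}}{\hat{y}} \;\Longrightarrow\; z = \log\frac{\hat{y}}{1 - \hat{y}} = \log\frac{P(y = 1 \mid x)}{P(y = 0 \mid x)}$$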

Loss function: For a single example with score $z$ and label $y$, the logistic loss function is

$$\begin{aligned} L(z, y) &= -y \log \sigma(z) - (1 - y) \log\left(1 - \sigma(z)\right) \\ &= -\log P(Y = y \mid x) \end{aligned}$$

In the 2nd line of the equation above, it is clear that in the probabilistic interpretation of our model, this loss function is exactly the negative log probability of a single example having true label $y$. Thus, minimizing the sum of the loss over our training examples is equivalent to maximizing the log likelihood. We can see this as follows:

$$\sum_{i=1}^{n} L(z^{(i)}, y^{(i)}) = -\sum_{i=1}^{n} \log P\left(Y = y^{(i)} \mid x^{(i)}\right) = -\log \prod_{i=1}^{n} P\left(Y = y^{(i)} \mid x^{(i)}\right)$$

so minimizing the total loss is the same as maximizing the log of the likelihood $\prod_{i=1}^{n} P\left(Y = y^{(i)} \mid x^{(i)}\right)$.

We can learn the model parameters $w$ and $b$ by performing gradient descent on the loss function with respect to these parameters. The logistic loss function is convex (though not necessarily strictly convex) in the parameters $w$ and $b$, so gradient descent is guaranteed to converge to a globally optimal value with a small enough learning rate. If we regularize the parameters by adding $\frac{\lambda}{2}\left(\lVert w \rVert_2^2 + b^2\right)$ to the loss function for some regularization constant $\lambda > 0$, then the loss function is strictly convex and has a unique global minimum.
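Here is a minimal NumPy sketch of learning $w$ and $b$ by gradient descent on the average logistic loss, with an optional $L_2$ penalty; the synthetic data, learning rate, and step count are my own choices for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=1000, lam=0.0):
    """Gradient descent on the average logistic loss (+ optional L2 penalty)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        y_hat = sigmoid(X @ w + b)            # predicted P(y = 1 | x)
        grad_z = (y_hat - y) / n              # dL/dz for each example
        w -= lr * (X.T @ grad_z + lam * w)    # gradient step on w
        b -= lr * (np.sum(grad_z) + lam * b)  # gradient step on b
    return w, b

# Toy data: the label is 1 iff the sum of the features is positive.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X.sum(axis=1) > 0).astype(float)
w, b = fit_logistic(X, y)
print(np.mean((sigmoid(X @ w + b) > 0.5) == y))  # training accuracy
```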

Notation: $y \in \{0, 1\}$ vs. $y \in \{-1, +1\}$

We’ve assumed that the binary label is $y \in \{0, 1\}$. However, it is also common to see $y \in \{-1, +1\}$. In this case, only the prediction and loss functions change their form, but the results are identical.

Generalization: Multi-Label Classification

So far we have examined the situation where each training example either belongs to a particular class ($y = 1$) or it does not ($y = 0$). However, we can generalize this notion to the case where each example can belong to many classes simultaneously, with $y \in \{0, 1\}^K$, where $K$ is the number of classes. Concretely, if the inputs $x$ are images and we have 3 classes ("dog", "car", "tree"), then $y = [0, 1, 1]$ could indicate that a particular image contains a car and a tree.

General model: For an example $x$, we calculate the vector of class scores as $z = Wx + b$, where the matrix $W$ and vector $b$ are parameters to be learned from data. The probability for each class is given by the sigmoid of each class score: $\hat{y}_k = \sigma(z_k)$. This is basically a vectorized implementation of $K$ separate binary logistic regression models, one for each class.

One downside of training a separate binary logistic regression model for each class is that it assumes the probabilities for each class are independent. For example, suppose we have a dataset of images of objects, and each image is labeled for two binary attributes: “is/isn’t expensive” and “is/isn’t a car”. If all cars are expensive, then the model should be able to learn to predict “is expensive” for every image that “is a car”. However, because we have separate classifiers for each attribute, the classifiers may output “isn’t expensive” for an image that “is a car.”

TensorFlow implementation

In TensorFlow (as of version r1.8), there are two built-in functions for the logistic loss function. TensorFlow assumes that the binary label is $y \in \{0, 1\}$.

  1. tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=z) (API documentation)
  2. tf.losses.sigmoid_cross_entropy(multi_class_labels=y, logits=z) (API documentation)

These operations compute exactly the logistic loss function defined above. In the common case, z and y have the same shape, e.g. [batch_size, num_classes] with one binary label per class for each example.

The difference between the two TensorFlow functions is that the tf.losses variant assumes that the first dimension of y and z is batch_size and performs a reduction over the batch of examples after computing their individual losses. The default reduction (Reduction.SUM_BY_NONZERO_WEIGHTS, effectively a mean when all weights are 1) can be changed via the reduction parameter.
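A rough usage sketch of the two variants (TensorFlow 1.x; the shapes and placeholder names are my own choices):

```python
import tensorflow as tf

# A batch of 32 examples, each with 3 independent binary labels.
z = tf.placeholder(tf.float32, shape=[32, 3])  # scores (logits)
y = tf.placeholder(tf.float32, shape=[32, 3])  # labels in {0, 1}

# tf.nn variant: element-wise losses of shape [32, 3]; reduce them yourself.
per_element = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=z)
loss_nn = tf.reduce_mean(per_element)

# tf.losses variant: computes the losses and reduces over the batch internally.
loss_losses = tf.losses.sigmoid_cross_entropy(multi_class_labels=y, logits=z)
```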

Multinomial Logistic Regression (via Cross-Entropy)

The multi-class setting is similar to the binary case, except that the label $y$ is now an integer in $\{1, \dots, K\}$, where $K$ is the number of classes. As before, we use a score function. However, now we calculate scores for all $K$ classes, instead of just for the positive class.

Model: For an example $x$, the class scores are given by the vector $z = Wx + b$, where $W$ is a $K \times d$ matrix of weights and $b$ is a length-$K$ vector of biases. If we just want to predict the class label $\hat{y}$, then we simply choose the class with the highest score: $\hat{y} = \arg\max_k z_k$.

As in the binary case, however, we frequently seek a discrete probability distribution over the possible classes. We will abuse notation by letting $y$ and $\hat{y}$ be vectors denoting probability distributions. We define the label $y$ as a one-hot vector equal to 1 for the correct class $c$ and 0 everywhere else. Then we use the softmax function to get our predicted probability distribution $\hat{y}$ from the class scores, where $\hat{y}_k$ is our model’s estimate for $P(y = k \mid x)$:

$$\hat{y}_k = P(y = k \mid x) = \mathrm{softmax}(z)_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}$$

Loss function: Now that we are considering two discrete probability distributions $y$ and $\hat{y}$, a natural choice for the loss function is the cross-entropy loss function. The loss for a training example with predicted class distribution $\hat{y}$ and correct class $c$ is

$$L(\hat{y}, y) = H(y, \hat{y}) = -\sum_{k=1}^{K} y_k \log \hat{y}_k = -\log \hat{y}_c$$

As in the binary case, the loss value is exactly the negative log probability of a single example having true class label $c$. Thus, minimizing the sum of the loss over our training examples is equivalent to maximizing the log likelihood.

We can learn the model parameters $W$ and $b$ by performing gradient descent on the loss function with respect to these parameters. As in the binary logistic regression case, the loss function is convex (but not strictly convex, due to over-parameterization; see below), so gradient descent will converge to a global minimum with a small enough step size. With $L_2$-regularization on both $W$ and $b$, the loss function becomes strictly convex.
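Here is a minimal NumPy sketch of softmax regression trained by gradient descent on the average cross-entropy loss; the toy data and hyperparameters are my own choices:

```python
import numpy as np

def softmax(Z):
    """Row-wise numerically stable softmax."""
    shifted = Z - Z.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

def fit_softmax(X, y, num_classes, lr=0.5, steps=1000):
    """Gradient descent on the average cross-entropy loss."""
    n, d = X.shape
    W, b = np.zeros((num_classes, d)), np.zeros(num_classes)
    Y = np.eye(num_classes)[y]        # one-hot labels, shape [n, K]
    for _ in range(steps):
        probs = softmax(X @ W.T + b)  # predicted distributions, shape [n, K]
        grad_Z = (probs - Y) / n      # dL/dz, shape [n, K]
        W -= lr * (grad_Z.T @ X)      # dL/dW, shape [K, d]
        b -= lr * grad_Z.sum(axis=0)  # dL/db, shape [K]
    return W, b

# Toy 3-class data: the class is the index of the largest feature.
rng = np.random.RandomState(0)
X = rng.randn(300, 3)
y = X.argmax(axis=1)
W, b = fit_softmax(X, y, num_classes=3)
print(np.mean((X @ W.T + b).argmax(axis=1) == y))  # training accuracy
```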

Over-parameterization

The softmax model is actually over-parameterized, meaning that for any model we fit to the data, there are multiple parameter settings that give rise to exactly the same function mapping inputs to predictions. If we add a constant vector $u$ to every weight vector $w_k$ (the rows of $W$) and a constant $c$ to every bias $b_k$, the predicted probabilities are identical:

$$\frac{e^{(w_k + u)^\top x + (b_k + c)}}{\sum_{j} e^{(w_j + u)^\top x + (b_j + c)}} = \frac{e^{u^\top x + c} \, e^{w_k^\top x + b_k}}{e^{u^\top x + c} \sum_{j} e^{w_j^\top x + b_j}} = \frac{e^{w_k^\top x + b_k}}{\sum_{j} e^{w_j^\top x + b_j}}$$

Thus, if the loss function is minimized by some setting of the parameters $(W, b)$, then it is also minimized by the parameters $(w_k + u, b_k + c)$ for any vector $u$ and any scalar $c$. There is no unique set of weights that minimizes the loss function. Even so, the loss function is still convex (though clearly not strictly convex), so gradient descent will still find a global minimum (source).

In particular, we could always choose $u = -w_K$ and $c = -b_K$, so that the last class has score $z_K = 0$ for every example $x$. In other words, we could actually have weights and biases for just the first $K - 1$ classes.
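A quick numerical check of this invariance (my own illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.RandomState(0)
K, d = 3, 4
W, b, x = rng.randn(K, d), rng.randn(K), rng.randn(d)
u, c = rng.randn(d), rng.randn()  # arbitrary constant vector and scalar

W2 = W + u  # add the same vector u to every row of W
b2 = b + c  # add the same constant c to every bias

# Every score is shifted by the same amount (u.x + c), so the softmax is unchanged.
print(np.allclose(softmax(W @ x + b), softmax(W2 @ x + b2)))  # True
```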

The binary case

To show that multinomial logistic regression is a generalization of binary logistic regression, we will consider the case where there are 2 classes (i.e. $K = 2$). In this case, we have predictions

$$\hat{y}_1 = \frac{e^{z_1}}{e^{z_1} + e^{z_2}}, \qquad \hat{y}_2 = \frac{e^{z_2}}{e^{z_1} + e^{z_2}}$$

Suppose our model has learned weight vectors $w_1, w_2$ (the rows of $W$) and biases $b_1, b_2$. Taking advantage of the over-parameterization of our model, we know that the predictions are equivalent if we subtract some constant vector $u$ from the weights and a scalar $c$ from the biases. Choosing $u = w_2$ and $c = b_2$, we get

$$\hat{y}_1 = \frac{e^{(w_1 - w_2)^\top x + (b_1 - b_2)}}{e^{(w_1 - w_2)^\top x + (b_1 - b_2)} + e^{0}} = \frac{1}{1 + e^{-\left(w'^\top x + b'\right)}} = \sigma\left(w'^\top x + b'\right)$$

where $w' = w_1 - w_2$ and $b' = b_1 - b_2$. We see that the learned score $w'^\top x + b'$ has the same form as the binary logistic regression score.

Likewise, the cross-entropy loss with two classes, where the correct class is $c$ (so $y_c = 1$), becomes

$$H(y, \hat{y}) = -\sum_{k=1}^{2} y_k \log \hat{y}_k = -y_1 \log \hat{y}_1 - (1 - y_1) \log\left(1 - \hat{y}_1\right)$$

(using $y_2 = 1 - y_1$ and $\hat{y}_2 = 1 - \hat{y}_1$),

which is identical to the logistic regression version.
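A quick numerical check that the two-class softmax matches the sigmoid of the score difference (my own illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
z1, z2 = rng.randn(), rng.randn()  # the two class scores

softmax_p1 = np.exp(z1) / (np.exp(z1) + np.exp(z2))
sigmoid_p1 = 1.0 / (1.0 + np.exp(-(z1 - z2)))  # sigmoid of the score difference
print(np.isclose(softmax_p1, sigmoid_p1))      # True
```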

TensorFlow implementation

In TensorFlow (as of version r1.8), there are several built-in functions for the cross-entropy loss.

  1. tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=z). This operation computes exactly the loss function defined above, where z contains the scores and y holds the one-hot labels. Both z and y should have shape [batch_size, num_classes]. The result is one loss value per example (shape [batch_size]), so you have to run tf.reduce_mean() or tf.reduce_sum() to get a single scalar loss for the batch. (API documentation)
  2. tf.nn.sparse_softmax_cross_entropy_with_logits. This is similar to the previous function, except it takes labels y as the integer for the correct class, without converting it to a one-hot label. (API documentation)
  3. tf.losses.softmax_cross_entropy(onehot_labels=y, logits=z) and tf.losses.sparse_softmax_cross_entropy(labels=y, logits=z). Both of these are similar to the tf.nn functions above, but they combine the loss calculation and the reduction over a batch of examples; see the sketch after this list. The default reduction (Reduction.SUM_BY_NONZERO_WEIGHTS, effectively a mean when all weights are 1) can be changed via the reduction parameter. (API documentation)
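A rough usage sketch of the dense and sparse variants (TensorFlow 1.x; the shapes and names are my own choices):

```python
import tensorflow as tf

batch_size, num_classes = 32, 5
z = tf.placeholder(tf.float32, shape=[batch_size, num_classes])         # class scores
y_onehot = tf.placeholder(tf.float32, shape=[batch_size, num_classes])  # one-hot labels
y_sparse = tf.placeholder(tf.int32, shape=[batch_size])                 # class indices

# tf.nn variants return one loss per example (shape [batch_size]); reduce yourself.
loss_dense = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_onehot, logits=z))
loss_sparse = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_sparse, logits=z))

# tf.losses variants handle the reduction over the batch internally.
loss_dense2 = tf.losses.softmax_cross_entropy(onehot_labels=y_onehot, logits=z)
loss_sparse2 = tf.losses.sparse_softmax_cross_entropy(labels=y_sparse, logits=z)
```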

Multi-class Logistic Regression: one-vs-rest and one-vs-one

Given a binary classification algorithm (such as binary logistic regression or a binary SVM classifier), there are two common approaches for using it in multi-class classification: one-vs-rest (also known as one-vs-all) and one-vs-one. Each has its strengths and weaknesses, and there is no clear “best” multi-class classification strategy; it depends on the dataset.

In one-vs-rest, we train $K$ separate binary classification models. Each classifier $f_k$, for $k \in \{1, \dots, K\}$, is trained to determine whether or not an example belongs to class $k$. To predict the class for a new example $x$, we run all $K$ classifiers on $x$ and choose the class with the highest score: $\hat{y} = \arg\max_k f_k(x)$. One main drawback is that when there are many classes, each binary classifier sees a highly imbalanced dataset, which may degrade performance.

In one-vs-one, we train $\binom{K}{2} = \frac{K(K-1)}{2}$ separate binary classification models, one for each possible pair of classes. To predict the class for a new example $x$, we run all of the classifiers on $x$ and choose the class with the most “votes.” A major drawback is that there can exist fairly large regions of the decision space that are tied for the most votes.
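As an illustration of the two strategies, here is a sketch using scikit-learn’s wrappers (not something the original post uses; the data is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# Synthetic 4-class data: the class is the index of the largest feature.
rng = np.random.RandomState(0)
X = rng.randn(400, 4)
y = X.argmax(axis=1)

base = LogisticRegression()                # a binary classifier as the building block
ovr = OneVsRestClassifier(base).fit(X, y)  # K classifiers
ovo = OneVsOneClassifier(base).fit(X, y)   # K(K-1)/2 classifiers
print(ovr.score(X, y), ovo.score(X, y))    # training accuracy of each strategy
```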

Deep Learning with Logistic Regression

In the settings above, we assumed that our dataset consisted of $(x, y)$ pairs, where each $x$ is a feature vector. Note that we can easily use a deep neural network (or any other type of transformation) to map each raw input to a learned feature vector $h$, and then perform logistic regression on the $(h, y)$ pairs. In a classification setting, logistic / softmax regression is just a convenient final layer that maps feature vectors to a class label.
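For instance, here is a sketch (TensorFlow 1.x; the layer sizes and optimizer are my own choices) in which a small neural network produces the feature vector that feeds a final softmax layer:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])  # raw inputs
y = tf.placeholder(tf.int32, shape=[None])         # integer class labels

h = tf.layers.dense(x, 128, activation=tf.nn.relu)  # learned feature vector
z = tf.layers.dense(h, 10)                           # class scores (logits)

# Softmax regression on the learned features h is just the final layer + loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=z)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```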