Softmax Classification
While binary classification deals with two possible outcomes, Softmax regression is the go-to method for classifying instances into multiple categories, such as identifying animal types in images.
Working through Softmax
Consider a scenario where we have classes like Dog ($0$), Cat ($1$), Baby Chick ($2$), and None ($3$). Here, $C$ is the number of classes ($C = 4$ in this example), the range of classes is $0, 1, \ldots, C - 1$, and the number of nodes in the output layer is $n^{[L]} = C$. We first convert the class labels into a vector representation using one-hot encoding.
One Hot Encoding
We use one-hot encoding to represent our classes. This means for each class, we have a vector where one element is $1$, indicating the class, and the rest are $0$s. It is the way we usually represent class labels in vector form. Given our label vector $y$ of length $m$ with entries in the range $0, \ldots, C - 1$, where $C$ is the number of classes, we perform one-hot encoding to get a matrix with dimensions $C \times m$. Say we have:

$$y = \begin{bmatrix} 1 & 2 & 3 & 0 \end{bmatrix}$$

We convert this into:

$$Y = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
This is called "one hot" encoding because in the converted representation, exactly one element of each column is hot (meaning set to $1$). To do this conversion in NumPy, you might have to write a few lines of code. In TensorFlow, you can just use tf.one_hot(labels, depth, axis=0), where axis=0 indicates that the new one-hot axis is created at dimension 0.
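Here is a minimal NumPy sketch of the same conversion (the one_hot helper name is ours, not a library function; the labels are the illustrative vector from above):

```python
import numpy as np

def one_hot(labels, depth):
    """Build a (depth, m) one-hot matrix, one column per label,
    mirroring tf.one_hot(labels, depth, axis=0)."""
    m = labels.shape[0]
    Y = np.zeros((depth, m))
    Y[labels, np.arange(m)] = 1  # set the "hot" row in each column
    return Y

labels = np.array([1, 2, 3, 0])
print(one_hot(labels, depth=4))
# [[0. 0. 0. 1.]
#  [1. 0. 0. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 1. 0.]]
```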
Softmax Calculation
Given the input vector $z^{[L]}$ from the last layer, the Softmax function calculates the probabilities as follows:

$$a^{[L]}_i = \frac{t_i}{\sum_{j=1}^{C} t_j}$$

Where $t$ is:

$$t = e^{z^{[L]}} \quad \text{(element-wise)}$$
The denominator sums $t_j = e^{z^{[L]}_j}$, the exponential function applied to each component of the input vector, ensuring that the softmax output is a probability distribution that sums to 1. For example, given

$$z^{[L]} = \begin{bmatrix} 5 \\ 2 \\ -1 \\ 3 \end{bmatrix}$$

We compute $t$ as follows:

$$t = \begin{bmatrix} e^{5} \\ e^{2} \\ e^{-1} \\ e^{3} \end{bmatrix} = \begin{bmatrix} 148.4 \\ 7.4 \\ 0.4 \\ 20.1 \end{bmatrix}$$

Then we apply the softmax function as follows:

$$a^{[L]} = \frac{t}{\sum_{j=1}^{4} t_j} = \frac{t}{176.3} = \begin{bmatrix} 0.842 \\ 0.042 \\ 0.002 \\ 0.114 \end{bmatrix}$$
The output of Softmax gives us a vector where each element is the probability of the input belonging to one of the classes, i.e. $a^{[L]}_i = P(\text{class } i \mid x)$. The result shown gives an $84.2\%$ chance of being in class $0$, for example. That is the highest probability, i.e. the "soft max".
Each of the $C$ values in the output layer contains the probability that the example belongs to the corresponding class.
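Here is a short NumPy sketch of the calculation above; subtracting the max before exponentiating is a common numerical-stability trick that does not change the result:

```python
import numpy as np

def softmax(z):
    """Softmax over the class axis (axis 0), one column per example."""
    t = np.exp(z - np.max(z, axis=0, keepdims=True))  # stabilized e^(z_i)
    return t / np.sum(t, axis=0, keepdims=True)       # normalize to sum to 1

z = np.array([[5.0], [2.0], [-1.0], [3.0]])  # the example vector from above
print(softmax(z).round(3))
# [[0.842]
#  [0.042]
#  [0.002]
#  [0.114]]
```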
Training the Classifier
The Softmax classifier uses a cross-entropy loss function, which aims to maximize the probability of the correct class. If the classifier is confident about the correct class, the loss is low. However, if it's unsure or wrong, the loss goes up.
In contrast to softmax, there is an activation called hard max, which assigns 1 to the maximum value and zeros to the others; in NumPy you can build it with np.argmax over the class axis, as in the sketch below. Minimizing the cross-entropy loss used with softmax is a form of maximum likelihood estimation.
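A minimal sketch of hard max (the hardmax name is ours, not a library function):

```python
import numpy as np

def hardmax(z):
    """1 at the position of each column's maximum value, 0 elsewhere."""
    out = np.zeros_like(z)
    out[np.argmax(z, axis=0), np.arange(z.shape[1])] = 1
    return out

z = np.array([[5.0], [2.0], [-1.0], [3.0]])
print(hardmax(z).ravel())  # [1. 0. 0. 0.]
```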
The Softmax name comes from softening the values rather than hardening them like hard max, i.e. a more gentle maxing. Softmax is a generalization of the logistic activation function to $C$ classes; if $C = 2$, softmax reduces to logistic regression. The loss function used with softmax is:

$$L(y, \hat{y}) = -\sum_{j=1}^{C} y_j \log \hat{y}_j$$
Here is an example. Say that we have a cat:

$$y = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}$$

And our softmax classifier outputs:

$$\hat{y} = \begin{bmatrix} 0.3 \\ 0.2 \\ 0.1 \\ 0.4 \end{bmatrix}$$

We can compute the loss as follows:

$$L(y, \hat{y}) = -\sum_{j=1}^{4} y_j \log \hat{y}_j = -\log 0.2 \approx 1.609$$
The loss function first multiplies out the incorrect classes in $y$ (their entries are $0$), and we are left with the term for the correct class, $-\log \hat{y}_{\text{cat}} = -\log 0.2$. This means the loss function tries to make sure that the corresponding probability of that class is as high as possible (here $\hat{y}_{\text{cat}} = 0.2$). The cost function used with Softmax averages the loss over the $m$ training examples:

$$J = \frac{1}{m} \sum_{i=1}^{m} L(y^{(i)}, \hat{y}^{(i)})$$
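A small NumPy sketch of the loss and cost on the cat example above (the eps term is our guard against log(0)):

```python
import numpy as np

def cross_entropy_loss(y, y_hat, eps=1e-12):
    """L(y, y_hat) = -sum_j y_j * log(y_hat_j), computed per column/example."""
    return -np.sum(y * np.log(y_hat + eps), axis=0)

y     = np.array([[0.0], [1.0], [0.0], [0.0]])  # one-hot label: a cat
y_hat = np.array([[0.3], [0.2], [0.1], [0.4]])  # classifier output
loss = cross_entropy_loss(y, y_hat)
print(loss)           # [1.609...]  == -log(0.2)
cost = np.mean(loss)  # the cost J averages the loss over the m examples
print(cost)
```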
Also, in terms of back propagation with softmax, the derivative of the loss with respect to $z^{[L]}$ has a simple form:

$$dz^{[L]} = \hat{y} - y$$
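As a quick sketch of that gradient on the same cat example:

```python
import numpy as np

Y     = np.array([[0.0], [1.0], [0.0], [0.0]])  # one-hot labels
Y_hat = np.array([[0.3], [0.2], [0.1], [0.4]])  # softmax outputs
dZ = Y_hat - Y  # gradient of the loss w.r.t. z[L]
print(dZ.ravel())  # [ 0.3 -0.8  0.1  0.4]
```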