Where Does the Name “Softmax” Come From?
by Jiyang Wang (e-mail: jiyang_wang@yahoo.com)
When I was learning multiclass classifiers such as SVMs and neural networks, “Softmax” struck me as having some mystery in its name. I wondered why it was named that way, and whether there was a “Hardmax” function that was its brother or even its ancestor. I checked it out on Wikipedia but failed to find any useful information there. There was a question about Softmax’s name on Quora, and one of the answers, to my memory, revealed some part of the mystery. However, I cannot find that Quora link anymore.
Like most people, I continued to use it in my work without thinking about its origin. Engineers and researchers are mostly pragmatic, aren’t they?
But recently, while I was preparing for interviews, Softmax drew my attention again. I decided to figure out why it got this funny name and whether it really stems from another related function nicknamed “Hardmax” (or not; there is no entry for it on Wikipedia, anyway).
Hardmax
Before elaborating on Softmax, let me jump straight to the conclusion: there is indeed a “Hardmax” function. It is usually called the hinge loss function and is used in linear classifiers such as SVM:

$$L_i = \sum_{j \neq i} \max(0,\, s_j - s_i + \Delta)$$

where $s_j$ and $s_i$ are the classification scores of the j-th and i-th elements of the output vector of the model, and $L_i$ is the loss for classifying the input $x_i$ as the i-th class.
Stanford’s CS class CS231n: Convolutional Neural Networks for Visual Recognition has a very good explanation of the above loss function; the figure and the passage below are taken from its course notes.
source: Stanford CS class CS231n
The Multiclass Support Vector Machine “wants” the score of the correct class to be higher than all other scores by at least a margin of delta. If any class has a score inside the red region (or higher), then there will be accumulated loss. Otherwise the loss will be zero. Our objective will be to find the weights that will simultaneously satisfy this constraint for all examples in the training data and give a total loss that is as low as possible.
And here is an example from it:
source: Stanford CS class CS231n
Wikipedia has an entry on hinge loss as well.
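For reference, in the binary case the hinge loss is usually written (as in that Wikipedia entry) as

$$\ell(y) = \max(0,\, 1 - t \cdot y)$$

where $t = \pm 1$ is the true label and $y$ is the classifier’s raw output score; the multiclass form above applies the same per-class margin idea.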
Basically, the hinge loss has a threshold $\Delta$ below which the loss of an element of the output (score) vector is treated as zero. The threshold $\Delta$, which functions as a margin between the classification boundary (a.k.a. decision boundary) and the samples nearest to the boundary, is applied to $s_j - s_i$ for all $j \neq i$, so that the loss of $s_j$ is added to the overall loss of $x_i$ only when $s_j$ differs from $s_i$ by less than the threshold, i.e. only when $s_j - s_i + \Delta > 0$.
Thus the hinge loss has the form of a max function, $\max(0,\, s_j - s_i + \Delta)$, and it is “hard” by nature. We’ll see this later when we draw the graph of the max function. Here is an example of how the hinge loss is calculated (from Stanford CS231n), in which the ground-truth label of the input picture is “cat” (the first class, so $i = 0$), the scores for cat, car and frog are $s = [13, -7, 11]$, and $\Delta = 10$:

$$L_0 = \max(0,\, -7 - 13 + 10) + \max(0,\, 11 - 13 + 10) = 0 + 8 = 8$$
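To make the arithmetic concrete, here is a small sketch (mine, not from CS231n) that evaluates the multiclass hinge loss for this example:

import numpy as np

def multiclass_hinge_loss(scores, correct_idx, delta=1.0):
    # Sum of max(0, s_j - s_i + delta) over all classes j != correct_idx.
    scores = np.asarray(scores, dtype=float)
    margins = np.maximum(0.0, scores - scores[correct_idx] + delta)
    margins[correct_idx] = 0.0   # the correct class contributes no loss
    return margins.sum()

# Scores for (cat, car, frog); the ground truth is "cat" (index 0), delta = 10.
print(multiclass_hinge_loss([13.0, -7.0, 11.0], correct_idx=0, delta=10.0))   # -> 8.0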
Now let’s see what the max function looks like when we draw a graph of it. I simplify the graph by using only integers for $s_j$, while fixing $s_i$ to 0 and $\Delta$ to 1.
import numpy as np
from matplotlib import pyplot

def max_x(s_j, s_i=0., delta=1.):
    """Hinge ("hardmax") loss max(0, s_j - s_i + delta) for each score in s_j."""
    s_j = np.asarray(s_j, dtype=float)
    return np.maximum(0., s_j - s_i + delta)

s_j = np.arange(-10, 10)                   # integer scores for the j-th class
hinge_loss = max_x(s_j, s_i=0., delta=1.)
pyplot.plot(s_j, hinge_loss)
pyplot.show()
Figure 2 - “Hardmax” function
No doubt the max function is also called the “hinge” function, because its shape looks like a hinge. It can be called “hardmax” because the loss contributed by $s_j$ to $L_i$ is zeroed out as soon as $s_j$ falls below $s_i$ by more than the threshold, regardless of its actual value. Or, from another perspective, there is a point (at $s_j = s_i - \Delta$ in the graph) where the max function is not differentiable (which is “hard” as compared to “soft”).
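Concretely, the derivative of the hinge term with respect to $s_j$ is

$$\frac{\partial}{\partial s_j}\max(0,\, s_j - s_i + \Delta) = \begin{cases} 0, & s_j < s_i - \Delta \\ 1, & s_j > s_i - \Delta \end{cases}$$

and at the kink $s_j = s_i - \Delta$ the two one-sided slopes (0 and 1) disagree, so no derivative exists there.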
Softmax
Interpretation of Scores
Let’s put the hinge/hardmax function aside for a while and talk about the Softmax function. Again, Stanford CS231n provides a very clear description of why the Softmax function is applied to classification scores, quoted below:
Unlike the SVM which treats the outputs $f(x_i, W)$ as (uncalibrated and possibly difficult to interpret) scores for each class, the Softmax classifier gives a slightly more intuitive output (normalized class probabilities) and also has a probabilistic interpretation that we will describe shortly. In the Softmax classifier, the function mapping $f(x_i; W) = W x_i$ stays unchanged, but we now interpret these scores as the unnormalized log probabilities for each class and replace the hinge loss with a cross-entropy loss that has the form:

$$L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right) \qquad \text{or equivalently} \qquad L_i = -f_{y_i} + \log\sum_j e^{f_j}$$

where $f_j$ denotes the j-th element of the vector of class scores $f$. The function $f_j(z) = \frac{e^{z_j}}{\sum_k e^{z_k}}$ is called the Softmax function: it takes a vector of arbitrary real-valued scores (in $z$) and squashes it to a vector of values between zero and one that sum to one.
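As a quick sanity check of that description, here is a small sketch (my own, not from the course notes) of the Softmax function applied to a score vector:

import numpy as np

def softmax(z):
    # Exponentiate the scores and normalize them so they sum to one.
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())      # subtracting the max improves numerical stability
    return e / e.sum()

scores = np.array([13.0, -7.0, 11.0])    # the cat/car/frog scores used earlier
probs = softmax(scores)
print(probs)         # approximately [0.881, 0.000, 0.119]
print(probs.sum())   # 1.0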
Cross-Entropy Loss
Attention should be paid to the final form of $L_i$ just above, in which it is not obvious where the cross-entropy loss is applied. Let’s delve into the details.
As said above, the score $s_j$ is interpreted as the unnormalized log probability of class $j$, so we have

$$\log \tilde{P}(y = j \mid x_i) = s_j$$

or

$$\tilde{P}(y = j \mid x_i) = e^{s_j}$$

Now we normalize this probability:

$$P(y = j \mid x_i) = \frac{e^{s_j}}{\sum_{k=1}^{K} e^{s_k}}$$
This is the Softmax function. Then we calculate the cross-entropy; quoting from Stanford CS231n:

The cross-entropy between a “true” distribution $p$ and an estimated distribution $q$ is defined as:

$$H(p, q) = -\sum_x p(x) \log q(x)$$

The Softmax classifier is hence minimizing the cross-entropy between the estimated class probabilities ($q_j = e^{s_j} / \sum_k e^{s_k}$, as seen above) and the “true” distribution, which in this interpretation is the distribution where all probability mass is on the correct class (i.e. $p$ contains a single 1 at the i-th position).
In a nutshell, cross-entropy measures the difference between two vectors (distributions). In our case, we want to compare the ground-truth label, one-hot coded as $p$, with the output vector $q$ of the model. As the vector $p$ has a 1 at the i-th position and 0’s at all the other positions, the cross-entropy between $p$ and $q$ keeps only the i-th element of $q$:

$$H(p, q) = -\sum_{j} p_j \log q_j = -\log q_i = -\log\frac{e^{s_i}}{\sum_{k=1}^{K} e^{s_k}}$$

This is exactly $L_i$. It is the negative log of the Softmax function.
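A small numerical check (a sketch of mine, reusing the cat/car/frog scores) confirms that the full cross-entropy against a one-hot target equals the negative log of the correct class’s Softmax probability:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([13.0, -7.0, 11.0])   # model scores
p = np.array([1.0, 0.0, 0.0])           # one-hot "true" distribution: correct class is 0 ("cat")
q = softmax(scores)                     # estimated class probabilities

full_cross_entropy = -np.sum(p * np.log(q))       # H(p, q) = -sum_j p_j log q_j
negative_log_softmax = -np.log(q[0])              # -log q_i, keeping only the correct class
print(full_cross_entropy, negative_log_softmax)   # both are roughly 0.127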
Softmax Function
Let’s rewrite the cross-entropy loss over the Softmax function as below, so that it becomes clear that the loss is in fact a function of score differences:

$$L_i = -\log\frac{e^{s_i}}{\sum_{k=1}^{K} e^{s_k}} = -\log\frac{1}{\sum_{k=1}^{K} e^{s_k - s_i}} = \log\Bigl(1 + \sum_{k \neq i} e^{s_k - s_i}\Bigr)$$

where $K$ is the number of classes.
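If you want to verify this rewriting numerically, a short check (my own sketch, same scores as before) does it:

import numpy as np

scores = np.array([13.0, -7.0, 11.0])
i = 0                                                     # index of the correct class
lhs = -np.log(np.exp(scores[i]) / np.exp(scores).sum())   # -log of the Softmax probability of class i
rhs = np.log(1.0 + np.exp(np.delete(scores, i) - scores[i]).sum())   # log(1 + sum of score-difference exponentials)
print(lhs, rhs)   # both are roughly 0.127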
Unlike in the hinge loss, every element $s_k$ of the output score vector has some influence on the final cross-entropy loss, regardless of its value. To see the shape of that influence, we take a single score $s_k$ as the variable and keep the correct-class score $s_i$ fixed, dropping the remaining classes (the two-class case):

$$L_i(s_k) = -\log\frac{1}{1 + e^{s_k - s_i}} = \log\bigl(1 + e^{s_k - s_i}\bigr)$$
Now let’s draw the graph of $L_i(s_k)$. As we are only interested in its shape, we can fix $s_i$ to 0, as we did when drawing the graph of the max function. For comparison, I draw the max (hinge) function together with the cross-entropy-over-Softmax function.
import numpy as np
from matplotlib import pyplot

def cross_entropy(s_k, s_i):
    """Two-class cross-entropy loss -log(softmax) as a function of the other class's score s_k."""
    soft_max = 1. / (1. + np.exp(s_k - s_i))
    return -np.log(soft_max)

s_i = 0.                                      # fix the correct-class score at 0
s_k = np.arange(-10, 10)                      # integer scores for the other class
soft_x = cross_entropy(s_k, s_i)
hinge_loss = np.maximum(0., s_k - s_i + 1.)   # the "hardmax" curve from Figure 2 (delta = 1)

pyplot.plot(s_k, hinge_loss)                  # hard, hinge-shaped curve
pyplot.plot(s_k, soft_x)                      # smooth, "soft" counterpart
pyplot.show()
Figure 3 - “Softmax” (cross-entropy) function vs. the “Hardmax” (hinge) function
Can you tell why $\log\bigl(1 + e^{s_k - s_i}\bigr)$ is a “soft” version of the max function? I’m sure you can now.
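As a hint: ignoring the margin $\Delta$, the two curves in Figure 3 nearly coincide away from the corner,

$$\log\bigl(1 + e^{s_k - s_i}\bigr) \approx \max(0,\, s_k - s_i) \quad \text{when } |s_k - s_i| \text{ is large,}$$

but the left-hand side is smooth and differentiable everywhere, while the max has a hard kink. That smoothed-out corner is what puts the “soft” in Softmax.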