Note: The title of this post is circular. But I use/abuse it because of the post linked below.
I noticed on the Hacker News front page (and via multiple reshares on Twitter) a discussion on why logistic regression uses a sigmoid. The article linked in the story talks about the log-odds ratio and how it leads to the sigmoid (and gives a good intuitive account of it).
However, I think that the more important question is: why do you care about log-odds? Why do you use log-odds and not something else? The point of this quick post is to write out why using the log-odds is in fact very well motivated in the first place, and why, once it is modeled by a linear function, what you get is the logistic function.
Beginning with the log-odds would in fact be begging the question, so let us try to understand where it comes from.
____________________________
To motivate things and in order to define the loss, suppose we had a linear classifier: $f(x) = \operatorname{sign}(w^\top x + w_0)$. This just means that for a vector input $x$, we take the dot product with the parameters $w$, add the offset $w_0$, and take the sign.
The learning problem would be to figure out a good direction $w$ and a good location $w_0$ of the decision boundary $w^\top x + w_0 = 0$.
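For concreteness, here is a minimal sketch of such a linear classifier in Python (the particular weights, offset and input below are made up purely for illustration):

    import numpy as np

    def linear_classifier(x, w, w0):
        # Predict +1 or -1 depending on which side of the hyperplane
        # w^T x + w0 = 0 the point x falls on.
        return 1 if np.dot(w, x) + w0 >= 0 else -1

    # Toy example: a 2D input, an arbitrary direction w and offset w0.
    w = np.array([2.0, -1.0])   # direction (normal) of the decision boundary
    w0 = 0.5                    # location (offset) of the decision boundary
    x = np.array([1.0, 3.0])
    print(linear_classifier(x, w, w0))   # -1, since 2*1 - 1*3 + 0.5 = -0.5 < 0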
____________________________
We want to figure these out so as to minimize the expected 0-1 loss (or expected number of mistakes) of the classifier $f$. The 0-1 loss for a datapoint-label pair $(x, y)$ is simply:
$L(f(x), y) = \begin{cases} 0 & \text{if } f(x) = y \\ 1 & \text{if } f(x) \neq y \end{cases}$
Now, the next question we would like to ask is: what is the risk of this classifier that we want to minimize? The risk is the expected loss. That is, if we draw a random sample from the (unknown) distribution $p(x, y)$, what would be the expected error? More concretely:
$R(f) = \mathbb{E}_{(x, y) \sim p(x, y)}\big[L(f(x), y)\big]$
Writing out the expectation:
$R(f) = \int_x \sum_y L(f(x), y)\, p(x, y)\, dx$
Using the chain rule $p(x, y) = p(y \mid x)\, p(x)$, this becomes:
$R(f) = \int_x \Big( \sum_y L(f(x), y)\, p(y \mid x) \Big)\, p(x)\, dx$
It is important to understand this expression. It does not assume anything about the data. However, it is this expression that we want to minimize if we want to get a good classifier. To minimize it, it suffices to simply minimize the conditional risk at every point $x$ (i.e. the middle part of the above expression):
$R(f \mid x) = \sum_y L(f(x), y)\, p(y \mid x)$
But since the 0-1 loss is zero on the correct label and one on every other label, this conditional risk can be written as:
$R(f \mid x) = \sum_{y \neq f(x)} p(y \mid x)$
Note that,
$\sum_y p(y \mid x) = 1$
Therefore, the conditional risk is simply:
$R(f \mid x) = 1 - p\big(y = f(x) \mid x\big)$
Now, it is this conditional risk that we want to minimize given a point $x$. And in order to do so, looking at the expression above, the classifier must make the following decision:
$f(x) = \arg\max_y \; p(y \mid x)$
It is again important to note that so far we have made absolutely no assumptions about the data. So the above classifier is the best classifier that we can have in terms of generalization, in the sense of what might be the expected loss on a new sample point. Such a classifier is called the Bayes Classifier or sometimes called the Plug-in classifier.
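As a tiny numerical illustration of this rule (the posterior probabilities below are made up, just to show the bookkeeping):

    # For a fixed point x, suppose the posteriors were known (values invented here):
    posterior = {0: 0.3, 1: 0.7}   # p(y=0|x), p(y=1|x)

    # The conditional risk of predicting label k is 1 - p(y=k|x):
    cond_risk = {k: 1.0 - p for k, p in posterior.items()}
    print(cond_risk)   # {0: 0.7, 1: 0.3}

    # The prediction minimizing the conditional risk...
    best = min(cond_risk, key=cond_risk.get)
    # ...is exactly the argmax of the posterior, i.e. the Bayes / plug-in rule.
    assert best == max(posterior, key=posterior.get)
    print(best)        # 1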
But the optimal decision rule mentioned above, i.e. $f(x) = \arg\max_y p(y \mid x)$, is equivalent to saying that we prefer a label $y$ over any other label $y'$ whenever:
$\dfrac{p(y \mid x)}{p(y' \mid x)} > 1$
By taking the log, this would be:
$\log \dfrac{p(y \mid x)}{p(y' \mid x)} > 0$
If we were only dealing with binary classification, with $y \in \{0, 1\}$, this would imply that we predict $y = 1$ whenever:
$\log \dfrac{p(y = 1 \mid x)}{p(y = 0 \mid x)} > 0$
Notice that, by making no assumptions about the data and simply writing out the conditional risk, the log-odds ratio has fallen out directly. This is not an accident: the optimal Bayes classifier has exactly this form for binary classification. But the question still remains: how do we model this log-odds ratio? The simplest option is to consider a linear model (there is no reason we must stick to a linear model, but we do so for several reasons, one being convexity of the resulting optimization):
$\log \dfrac{p(y = 1 \mid x)}{p(y = 0 \mid x)} = w^\top x + w_0$
Now, we know that $p(y = 0 \mid x) = 1 - p(y = 1 \mid x)$. Plugging this into the above and exponentiating, we have:
$\dfrac{p(y = 1 \mid x)}{1 - p(y = 1 \mid x)} = e^{\,w^\top x + w_0}$
Rearranging yields the familiar logistic model (and the sigmoid):
$p(y = 1 \mid x) = \dfrac{1}{1 + e^{-(w^\top x + w_0)}}$
As noted in the post linked at the beginning, the logistic function is $\sigma(z) = \dfrac{1}{1 + e^{-z}}$, which for any $z \in \mathbb{R}$ lies in $(0, 1)$ and is monotonically increasing in $z$.
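A small sketch in the same vein, computing $p(y = 1 \mid x)$ via the sigmoid of the linear log-odds and checking these properties (the weights and input are again just the illustrative ones from above, nothing canonical):

    import numpy as np

    def sigmoid(z):
        # Logistic function: maps any real z into (0, 1), monotonically increasing.
        return 1.0 / (1.0 + np.exp(-z))

    def p_y1_given_x(x, w, w0):
        # Model the log-odds linearly: log[p(y=1|x) / p(y=0|x)] = w^T x + w0,
        # which rearranges to p(y=1|x) = sigmoid(w^T x + w0).
        return sigmoid(np.dot(w, x) + w0)

    w, w0 = np.array([2.0, -1.0]), 0.5
    x = np.array([1.0, 3.0])
    p = p_y1_given_x(x, w, w0)
    print(p)                 # ~0.378, since the log-odds w^T x + w0 = -0.5 < 0
    print(0.0 < p < 1.0)     # True: sigmoid outputs lie strictly in (0, 1)

    # Thresholding p(y=1|x) at 0.5 is the same as thresholding the log-odds at 0,
    # i.e. the sign of w^T x + w0, which recovers the linear classifier above.
    print((p > 0.5) == (np.dot(w, x) + w0 > 0))   # True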
____________________________
This derivation shows that the log-odds is not an arbitrary choice; in fact it is a very natural one. The sigmoid is simply a consequence of modeling the log-odds with a linear function (in fact, logistic regression is arguably the simplest example of a log-linear model; if we had structured outputs, a natural extension of such a model would be the Conditional Random Field). The choice of using a linear function is made chiefly to keep the optimization convex, amongst some other favourable properties.
____________________________
Note: This post was inspired by some very succinct notes by Gregory Shakhnarovich from his Machine Learning class, which I both took and served as a TA for.