Livio / July 14, 2019 / Python

Logistic Regression

Logistic regression is a technique used both in traditional statistics and in machine learning. It is used to predict whether something is true or false and can be used to model binary dependent variables, like win/loss, sick/not sick, pass/fail, etc.

Logistic regression assumes a linear relationship between the predictor variables and the log-odds of the event being True. This relationship is expressed by the following formula:
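In LaTeX notation (with the coefficients written as thetas, as in the rest of this post):

\[
\ln\!\left(\frac{P}{1-P}\right) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n
\]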

where the left side represents the natural logarithm of the odds, and the right side should look very familiar to those already acquainted with linear regression. P represents the probability of the event being True and consequently 1-P is the probability of the event not being True. Because P is between 0 and 1, if we take the limit for P → 0 and for P → 1 we have:
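That is, the log of the odds can take any real value:

\[
\lim_{P \to 0^{+}} \ln\!\left(\frac{P}{1-P}\right) = -\infty,
\qquad
\lim_{P \to 1^{-}} \ln\!\left(\frac{P}{1-P}\right) = +\infty
\]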

Because the final goal is to predict the probability of an event being True given the predictor variables, we need to find a formula for P. Starting from the log-odds relationship above, we can exponentiate both sides and solve for P (a sketch of the algebra is shown below). The resulting equation is known as the sigmoid (logistic) function and, in the case of Logistic Regression, gives us the probability, given the predictors, of an event being True.
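The algebra, step by step:

\[
\ln\!\left(\frac{P}{1-P}\right) = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n
\;\;\Rightarrow\;\;
\frac{P}{1-P} = e^{\theta_0 + \theta_1 x_1 + \dots + \theta_n x_n}
\]

\[
P = \frac{e^{\theta_0 + \theta_1 x_1 + \dots + \theta_n x_n}}{1 + e^{\theta_0 + \theta_1 x_1 + \dots + \theta_n x_n}}
= \frac{1}{1 + e^{-(\theta_0 + \theta_1 x_1 + \dots + \theta_n x_n)}}
\]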

If, for the sake of simplicity, we let t stand for the log of the odds (the linear combination of the predictors), we can write P as a function of t alone:
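In this notation:

\[
t = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n,
\qquad
P(t) = \frac{1}{1 + e^{-t}}
\]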

We can see that the above function is defined for any value of t. Moreover, when t = 0, P = 0.5.

If we calculate the first derivative:
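Differentiating P(t) with respect to t gives:

\[
\frac{dP}{dt} = \frac{e^{-t}}{\left(1 + e^{-t}\right)^{2}} = P(t)\,\bigl(1 - P(t)\bigr)
\]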

we see that it is positive for any value of t.

And the second derivative:
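Differentiating once more (reusing dP/dt = P(1-P)):

\[
\frac{d^{2}P}{dt^{2}} = P(t)\,\bigl(1 - P(t)\bigr)\,\bigl(1 - 2P(t)\bigr)
\]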

we see that it is equal to zero when P = 1/2, which is when t = 0. For negative values of t, the second derivative is positive (the sigmoid function is concave up); for positive values of t, the second derivative is negative (the sigmoid function is concave down). Therefore we can sketch our sigmoid function (which represents the probability of an event being True):

When the log of the odds is negative we have a probability of less than 50% of the event being True, when the log of the odds is positive we have a probability of more than 50% of the event being True.

The Cost Function

Our problem will be an optimization problem, which means finding the values of the thetas which minimize the Cost function, which for logistic regression is the following:
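For a single observation:

\[
\mathrm{Cost}(p, y) = -\,y \ln(p) \;-\; (1 - y)\ln(1 - p)
\]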

where y is the observed outcome (True/False, or better, 1 and 0) and p is the probability of the event being True given our parameters. Remember that:
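p is the sigmoid of the log-odds derived earlier:

\[
p = \frac{1}{1 + e^{-(\theta_0 + \theta_1 x_1 + \dots + \theta_n x_n)}}
\]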

When the observed outcome is 1 (True), (1-y) * ln(1-p) evaluates to 0, therefore only the part -y * ln(p) is considered, and vice versa when the outcome is 0 (False).

If we quickly study the function -ln(p), we see that its first and second derivatives are:
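\[
\frac{d}{dp}\bigl(-\ln(p)\bigr) = -\frac{1}{p},
\qquad
\frac{d^{2}}{dp^{2}}\bigl(-\ln(p)\bigr) = \frac{1}{p^{2}}
\]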

and because 0 < p <= 1, over this interval the first derivative is always negative and the second derivative is always positive. Therefore we can sketch our function:

If we think about it, when the observed value is y = 1 (True), the cost function is equal to -y * ln(p) = -ln(p); we want the ‘cost’ to increase as our prediction moves towards 0 (False, wrong prediction) and to decrease when our prediction moves toward 1 (True, correct prediction).

We can do the same kind of analysis when the observed value is y = 0 (False). Our cost function becomes -(1-y) * ln(1-p) = -ln(1-p). If we study this function, we get:
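\[
\frac{d}{dp}\bigl(-\ln(1-p)\bigr) = \frac{1}{1-p},
\qquad
\frac{d^{2}}{dp^{2}}\bigl(-\ln(1-p)\bigr) = \frac{1}{(1-p)^{2}}
\]

Both are positive for 0 <= p < 1, so over this interval the function is increasing and concave up.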

Therefore we can sketch this function:

If we think about it, when the observed value is y = 0 (False), the cost function is equal to -(1-y) * ln(1-p) = -ln(1-p); we want the ‘cost’ to increase as our prediction moves towards 1 (True, wrong prediction) and to decrease when our prediction moves toward 0 (False, correct prediction).

So our goal is to find the thetas which minimize this cost function. To do this, we will use the Truncated Newton Method within SciPy. But first, we need to calculate the gradient vector, the vector of partial derivatives for each of our thetas. I will show how to do this for theta_1, and luckily the process is the same for all the other thetas 🙂

I am showing below the whole procedure:
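A compact version of the derivation for theta_1 (using the chain rule and the fact that the derivative of the sigmoid is p(1-p), with p the predicted probability):

\[
\frac{\partial \mathrm{Cost}}{\partial \theta_1}
= \left(-\frac{y}{p} + \frac{1-y}{1-p}\right)\frac{\partial p}{\partial \theta_1}
= \left(-\frac{y}{p} + \frac{1-y}{1-p}\right) p\,(1-p)\,x_1
= (p - y)\,x_1
\]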

 

So, our gradient vector is:
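For a single observation, with x_0 = 1 multiplying the intercept theta_0:

\[
\nabla_{\theta}\,\mathrm{Cost} =
\begin{bmatrix}
(p - y)\,x_0 \\
(p - y)\,x_1 \\
\vdots \\
(p - y)\,x_n
\end{bmatrix}
\]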

Building a Logistic Regression Class in Python

In reality, because we will deal with many observations, the Cost function will be the sum of the Cost function shown above over each observation, divided by the number of observations, therefore it becomes:
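With m observations, indexed by the superscript (i):

\[
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Bigl[\,y^{(i)}\ln\bigl(p^{(i)}\bigr) + \bigl(1 - y^{(i)}\bigr)\ln\bigl(1 - p^{(i)}\bigr)\Bigr]
\]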

and the gradient vector is like the one shown above, but with 1/m added:
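\[
\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\bigl(p^{(i)} - y^{(i)}\bigr)\,x_j^{(i)},
\qquad j = 0, 1, \dots, n
\]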

Remember that m is the number of observations, whereas n is the number of attributes. The gradient vector contains n+1 elements because X_0 is a column of 1s which is multiplied against theta_0, the intercept. This makes matrix calculation easier.

The matrix of predictors, our thetas vector, and our observed outcomes can be written as follows:
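Using the superscript (i) to index the m observations, and with the first column of X being the column of 1s:

\[
X =
\begin{bmatrix}
1 & x_1^{(1)} & \cdots & x_n^{(1)} \\
1 & x_1^{(2)} & \cdots & x_n^{(2)} \\
\vdots & \vdots & \ddots & \vdots \\
1 & x_1^{(m)} & \cdots & x_n^{(m)}
\end{bmatrix},
\qquad
\theta =
\begin{bmatrix}
\theta_0 \\ \theta_1 \\ \vdots \\ \theta_n
\end{bmatrix},
\qquad
y =
\begin{bmatrix}
y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)}
\end{bmatrix}
\]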

The first method we will add will be called Sigmoid, and it converts the log-odds vector into a probability vector.

This is very easy in Python using NumPy:
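A minimal sketch as a standalone function (inside the class it becomes a method; the exact signature of the original is an assumption):

import numpy as np

def sigmoid(log_odds):
    # convert a vector of log-odds into a vector of probabilities
    return 1.0 / (1.0 + np.exp(-log_odds))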

 

The second method will be used to calculate the cost function shown above. Therefore, we first need to calculate the log-odds, then the probabilities, and then the natural logarithms. To calculate the log-odds we can perform a matrix multiplication between the predictors and the thetas, and then calculate the cost function:
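In matrix form:

\[
\text{log-odds} = X\theta,
\qquad
p = \frac{1}{1 + e^{-X\theta}},
\qquad
-\Bigl[\,y^{T}\ln(p) + (1 - y)^{T}\ln(1 - p)\Bigr]
\]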

The above will return a scalar value, which is then divided by the number of observations. Python code:
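A sketch of how this could look, reusing the sigmoid function above (variable names are an assumption):

def cost_function(theta, X, y):
    # log-odds via matrix multiplication between the predictors and the thetas
    log_odds = X @ theta
    # convert the log-odds into probabilities
    p = sigmoid(log_odds)
    # average the per-observation costs over the m observations
    m = len(y)
    return -(1.0 / m) * (y @ np.log(p) + (1 - y) @ np.log(1 - p))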

 

The same thing now needs to be done for the gradient vector, which will be a vector of the values of the partial derivatives. Letting the probability for the i-th row equal the sigmoid of its log-odds, the gradient computation can be rewritten in matrix form:
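\[
p^{(i)} = \frac{1}{1 + e^{-x^{(i)}\theta}},
\qquad
\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\bigl(p^{(i)} - y^{(i)}\bigr)\,x_j^{(i)},
\qquad
\nabla_{\theta} J = \frac{1}{m}\,X^{T}\bigl(p - y\bigr)
\]

where x^{(i)} is the i-th row of X and p is the vector of all m probabilities. In Python this might look like the following (again a sketch, with assumed names):

def gradient(theta, X, y):
    # vector of predicted probabilities, one per observation
    p = sigmoid(X @ theta)
    m = len(y)
    # X transposed times the residuals, divided by the number of observations
    return (1.0 / m) * (X.T @ (p - y))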

 

How to calculate R^2 for Logistic Regression

There are several ways of calculating r-squared for logistic regression. Here I will use the so-called McFadden Pseudo R-squared. The idea is very similar to r-squared of linear regression. In linear regression, r-squared is:
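\[
R^{2} = 1 - \frac{\sum_{i}\bigl(y^{(i)} - \hat{y}^{(i)}\bigr)^{2}}{\sum_{i}\bigl(y^{(i)} - \bar{y}\bigr)^{2}}
\]

where the numerator is the sum of squared errors of the fitted line and the denominator is the variation around the mean.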

and it is the percentage of variation around the mean that goes away when you fit a linear regression model. It goes from 0 to 1. When fitting a line does not improve on the variation around the mean, we get 0 and we have a very bad model. As the sum of squared errors goes down, R^2 goes up towards 1.

For logistic regression, we can first calculate the probability of an event being True without taking into account any of the predictors; therefore we only look at the observed outcomes and divide the number of Trues by the total number of observations. We will call this probability o, and we will call the probabilities calculated by our model p.

Therefore, o is simply the fraction of Trues among the observations, p is the vector of probabilities produced by our model, and R-squared is built from the two corresponding log-likelihoods:
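\[
o = \frac{1}{m}\sum_{i=1}^{m} y^{(i)},
\qquad
p^{(i)} = \frac{1}{1 + e^{-x^{(i)}\theta}}
\]

\[
R^{2}_{\text{McFadden}} = 1 -
\frac{\sum_{i}\bigl[\,y^{(i)}\ln\bigl(p^{(i)}\bigr) + \bigl(1 - y^{(i)}\bigr)\ln\bigl(1 - p^{(i)}\bigr)\bigr]}
{\sum_{i}\bigl[\,y^{(i)}\ln(o) + \bigl(1 - y^{(i)}\bigr)\ln(1 - o)\bigr]}
\]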

You will notice the numerator is very similar to the cost function, which is also the case for linear regression.

Putting it all together, our class becomes:
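Below is a sketch of how the full class could be put together, using scipy.optimize.minimize with the TNC (truncated Newton) method mentioned above. Method names other than Sigmoid, and details such as adding the intercept column inside fit and clipping the probabilities before taking logarithms, are my own assumptions rather than the original implementation:

import numpy as np
from scipy.optimize import minimize

class LogisticRegression:

    def __init__(self):
        self.theta = None

    @staticmethod
    def sigmoid(log_odds):
        # convert log-odds into probabilities
        return 1.0 / (1.0 + np.exp(-log_odds))

    @staticmethod
    def add_intercept(X):
        # prepend the column of 1s (X_0) that multiplies theta_0
        return np.column_stack([np.ones(len(X)), X])

    def cost_function(self, theta, X, y):
        m = len(y)
        p = np.clip(self.sigmoid(X @ theta), 1e-10, 1 - 1e-10)  # guard against log(0)
        return -(1.0 / m) * (y @ np.log(p) + (1 - y) @ np.log(1 - p))

    def gradient(self, theta, X, y):
        m = len(y)
        p = self.sigmoid(X @ theta)
        return (1.0 / m) * (X.T @ (p - y))

    def fit(self, X, y):
        X = self.add_intercept(np.asarray(X, dtype=float))
        y = np.asarray(y, dtype=float)
        theta_start = np.zeros(X.shape[1])
        # Truncated Newton minimization of the cost, using the analytic gradient
        result = minimize(self.cost_function, theta_start, args=(X, y),
                          jac=self.gradient, method="TNC")
        self.theta = result.x
        return self

    def predict_proba(self, X):
        X = self.add_intercept(np.asarray(X, dtype=float))
        return self.sigmoid(X @ self.theta)

    def predict(self, X, threshold=0.5):
        return (self.predict_proba(X) >= threshold).astype(int)

    def r_squared(self, X, y):
        # McFadden pseudo R-squared
        y = np.asarray(y, dtype=float)
        p = np.clip(self.predict_proba(X), 1e-10, 1 - 1e-10)
        o = y.mean()  # probability of True, ignoring the predictors
        ll_model = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        ll_null = np.sum(y * np.log(o) + (1 - y) * np.log(1 - o))
        return 1.0 - ll_model / ll_null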

 

 

Now that we have our class, let's use it to create some predictions. Below is a simple project which uses the class we've just built to make breast cancer predictions:
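As a sketch of how such a project might look, here is an example using scikit-learn's built-in breast cancer dataset (the dataset choice, the train/test split, and the feature scaling are assumptions, not necessarily the original project):

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

# scale the features so the optimizer converges more easily
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = LogisticRegression().fit(X_train, y_train)
predictions = model.predict(X_test)

print("Accuracy:", np.mean(predictions == y_test))
print("McFadden pseudo R-squared:", model.r_squared(X_test, y_test))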

 

