Friday, August 16, 2019

Logistic regression: interpretation of coefficients

An Introduction to Logistic Regression: Nuts and Bolts.

There are many important research topics for which the dependent variable is "limited" (discrete, not continuous). Researchers often want to analyze whether some event occurred or not, such as voting, participation in a public program, business success or failure, morbidity, mortality, hurricane landfall, and so on. Binary logistic regression is a type of regression analysis where the dependent variable is a dummy variable (coded 0, 1).

"Why shouldn't I just use ordinary least squares?" Good question. Consider the linear probability (LP) model:

Y = a + BX + e

where Y is a dummy dependent variable (=1 if the event happens, =0 if it doesn't), a is the coefficient on the constant term, B is the coefficient(s) on the independent variable(s), X is the independent variable(s), and e is the error term.

Use of the LP model generally gives you the correct answers in terms of the sign and significance level of the coefficients. The predicted probabilities from the model are usually where we run into trouble. There are three problems with using the LP model:

1. The error terms are heteroskedastic (heteroskedasticity occurs when the variance of the dependent variable differs across values of the independent variables): var(e) = p(1-p), where p is the probability that the event occurs. Since p depends on X, the classical regression assumption that the error term does not depend on the Xs is violated.
2. e is not normally distributed because Y takes on only two values, violating another classical regression assumption.
3. The predicted probabilities can be greater than 1 or less than 0, which can be a problem if the predicted values are used in a subsequent analysis. Some people try to solve this by setting probabilities greater than 1 (or less than 0) equal to 1 (or 0). This amounts to treating a high probability of the event (or nonevent) occurring as a sure thing.

The "logit" model solves these problems:

ln[p/(1-p)] = a + BX

where ln is the natural logarithm (the logarithm to base exp, where exp = 2.71828...), p is the probability that the event Y occurs, p(Y=1); p/(1-p) is the "odds ratio"; ln[p/(1-p)] is the log odds ratio, or "logit"; and all other components of the model are the same.

The logistic regression model is simply a non-linear transformation of the linear regression. The "logistic" distribution is an S-shaped distribution function which is similar to the standard normal distribution (which results in a probit regression model) but easier to work with in most applications (the probabilities are easier to calculate). The logistic distribution constrains the estimated probabilities to lie between 0 and 1. For instance, the estimated probability is:

p = 1/[1 + exp(-a - BX)]

With this functional form: if a + BX = 0, then p = .50; as a + BX gets very large, p approaches 1; as a + BX gets very small (very negative), p approaches 0.

The estimated coefficients must be interpreted with care. Instead of the slope coefficient B being the rate of change in Y (the dependent variable) as X changes (as in the LP model or OLS regression), the slope coefficient is now interpreted as the rate of change in the "log odds" as X changes. This explanation is not very intuitive. It is possible to compute the more intuitive "marginal effect" of a continuous independent variable on the probability. The marginal effect is B*p*(1-p), which varies with X through p (a small numerical sketch follows below).
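The original handout works with SPSS, but the two formulas above are easy to check numerically. The short Python sketch below is purely illustrative: the coefficient values a = -1.5 and B = 0.8 are made up, not estimates from any data set in the post.

import math

# Hypothetical logit coefficients (invented for illustration, not estimated from data):
a, B = -1.5, 0.8

def predicted_probability(x):
    # p(Y=1) = 1 / (1 + exp(-a - B*x)); always lies between 0 and 1.
    return 1.0 / (1.0 + math.exp(-a - B * x))

def marginal_effect(x):
    # Rate of change of the probability with respect to X: B * p * (1 - p).
    p = predicted_probability(x)
    return B * p * (1 - p)

for x in [-5, 0, 1.875, 5]:
    print(f"x = {x}: p = {predicted_probability(x):.3f}, "
          f"marginal effect = {marginal_effect(x):.3f}")

At x = 1.875 the linear index a + BX is exactly zero, so the printed probability is .50, matching the rule above; at the extreme x values the probability approaches 0 or 1 and the marginal effect shrinks toward zero.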
An interpretation of the logit coefficient which is usually more intuitive (especially for dummy independent variables) is the "odds ratio": exp(B) is the effect of the independent variable on the odds ratio [the odds ratio is the probability of the event divided by the probability of the nonevent]. For example, if exp(B3) = 2, then a one-unit change in X3 doubles the odds of the event, making it twice as likely (.67/.33) to occur. An odds ratio equal to 1 means that a change in the independent variable does not change the odds of the event. Negative coefficients lead to odds ratios less than one: if exp(B2) = .67, then a one-unit change in X2 leads to the event being less likely (.40/.60) to occur. Note that odds ratios for continuous independent variables tend to be close to one; this does NOT suggest that the coefficients are insignificant. Use the Wald statistic (see below) to test for statistical significance.

[For those of you who just NEED to know...] Maximum likelihood estimation (MLE) is a statistical method for estimating the coefficients of a model. MLE is usually used as an alternative to non-linear least squares for nonlinear equations. The likelihood function (L) measures the probability of observing the particular set of dependent variable values (p1, p2, ..., pn) that occur in the sample. It is written as the probability of the product of the dependent variables:

L = Prob(p1 * p2 * ... * pn)

The higher the likelihood function, the higher the probability of observing the p's in the sample. MLE involves finding the coefficients (a, B) that make the log of the likelihood function (LL < 0) as large as possible, or -2 times the log of the likelihood function (-2LL) as small as possible. The maximum likelihood estimates solve the condition that [Y - p(Y=1)]*Xi, summed over all observations, equals zero (or something like that). A minimal numerical sketch of this estimation is given below.
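To make the MLE description concrete, here is a small Python sketch. It assumes NumPy and SciPy are available, generates a synthetic data set (the "true" coefficients a = -0.5 and B = 0.7 are invented for illustration), maximizes the log likelihood by minimizing its negative, and then reports exp(B) as the estimated odds ratio. This is not the SPSS procedure used in the post, just a way to see the same calculation.

import numpy as np
from scipy.optimize import minimize

# Synthetic example data (invented for illustration -- not from the post):
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.7 * x))))  # "true" a = -0.5, B = 0.7

def neg_log_likelihood(params):
    a, B = params
    p = 1 / (1 + np.exp(-(a + B * x)))
    # LL = sum[ Y*ln(p) + (1 - Y)*ln(1 - p) ]; minimizing -LL maximizes LL.
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

a_hat, B_hat = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x
print(f"a = {a_hat:.3f}, B = {B_hat:.3f}, odds ratio exp(B) = {np.exp(B_hat):.3f}")

With 500 observations the recovered coefficients should land reasonably close to the invented "true" values (sampling noise aside), and exp(B) gives the multiplicative change in the odds of the event for a one-unit change in x.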


Testing the hypothesis that a coefficient on an independent variable is significantly different from zero is similar to OLS models. The Wald statistic for the B coefficient is:

Wald = [B / s.e.(B)]^2

which is distributed chi-square with 1 degree of freedom. The Wald is simply the square of the (asymptotic) t-statistic (a small worked example follows below). The probability of a YES response from the data above was estimated with the logistic regression procedure in SPSS (click on "Statistics," "Regression," and "Logistic").
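As a worked example (with invented numbers, not taken from the SPSS run mentioned above): suppose a coefficient is estimated as B = 0.70 with a standard error of 0.21. The Wald statistic and its chi-square p-value can be computed as follows, assuming SciPy is available.

from scipy.stats import chi2

# Hypothetical coefficient estimate and its standard error (illustrative only):
B_hat, se_B = 0.70, 0.21

wald = (B_hat / se_B) ** 2        # square of the (asymptotic) t-statistic
p_value = chi2.sf(wald, df=1)     # upper-tail probability of a chi-square(1)

print(f"Wald = {wald:.2f}, p-value = {p_value:.4f}")

Here Wald = (0.70/0.21)^2, about 11.1, well above the 3.84 critical value for a chi-square with 1 degree of freedom at the 5% level, so this coefficient would be judged statistically significant.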
