Logistic regression vs support vector machines (SVM)

Both are kinds of models used for classification problems. They operate differently, as outlined below.

Logistic Regression
- Plot your independent variable X on the X axis.
- For binary classification, plot the positive data points at P = 1.0 and the negative data points at P = 0.0. (Multinomial logistic regression also exists for more than two classes.)
- Tune the parameters so the logistic curve fits the data, using the logit function and the logistic-regression cost function (cross-entropy).
- Given a new X, evaluate P(X) with the fitted logistic function. If P > 0.5, predict Y = 1; otherwise predict Y = 0.

Support Vector Machine
- Plot your (x1, x2) data points and label the class each belongs to.
- Hopefully there is some boundary between the classes of data points; otherwise you may be able to use something like a kernel method to make the classes more separable.
- The aim of SVM is to fit a hyperplane based on the data points at the edge of each class, the "support vectors".
- Using Lagrangian optimization, parameters for a hyperplane (a line in this 2-D case) are found that best separate the two classes. A margin term tries to maximize the space between the hyperplane and the two classes.
- A test data point is evaluated against this hyperplane; its predicted class depends on which side of the hyperplane it falls on.

Key Differences
- Logistic regression fits the data points as if they lie along a continuous function. This isn't always a good match for binary classification, so the model may have trouble near P = 0.5.
- SVM fits a function (a hyperplane) that attempts to separate two classes of data in any number of dimensions. SVM can have difficulty when the classes are not separable, or when there is not enough margin to fit an (n_dimensions - 1)-dimensional hyperplane between the two classes.
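The logistic-regression steps above can be sketched in a few lines of numpy. This is a minimal illustration, not a library implementation: it fits the sigmoid P(x) = 1 / (1 + exp(-(w*x + b))) by gradient descent on the cross-entropy cost, then applies the P > 0.5 rule. The toy 1-D data and the names `w`, `b`, `predict` are my own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: negatives near x = -2 (P = 0.0), positives near x = +2 (P = 1.0)
x = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid of the logit w*x + b
    # Gradients of the mean cross-entropy cost with respect to w and b
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

def predict(x_new):
    p = 1.0 / (1.0 + np.exp(-(w * x_new + b)))
    return int(p > 0.5)  # Y = 1 if P > 0.5, otherwise Y = 0
```

In practice you would reach for a tested implementation such as scikit-learn's `LogisticRegression` rather than hand-rolling the loop, but the fitted curve and the 0.5 threshold are the same idea.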
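The SVM steps can be sketched similarly. A caveat on the method: production solvers work on the Lagrangian dual (e.g. SMO), as the answer describes; this sketch instead minimizes the equivalent primal objective, hinge loss plus an L2 penalty, by subgradient descent, which still yields a max-margin separating line for the toy (x1, x2) data. The constants `C`, `lr` and the helper `predict` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D data: one class near (-2, -2), the other near (2, 2), labels in {-1, +1}
X = np.concatenate([rng.normal([-2, -2], 0.4, (50, 2)),
                    rng.normal([2, 2], 0.4, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
C, lr = 1.0, 0.01
n = len(y)
for _ in range(3000):
    margins = y * (X @ w + b)
    viol = margins < 1  # points inside the margin (would-be support vectors)
    # Subgradient of 0.5*||w||^2 + C * mean(hinge loss)
    w -= lr * (w - C * (y[viol] @ X[viol]) / n)
    b -= lr * (-C * np.sum(y[viol]) / n)

def predict(point):
    # Class depends on which side of the hyperplane w . x + b = 0 the point falls
    return 1 if point @ w + b > 0 else -1
```

The L2 penalty on w is what maximizes the margin: a smaller ||w|| with the constraint y(w·x + b) >= 1 means a wider gap between the two classes.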