Converting LinearSVC's decision function to probabilities (scikit-learn, Python)

Question:

I use a linear SVM from scikit-learn (LinearSVC) for a binary classification problem. I understand that LinearSVC can give me the predicted labels and the decision scores, but I wanted probability estimates (confidence in the label). I want to continue using LinearSVC because of speed (as compared to sklearn.svm.SVC with a linear kernel). Is it reasonable to use a logistic function to convert the decision scores to probabilities?

import sklearn.svm as suppmach
# Fit model (note: penalty='l1' requires dual=False in LinearSVC):
svmmodel = suppmach.LinearSVC(penalty='l1', dual=False, C=1)
svmmodel.fit(x_train, y_train)
predicted_test = svmmodel.predict(x_test)
predicted_test_scores = svmmodel.decision_function(x_test)

I want to check if it makes sense to obtain probability estimates simply as [1 / (1 + exp(-x))], where x is the decision score.
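For concreteness, this is the transform I mean, applied to the scores from above (a sketch using scipy.special.expit, which computes 1 / (1 + exp(-x)) in a numerically stable way):

from scipy.special import expit

# Naive "probabilities": squash the raw decision scores through a sigmoid.
naive_proba = expit(predicted_test_scores)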

Alternatively, are there other classifiers I could use to do this efficiently?

Thanks.

Asked By: chet


Answers:

I took a look at the APIs in the sklearn.svm.* family. All of the models below, e.g.,

  • sklearn.svm.SVC
  • sklearn.svm.NuSVC
  • sklearn.svm.SVR
  • sklearn.svm.NuSVR

have a common interface that supplies a

probability: boolean, optional (default=False) 

parameter to the model. If this parameter is set to True, libsvm will train a probability transformation model on top of the SVM's outputs, based on the idea of Platt scaling. The form of the transformation is similar to a logistic function, as you pointed out, but two specific constants A and B are learned in a post-processing step. Also see this Stack Overflow post for more details.
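For example, with sklearn.svm.SVC you would enable it like this (a minimal sketch; X_train, y_train and X_test stand in for your own data):

from sklearn.svm import SVC

# probability=True makes libsvm fit the Platt-scaling sigmoid after training.
clf = SVC(kernel='linear', probability=True)
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_test)  # calibrated class probabilities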


I actually don’t know why this post-processing is not available for LinearSVC. Otherwise, you would just call predict_proba(X) to get the probability estimate.

Of course, if you just apply a naive logistic transform, it will not perform as well as a calibrated approach like Platt scaling. If you understand the underlying algorithm of Platt scaling, you can probably write your own or contribute to the scikit-learn SVM family. 🙂 Also feel free to use the above four SVM variations that support predict_proba.
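If you do want to roll your own, the core of Platt scaling is fitting a sigmoid P(y=1|f) = 1 / (1 + exp(A*f + B)) on held-out decision scores f. A rough sketch of that idea (not libsvm's exact procedure, which uses Platt's specialized Newton solver and target smoothing; X_train, y_train and X_test stand in for your data):

from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out part of the training data so the sigmoid is not fit on scores
# the SVM has already seen (that would bias the learned constants).
X_fit, X_cal, y_fit, y_cal = train_test_split(X_train, y_train, test_size=0.3)

svm = LinearSVC(C=1).fit(X_fit, y_fit)

# Fit a one-feature logistic regression on the held-out decision scores,
# which learns the sigmoid's slope and intercept (the A and B above).
calibrator = LogisticRegression()
calibrator.fit(svm.decision_function(X_cal).reshape(-1, 1), y_cal)

# Calibrated probabilities for new data.
proba = calibrator.predict_proba(svm.decision_function(X_test).reshape(-1, 1))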

Answered By: greeness

If you want speed, then just replace the SVM with sklearn.linear_model.LogisticRegression. That uses the exact same training algorithm as LinearSVC, but with log-loss instead of hinge loss.
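For example, the drop-in replacement would look something like this (a sketch; x_train and y_train are your training data, mirroring the snippet in the question):

from sklearn.linear_model import LogisticRegression

# liblinear solver, same as LinearSVC, but optimizing log-loss.
logreg = LogisticRegression(penalty='l1', C=1, solver='liblinear')
logreg.fit(x_train, y_train)
proba = logreg.predict_proba(x_test)  # proper probability estimates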

Using [1 / (1 + exp(-x))] will produce probabilities, in a formal sense (numbers between zero and one), but they won’t adhere to any justifiable probability model.

Answered By: Fred Foo

scikit-learn provides CalibratedClassifierCV, which can be used to solve this problem: it allows you to add probability output to LinearSVC or any other classifier that implements the decision_function method:

from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

svm = LinearSVC()
clf = CalibratedClassifierCV(svm)
clf.fit(X_train, y_train)
y_proba = clf.predict_proba(X_test)

The user guide has a nice section on that. By default CalibratedClassifierCV + LinearSVC will get you Platt scaling, but it also provides other options (an isotonic regression method), and it is not limited to SVM classifiers.
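For example, to use isotonic regression instead of the default sigmoid (Platt) calibration, pass method='isotonic' (a sketch; isotonic calibration needs a reasonable amount of data per fold to avoid overfitting):

from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

clf = CalibratedClassifierCV(LinearSVC(), method='isotonic', cv=5)
clf.fit(X_train, y_train)
y_proba = clf.predict_proba(X_test)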

Answered By: Mikhail Korobov

If what you really want is a measure of confidence rather than actual probabilities, you can use the method LinearSVC.decision_function(). See the documentation.

Answered By: Syncrossus

Just as an extension for binary classification with SVMs: you could also take a look at SGDClassifier, which fits a linear SVM via stochastic gradient descent by default. To estimate binary probabilities it uses the modified Huber loss, via

(clip(decision_function(X), -1, 1) + 1) / 2

An example would look like:

from sklearn.linear_model import SGDClassifier
svm = SGDClassifier(loss="modified_huber") 
svm.fit(X_train, y_train)
proba = svm.predict_proba(X_test)

Answered By: 3r1c