SVC MultiClass Classification OVO decision function explanation
Question:
I am trying to understand how the decision_function value should be interpreted in a multi-class classification scenario using the One-vs-One approach. I have created 2D sample data with 100 samples for each of 4 classes, stored in the X and y variables. Here is the code:
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# 4 classes - make 4 separate blobs of 100 points each
d1, o1 = make_blobs(n_samples=100, n_features=2, centers=1, random_state=0, cluster_std=0.5)
d2, o2 = make_blobs(n_samples=100, n_features=2, centers=1, cluster_std=0.5)
d3, o3 = make_blobs(n_samples=100, n_features=2, centers=1, cluster_std=0.5)
d4, o4 = make_blobs(n_samples=100, n_features=2, centers=1, cluster_std=0.5)
X = np.vstack((d1,d2,d3,d4))
y = np.hstack((np.repeat(0,100), np.repeat(1,100), np.repeat(2,100), np.repeat(3,100))).T
print('0 - Red, 1 - Green, 2 - Blue, 3 - Yellow')
cols = np.hstack((np.repeat('r',100), np.repeat('g',100), np.repeat('b',100), np.repeat('y',100))).T
svm_ovr = SVC(kernel='linear', gamma='auto', decision_function_shape='ovr')
svm_ovr.fit(X, y)
svm_ovo = SVC(kernel='linear', gamma='auto', decision_function_shape='ovo')
svm_ovo.fit(X, y)
print('OVR Configuration Costs for 4 Class Classification Data:')
print('Cost: ' + str(svm_ovr.decision_function([[2,2]])))
print('Prediction: ' + str(svm_ovr.predict([[2,2]])))
print('No. Support Vectors: ' + str(svm_ovr.n_support_))
print('OVO Configuration Costs for 4 Class Classification Data:')
print('Cost: ' + str(svm_ovo.decision_function([[2,2]])))
print('Prediction: ' + str(svm_ovo.predict([[2,2]])))
print('No. Support Vectors: ' + str(svm_ovo.n_support_))
The output of the snippet is:
OVR Configuration Costs for 4 Class Classification Data:
Cost: [[ 3.23387565 0.77664387 -0.17878109 2.15179802]]
Prediction: [0]
No. Support Vectors: [2 4 1 3]
OVO Configuration Costs for 4 Class Classification Data:
Cost: [[ 0.68740472 0.77724567 0.88685872 0.14910583 -1.49263233 -0.23041644]]
Prediction: [0]
I am guessing that in the OVR case the highest cost value of 3.23 comes from the 0-vs-Rest model, which suggests the prediction should be 0.
Can you please explain how SVC predicted class 0 for the test point, based on the cost values of the 6 models in the OVO case?
Answers:
In the OVO scheme you build a binary classifier for every possible pair of classes, which results in nC2 = n(n-1)/2 models being built, where n is the total number of classes; n is 4 in your case, so you build 6 models.
In OVO the models are built in lexicographic order of the class labels (here the classes are numbered 1-4, corresponding to your 0-indexed labels 0-3), i.e.
Model 1 = Class 1 Vs Class 2
Model 2 = Class 1 Vs Class 3
Model 3 = Class 1 Vs Class 4
Model 4 = Class 2 Vs Class 3
Model 5 = Class 2 Vs Class 4
Model 6 = Class 3 Vs Class 4
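The pair ordering above can be reproduced with itertools.combinations over the sorted class labels (a sketch; it matches the order in which SVC lays out the OVO decision values):

```python
from itertools import combinations

# OVO trains one binary model per unordered pair of classes,
# enumerated in lexicographic order of the sorted class labels.
classes = [0, 1, 2, 3]  # 0-indexed labels, as in the question's y
pairs = list(combinations(classes, 2))
print(pairs)  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# Each tuple (i, j) is one model: a positive decision value votes
# for class i, a negative value votes for class j.
```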
So the decision function is a 6-dimensional array. Each element is the signed distance of the point from the separating hyperplane of the corresponding model; a positive value means the point lies on the side of the first class of the pair, and a negative value means it lies on the side of the second class:
Cost: [[ 0.68740472 0.77724567 0.88685872 0.14910583 -1.49263233 -0.23041644]]
Using the decision-function array, you can read off a prediction for each model from the sign of its value. So your predictions go like this:
Model 1 = Class 1
Model 2 = Class 1
Model 3 = Class 1
Model 4 = Class 2
Model 5 = Class 4
Model 6 = Class 4
Now you just take the majority vote of the models and return it as the prediction, which turns out to be Class 1 in this case. Class 1 here corresponds to label 0 in your 0-indexed y, which matches Prediction: [0].
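The vote count can be reproduced directly from the six decision values in your output (a sketch; it ignores any tie-breaking scikit-learn may apply, which doesn't matter here since there is no tie):

```python
from itertools import combinations
import numpy as np

# Decision values from the OVO output for the test point [2, 2]
dec = [0.68740472, 0.77724567, 0.88685872, 0.14910583, -1.49263233, -0.23041644]

votes = np.zeros(4, dtype=int)
for (i, j), d in zip(combinations(range(4), 2), dec):
    # positive value -> first class of the pair, negative -> second
    votes[i if d > 0 else j] += 1

print(votes)             # [3 1 0 2]
print(np.argmax(votes))  # 0, matching Prediction: [0]
```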