How to get both MSE and R2 from a sklearn GridSearchCV?

Question:

I can use GridSearchCV on a pipeline and specify the scoring to be either 'MSE' or 'R2'. I can then access gridsearchcv.best_score_ to recover the one I specified. How do I also get the other score for the solution found by GridSearchCV?

If I run GridSearchCV again with the other scoring parameter, it might not find the same solution, and so the score it reports might not correspond to the same model as the one for which we have the first value.

Maybe I can extract the parameters and supply them to a new pipeline, and then run cross_val_score with the new pipeline? Is there a better way? Thanks.

Asked By: rhombidodecahedron


Answers:

This is unfortunately not straightforward right now with GridSearchCV, or with any built-in sklearn method or object.

Although there is talk of having multiple scorer outputs, this feature will probably not come soon.

So you will have to do it yourself; there are several ways:

1) You can take a look at the code of cross_val_score and perform the cross-validation loop yourself, calling each scorer of interest once a fold is done (see the sketch just after this list).

2) [not recommended] You can also build your own scorer out of the scorers you are interested in and have it output the scores as an array. You will then run into the problem explained here:
sklearn – Cross validation with multiple scores

3) Since you can code your own scorers, you could write a scorer that outputs one of your scores (the one by which you want GridSearchCV to make decisions) and stores all the other scores you are interested in somewhere separate, such as a static/global variable or even a file.
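
For option 1, a minimal sketch of such a manual loop might look like this (Ridge, KFold and random data are used purely for illustration):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error

X = np.random.randn(20, 10)
y = np.random.randn(20)

r2s, mses = [], []
for train_idx, test_idx in KFold(n_splits=5).split(X):
    # Fit on the training fold, then evaluate both metrics on the test fold.
    model = Ridge().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    r2s.append(r2_score(y[test_idx], preds))
    mses.append(mean_squared_error(y[test_idx], preds))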

Number 3 seems the least tedious and most promising:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import cross_val_score

secret_mses = []

def r2_secret_mse(estimator, X_test, y_test):
    # Score with R2 (what the search optimizes) and stash the MSE on the side.
    predictions = estimator.predict(X_test)
    secret_mses.append(mean_squared_error(y_test, predictions))
    return r2_score(y_test, predictions)

X = np.random.randn(20, 10)
y = np.random.randn(20)

r2_scores = cross_val_score(Ridge(), X, y, scoring=r2_secret_mse, cv=5)

You will find the R2 scores in r2_scores and the corresponding MSEs in secret_mses.
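
With the default sequential execution the two lists line up fold by fold, so a quick inspection could look like:

for fold, (r2, mse) in enumerate(zip(r2_scores, secret_mses)):
    print(f"fold {fold}: R2={r2:.3f}  MSE={mse:.3f}")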

Note that this can become messy if you run the evaluation in parallel. In that case you would need to write the scores to a specific place, for example in a memmap.

Answered By: eickenberg

Added in Scikit-learn 0.19

Multi-metric scoring has been introduced in GridSearchCV; an extensive example can be found in the scikit-learn documentation.

When performing multi-metric scoring, you need to provide two extra arguments:

1. The metrics you want to use for scoring. For evaluating multiple metrics, either give a list of (unique) strings or a dict with names as keys and callables as values (see the sketch after this list).

2. Since you can't maximize all metrics at once, a single metric (or a custom combination of metrics) to optimize for. This is provided as the refit argument. For multiple metric evaluation, it needs to be a string denoting the scorer that is used to find the best parameters for refitting the estimator at the end.
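
As a sketch of the dict form (the key names 'r2' and 'mse' are just illustrative labels):

from sklearn.metrics import make_scorer, mean_squared_error, r2_score

# Metric names as keys, scorer callables as values.
# greater_is_better=False negates the MSE so that higher is still better.
scoring = {
    'r2': make_scorer(r2_score),
    'mse': make_scorer(mean_squared_error, greater_is_better=False),
}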

Where there are considerations other than maximum score in choosing a best estimator, refit can be set to a function which returns the selected best_index_ given cv_results_.
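
As a rough sketch of that callable form (refit_strategy is a hypothetical name; this particular rule simply mimics refit='r2'):

import numpy as np

def refit_strategy(cv_results):
    # Must return the integer index of the chosen candidate in cv_results.
    # Here: the candidate with the highest mean R2 across folds.
    return int(np.argmax(cv_results['mean_test_r2']))

# Pass refit=refit_strategy to GridSearchCV instead of refit='r2'.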

In your case, you would want to use something along these lines (a DecisionTreeRegressor is used here, since MSE and R2 are regression metrics):

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

search = GridSearchCV(DecisionTreeRegressor(random_state=42),
                      param_grid={'min_samples_split': range(2, 403, 10)},
                      scoring=['neg_mean_squared_error', 'r2'],
                      cv=5, refit='r2')
search.fit(X, y)

You can then analyse the detailed performance with:

search.cv_results_
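
For example, assuming the scorer names used above, both metrics for the candidate selected by refit='r2' can be read off via best_index_:

best = search.best_index_
mean_r2 = search.cv_results_['mean_test_r2'][best]
mean_mse = -search.cv_results_['mean_test_neg_mean_squared_error'][best]  # undo the sign flip
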
Answered By: Ivo Merchiers