Can sklearn random forest directly handle categorical features?

Question:

Say I have a categorical feature, color, which takes the values

[‘red’, ‘blue’, ‘green’, ‘orange’],

and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn’t include any of them.

I’ve heard that there’s no way to do this, but I’d imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.

Asked By: hahdawg


Answers:

No, there isn’t. Somebody’s working on this and the patch might be merged into mainline some day, but right now there’s no support for categorical variables in scikit-learn except dummy (one-hot) encoding.

Answered By: Fred Foo

You have to convert the categorical variable into a series of dummy variables. Yes, I know it's annoying and seems unnecessary, but that is how sklearn works.
If you are using pandas, use pd.get_dummies; it works really well.
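For example, a minimal sketch of pd.get_dummies applied to the asker's color feature (the DataFrame here is a made-up toy example):

```python
import pandas as pd

# Toy frame with the color feature from the question.
df = pd.DataFrame({"color": ["red", "blue", "green", "orange"]})

# get_dummies expands the single column into one 0/1 column per category,
# named with the original column as a prefix (categories in sorted order).
dummies = pd.get_dummies(df, columns=["color"])
print(dummies.columns.tolist())
# ['color_blue', 'color_green', 'color_orange', 'color_red']
```

Note that this still produces four independent columns, so it does not address the asker's wish to have the forest treat them as one feature when sampling features at each split.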

Answered By: Hemanth Kondapalli

Most implementations of random forest (and many other machine learning algorithms) that accept categorical inputs either automate the encoding of the categorical features for you or use a method that becomes computationally intractable for large numbers of categories.

A notable exception is H2O. H2O has a very efficient method for handling categorical data directly, which often gives it an edge over tree-based methods that require one-hot encoding.

This article by Will McGinnis has a very good discussion of one-hot-encoding and alternatives.

This article by Nick Dingwall and Chris Potts has a very good discussion about categorical variables and tree based learners.

Answered By: denson

Maybe you can use the numbers 1–4 to replace the four colors, i.e., store the number rather than the color name in that column. The numeric column can then be used in the model.
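A minimal sketch of this mapping (the 1–4 assignment is arbitrary, which is exactly the caveat the asker raises: a tree may then split on e.g. color_code <= 2, implicitly grouping red with blue):

```python
import pandas as pd

# Toy frame with the color feature from the question.
df = pd.DataFrame({"color": ["red", "blue", "green", "orange", "red"]})

# Replace each color name with an arbitrary integer code.
mapping = {"red": 1, "blue": 2, "green": 3, "orange": 4}
df["color_code"] = df["color"].map(mapping)
print(df["color_code"].tolist())  # [1, 2, 3, 4, 1]
```

Because the codes impose an order the colors don't actually have, this is generally only safe for ordinal features.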

Answered By: user15483190

You can directly feed categorical variables to random forest using the approach below:

  1. First, convert the categories of the feature to numbers using sklearn's LabelEncoder.
  2. Second, convert the label-encoded feature's type to string (object).

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# Encode the categories as integers, then cast the codes back to strings.
df[col] = le.fit_transform(df[col]).astype('str')

The code above will solve your problem.

Answered By: Partha sarthy

No.
There are two types of categorical features:

  1. Ordinal: use OrdinalEncoder
  2. Nominal: use LabelEncoder or OneHotEncoder

Note: differences between LabelEncoder and OneHotEncoder:

  1. LabelEncoder: works on a single 1-D column => usually used to encode the label
    column (i.e., the target column)
  2. OneHotEncoder: works on multiple columns => can handle several features at one time
Answered By: frr0717