LightGBM on Numerical+Categorical+Text Features >> TypeError: Unknown type of parameter:boosting_type, got:dict

Question:

I'm trying to train a LightGBM model on a dataset consisting of numerical, categorical, and textual data. However, during the training phase I get an error. My code is:

params = {
    'num_class': 5,
    'max_depth': 8,
    'num_leaves': 200,
    'learning_rate': 0.05,
    'n_estimators': 500
}

clf = LGBMClassifier(params)
data_processor = ColumnTransformer([
    ('numerical_processing', numerical_processor, numerical_features),
    ('categorical_processing', categorical_processor, categorical_features),
    ('text_processing_0', text_processor_1, text_features[0]),
    ('text_processing_1', text_processor_1, text_features[1])
])
pipeline = Pipeline([
    ('data_processing', data_processor),
    ('lgbm', clf)
])
pipeline.fit(X_train, y_train)

and the error is:

TypeError: Unknown type of parameter:boosting_type, got:dict

Here's a screenshot of my pipeline: (image omitted)

I basically have two textual features; both are some form of names, on which I'm mainly performing stemming.
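
For reference, the processors are along these lines (a simplified sketch, not my exact code; the real text processors also do the stemming on the name tokens):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# simplified stand-ins for the actual processors
numerical_processor = Pipeline([('impute', SimpleImputer(strategy='mean')),
                                ('scale', StandardScaler())])
categorical_processor = OneHotEncoder(handle_unknown='ignore')
text_processor_1 = CountVectorizer()  # the real one stems the tokens first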

Any pointers would be highly appreciated.

Asked By: redwolf_cr7


Answers:

You are setting up the classifier incorrectly; that is what gives you the error, and you can easily reproduce it before even getting to the pipeline:

import numpy as np
from lightgbm import LGBMClassifier

params = {
    'num_class': 5,
    'max_depth': 8,
    'num_leaves': 200,
    'learning_rate': 0.05,
    'n_estimators': 500
}

clf = LGBMClassifier(params)  # the dict is passed positionally, not unpacked
clf.fit(np.random.uniform(0, 1, (50, 2)), np.random.randint(0, 5, 50))

Gives you the same error:

TypeError: Unknown type of parameter:boosting_type, got:dict

You can set up the classifier like this:

clf = LGBMClassifier(**params)

Then, using an example, you can see that it runs:

import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer

numerical_processor = StandardScaler()
categorical_processor = OneHotEncoder()
numerical_features = ['A']
categorical_features = ['B']

data_processor = ColumnTransformer([
    ('numerical_processing', numerical_processor, numerical_features),
    ('categorical_processing', categorical_processor, categorical_features)
])

# 100 random rows: one numeric column and one categorical column
X_train = pd.DataFrame({'A': np.random.uniform(size=100),
                        'B': np.random.choice(['j', 'k'], 100)})

y_train = np.random.randint(0, 5, 100)

pipeline = Pipeline([('data_processing', data_processor), ('lgbm', clf)])

pipeline.fit(X_train, y_train)
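
The same fix carries over to the text columns from the question. Below is a minimal sketch, assuming TfidfVectorizer as a stand-in text processor (note that ColumnTransformer takes a single column name as a plain string, not a list, when the transformer expects 1-D text input):

from sklearn.feature_extraction.text import TfidfVectorizer

# two hypothetical name-like text columns
X_train['C'] = np.random.choice(['alice smith', 'bob jones'], 100)
X_train['D'] = np.random.choice(['acme corp', 'foo ltd'], 100)
text_features = ['C', 'D']

data_processor = ColumnTransformer([
    ('numerical_processing', numerical_processor, numerical_features),
    ('categorical_processing', categorical_processor, categorical_features),
    # a plain string (not a list) so each vectorizer receives 1-D text
    ('text_processing_0', TfidfVectorizer(), text_features[0]),
    ('text_processing_1', TfidfVectorizer(), text_features[1])
])

pipeline = Pipeline([('data_processing', data_processor), ('lgbm', clf)])
pipeline.fit(X_train, y_train)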
Answered By: StupidWolf

The error comes from how the parameters are passed: the whole dict is handed over as the first positional argument, so it gets bound to boosting_type, the first parameter of __init__, which expects a string rather than a dict, as you can see from the official documentation:
https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html

Note: here I have given the example of LGBMRegressor, but the same holds true for LGBMClassifier.
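
To make this concrete, here is a minimal sketch of where the dict ends up in each case (the attributes follow the usual scikit-learn convention of storing __init__ arguments as-is):

from lightgbm import LGBMClassifier

params = {'num_class': 5, 'max_depth': 8}

# positional: the dict is bound to boosting_type, the first __init__ parameter
clf = LGBMClassifier(params)
print(clf.boosting_type)  # {'num_class': 5, 'max_depth': 8} -> TypeError at fit time

# unpacked: every key becomes its own keyword argument
clf = LGBMClassifier(**params)
print(clf.boosting_type)  # 'gbdt' (the default)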

So, to override the value of boosting_type, I set it again through the __init__ function:

model = LGBMRegressor(param_grid, metric='rmse')

model.__init__(boosting_type='gbdt')

This way, the value of the "boosting_type" attribute gets overridden. If you wish to check my whole code, here it is:

import numpy as np
import lightgbm
import optuna
from optuna.integration import LightGBMPruningCallback
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit
from lightgbm import LGBMRegressor

def objective(trial, X, y):
    param_grid = {
        "n_estimators": trial.suggest_int("n_estimators", 500, 1000),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3),
        "device": trial.suggest_categorical("device", ["gpu"]),
        "tweedie_variance_power": trial.suggest_float("tweedie_variance_power", 1.0, 2.0),
        "num_leaves": trial.suggest_int("num_leaves", 20, 1000, step=20),
        "max_depth": trial.suggest_int("max_depth", 3, 12),
        "min_child_samples": trial.suggest_int("min_data_in_leaf", 1000, 10000, step=100),
        "min_split_gain": trial.suggest_float("min_split_gain", 0, 15),
    }

    cv = TimeSeriesSplit(n_splits=3)

    cv_scores = np.empty(3)
    for idx, (train_idx, test_idx) in enumerate(cv.split(X, y)):
        x_test, y_test = X.iloc[test_idx], y.iloc[test_idx]

        model = LGBMRegressor(param_grid, metric='rmse')

        # re-run __init__ so boosting_type is the string 'gbdt' instead of the dict
        model.__init__(boosting_type='gbdt')

        model.fit(X=X.iloc[train_idx], y=y.iloc[train_idx],
                  eval_set=[(x_test, y_test)], eval_metric=['rmse'],
                  callbacks=[lightgbm.early_stopping(15),
                             LightGBMPruningCallback(trial, "rmse")])

        preds = model.predict(x_test)
        res = np.sqrt(mean_squared_error(preds, y_test))
        print('Test RMSE', res)
        cv_scores[idx] = res

    return np.mean(cv_scores)
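
For completeness, a minimal sketch of how this objective could be run (assuming X and y are the training DataFrame and Series; the trial count is arbitrary):

study = optuna.create_study(direction="minimize")
study.optimize(lambda trial: objective(trial, X, y), n_trials=20)

print("Best RMSE:", study.best_value)
print("Best params:", study.best_params)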
Answered By: Joseph J