How can I stop the log output of lightgbm?
Question:
I would like to know how to stop lightgbm logging.
What kind of settings should I use to stop the log?
Also, is there a way to output only my own log messages while lightgbm's logging is stopped?
Answers:
I think you can disable lightgbm logging using verbose=-1 in both the Dataset constructor and the train function, as mentioned here
Follow these points.
- Use verbose=False in the fit method.
- Use verbose=-100 when you construct the classifier.
- Keep silent=True (the default).
Note that silent and the verbose argument of fit only apply to older versions of lightgbm; both were removed in 4.0 in favour of the verbosity parameter.
Updated answer for 2024 (lightgbm>=4.3.0), describing how to suppress all log output from lightgbm (the Python package for LightGBM).
Answer
If using estimators from lightgbm.sklearn:
- pass verbosity=-1 to the estimator constructor
If using lightgbm.train(), lightgbm.cv(), or lightgbm.Dataset():
- pass "verbosity": -1 through the params keyword argument
Both interfaces emit a UserWarning if they encounter a conflict between keyword arguments and configuration passed other ways (**kwargs for scikit-learn, the params dictionary for cv() / train()). To suppress those too:
- use warnings.filterwarnings("ignore", category=UserWarning)
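That filter is plain standard-library behaviour, so it can be checked in isolation. In this self-contained sketch the warning messages are illustrative stand-ins for LightGBM's conflict warning, not LightGBM's actual text:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")                          # show everything...
    warnings.filterwarnings("ignore", category=UserWarning)  # ...except UserWarning
    warnings.warn("illustrative conflict message", UserWarning)
    warnings.warn("something else", DeprecationWarning)

# only the DeprecationWarning was recorded; the UserWarning was suppressed
print([w.category.__name__ for w in caught])
```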
Examples
Using Python 3.11, lightgbm==4.3.0, and scikit-learn==1.4.1, with the following imports and setup for all examples:
import lightgbm as lgb
import warnings
from sklearn.datasets import make_regression
# create datasets
X, y = make_regression(
n_samples=10_000,
n_features=10,
)
This scikit-learn example with no explicit verbosity control…
model = lgb.LGBMRegressor(
num_boost_round=10,
num_leaves=31,
).fit(X, y)
…produces this output…
.../python3.11/site-packages/lightgbm/engine.py:172: UserWarning: Found `num_boost_round` in params. Will use it instead of argument
_log_warning(f"Found `{alias}` in params. Will use it instead of argument")
[LightGBM] [Warning] num_iterations is set=10, num_boost_round=10 will be ignored. Current value: num_iterations=10
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000106 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 2550
[LightGBM] [Info] Number of data points in the train set: 10000, number of used features: 10
[LightGBM] [Info] Start training from score -1.188321
…but with verbosity controls…
warnings.filterwarnings("ignore", category=UserWarning)
model = lgb.LGBMRegressor(
num_boost_round=10,
num_leaves=31,
verbosity=-1
).fit(X, y)
… does not produce any log output.
Similarly, using train()
with no verbosity controls …
model = lgb.train(
train_set=lgb.Dataset(X, label=y),
params={
"objective": "regression",
"num_iterations": 10,
"num_leaves": 31
}
)
… produces this output …
.../python3.11/site-packages/lightgbm/engine.py:172: UserWarning: Found `num_iterations` in params. Will use it instead of argument
_log_warning(f"Found `{alias}` in params. Will use it instead of argument")
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000136 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 2550
[LightGBM] [Info] Number of data points in the train set: 10000, number of used features: 10
[LightGBM] [Info] Start training from score -1.188321
…but with verbosity controls…
warnings.filterwarnings("ignore", category=UserWarning)
model = lgb.train(
train_set=lgb.Dataset(X, label=y, params={"verbosity": -1}),
params={
"objective": "regression",
"num_iterations": 10,
"num_leaves": 31,
"verbosity": -1
}
)
… does not produce any log output.
The documented constructor signatures of LGBMClassifier and LGBMRegressor contain no dedicated logging parameter such as verbose. However, any LightGBM parameter can be passed through **kwargs, so the output can be suppressed with verbosity=-1 (or its alias verbose). The full list of parameters accepted via **kwargs: https://lightgbm.readthedocs.io/en/latest/Parameters.html.
LGBMClassifier(verbosity=-1, random_state=42)
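On the question's last part (emitting only your own messages while LightGBM stays quiet): keep verbosity=-1 as shown above, and send your own messages through a dedicated standard-library logger. A minimal sketch; the my_training name is just an example, and lightgbm also provides a register_logger() function if you want LightGBM's remaining output routed through a logger you control:

```python
import logging

# a dedicated logger for your own messages; LightGBM's native output is
# assumed to be suppressed separately with verbosity=-1
logger = logging.getLogger("my_training")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)

logger.info("starting training")
# model = lgb.train({"objective": "regression", "verbosity": -1}, dtrain)
logger.info("training finished")
```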