Python best practice in terms of logging
Question:
When using the logging module from Python, is it best practice to define a logger for each class?
Considering some things would be redundant, such as the log-file location, I was thinking of abstracting logging into its own class and importing an instance into each of my classes that requires logging. However, I'm not sure whether this is best practice.
Answers:
Best practice is to follow Python's rules for software (de)composition – the module is the unit of Python software, not the class. Hence, the recommended approach is to use
logger = logging.getLogger(__name__)
in each module, and to configure logging (using basicConfig() or dictConfig()) from the main script.
Loggers are singletons – there is no point in passing them around or storing them in instances of your classes.
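A minimal sketch of the per-module pattern described above (the class name and URL are hypothetical, chosen only for illustration):

```python
import logging

# One logger per module, named after the module itself.
logger = logging.getLogger(__name__)

class Downloader:
    """Hypothetical class used for illustration."""
    def fetch(self, url):
        # Classes simply use the module-level logger.
        logger.info("fetching %s", url)

# getLogger() returns the same object for the same name,
# so there is no need to pass loggers around or store them
# in instances.
assert logging.getLogger(__name__) is logger
```

Because the logger is looked up by name, any class in the module can log through it without holding a reference of its own.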
Use JSON or YAML logging configuration – since Python 2.7, the logging configuration can be loaded from a dict via logging.config.dictConfig(), which means you can load it from a JSON or YAML file.
YAML example:
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: simple
    filename: info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: simple
    filename: errors.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
loggers:
  my_module:
    level: ERROR
    handlers: [console]
    propagate: no
root:
  level: INFO
  handlers: [console, info_file_handler, error_file_handler]
Use structured logging. Two great tools for this:
- Eliot: logging that tells you why it happened. Most logging systems tell you what happened in your application, whereas Eliot also tells you why it happened. Eliot is a Python logging system that outputs causal chains of actions: actions can spawn other actions, and eventually they either succeed or fail. The resulting logs tell you the story of what your software did: what happened, and what caused it.
- Structlog: structlog makes logging in Python less painful and more powerful by adding structure to your log entries. Structured logging means that you don't write hard-to-parse and hard-to-keep-consistent prose in your logs; instead, you log events that happen in a context.
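The idea behind structured logging can be sketched with only the standard library, without either of those packages: render each record as a machine-parseable JSON event rather than free-form prose. The logger name, field names, and values below are hypothetical, chosen for illustration:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a JSON event instead of prose."""
    def format(self, record):
        event = {
            "event": record.getMessage(),
            "level": record.levelname,
            "logger": record.name,
        }
        # Context passed via logging's `extra` argument lands as
        # attributes on the record; copy the ones we care about.
        for key in ("user_id", "order_id"):
            if hasattr(record, key):
                event[key] = getattr(record, key)
        return json.dumps(event)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("shop")  # hypothetical logger name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits one JSON object per event, easy to parse and query later.
logger.info("order placed", extra={"user_id": 42, "order_id": "A-7"})
```

Libraries like structlog and Eliot go well beyond this sketch (bound context, causal action chains), but the output shape is the same: events with fields, not sentences.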
I’ve had very positive experiences with Eliot.