python logging in AWS Fargate, datetime duplicated

Question:

I’m trying to use the Python logging module in AWS Fargate. The same application should also work locally, so I’d like to use a custom logger for local runs while keeping the CloudWatch logs intact.
This is what I’m doing:

if logging.getLogger().hasHandlers():
    log = logging.getLogger()
    log.setLevel(logging.INFO)
else:
    from logging.handlers import RotatingFileHandler
    log = logging.getLogger('sm')
    log.root.setLevel(logging.INFO)
    ...

But I get this in cloudwatch:

2023-02-08T13:06:27.317+01:00   08/02/2023 12:06 - sm - INFO - Starting

And this locally:

08/02/2023 12:06 - sm - INFO - Starting

I thought Fargate was already defining a logger, but apparently the following check has no effect:

logging.getLogger().hasHandlers()

Ideally this should be the desired log in cloudwatch:

2023-02-08T13:06:27.317+01:00   sm - INFO - Starting
Asked By: rok


Answers:

You can use Python logging's basicConfig() to configure the root logger. The module-level debug, info, warning, error and critical functions call basicConfig() automatically if the root logger has no handlers defined.

logging.basicConfig(filename='test.log', format='%(filename)s: %(message)s',
                    level=logging.DEBUG)

Set the logging format to include the required details as format arguments:

logging.basicConfig(format='%(asctime)s   %(name)s - %(levelname)s - %(message)s', level=logging.INFO)

Use this to format logs in CloudWatch. One Stack Overflow answer with a detailed explanation can be found here.

Answered By: Bijendra

Fargate just runs Docker containers. It doesn’t do any setup of the Python code that happens to be running in that container; it doesn’t even know or care that you are running Python code.

Anything written to STDOUT/STDERR by the container’s primary process gets sent to CloudWatch Logs, so to be compatible with ECS CloudWatch Logs, just make sure you write logs to the console in the format you want.

Answered By: Mark B