How to log Python code memory consumption?
Question:
Hi, I am running a Docker container with a Python application inside. The code performs some computing tasks and I would like to monitor its memory consumption using logs (so I can see how different parts of the calculations perform). I do not need any charts or continuous monitoring – I am okay with the inaccuracy of this approach.
How should I do it without losing performance?
Using external (AWS) tools to monitor used memory is not suitable, because I often debug using logs and thus it’s very difficult to match logs with performance charts. Also, the resolution is too low.
Setup
- using python:3.10 as the base Docker image
- using Python 3.10
- running in AWS ECS Fargate (but results are similar when testing locally)
- running the calculation method using asyncio
I have read some articles about tracemalloc, but reportedly it degrades performance considerably (around 30 %). The article.
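For completeness, if the ~30 % overhead is acceptable for a one-off profiling run, tracemalloc usage is straightforward (a minimal sketch; the list comprehension is just a stand-in workload):

```python
import tracemalloc

tracemalloc.start()

# Stand-in for the real calculation
data = [i ** 2 for i in range(100_000)]

# Returns (current, peak) traced allocation sizes in bytes
current, peak = tracemalloc.get_traced_memory()
print(f"current={current / 1e6:.2f} MB, peak={peak / 1e6:.2f} MB")
tracemalloc.stop()
```

Note that tracemalloc only counts allocations made by the Python allocator after start(), not the process's total RSS.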
Tried methods
I have tried the following method; however, it reports the same memory usage every time it is called, so I doubt it works the desired way.
Using resource
import asyncio
import resource

# Local imports
from utils import logger

def get_usage():
    # Note: ru_maxrss is the *peak* RSS so far (reported in kilobytes on Linux),
    # which is why repeated calls keep logging the same value
    usage = round(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1000, 4)
    logger.info(f"Current memory usage is: {usage} MB")
    return usage

# Do calculation - EXAMPLE
asyncio.run(
    some_method_to_do_calculations()
)
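A likely reason the snippet above never changes is that ru_maxrss is a high-water mark. For the *current* RSS without extra dependencies, reading /proc/self/status works on Linux (a sketch; get_current_rss_mb is an illustrative name and the VmRSS field is Linux-specific):

```python
def get_current_rss_mb():
    """Return this process's current resident set size in MB (Linux only)."""
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                # Line format: "VmRSS:   123456 kB"
                return int(line.split()[1]) / 1000
    return None

print(f"Current memory usage is: {get_current_rss_mb()} MB")
```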
Using psutil
(in testing)
import psutil

# Local imports
from utils import logger

def get_usage():
    # psutil.virtual_memory() reports system-wide figures, not this process
    mem = psutil.virtual_memory()
    total = round(mem.total / 1000 / 1000, 4)
    used = round(mem.used / 1000 / 1000, 4)
    pct = round(used / total * 100, 1)
    logger.info(f"Current memory usage is: {used} / {total} MB ({pct} %)")
    return True
Answers:
It seems like using psutil fits my needs pretty well. Thanks to all commenters!
Example
import psutil

# Local imports
from utils import logger

def get_usage():
    # System-wide memory of the container/host, converted to MB
    mem = psutil.virtual_memory()
    total = round(mem.total / 1000 / 1000, 4)
    used = round(mem.used / 1000 / 1000, 4)
    pct = round(used / total * 100, 1)
    logger.info(f"Current memory usage is: {used} / {total} MB ({pct} %)")
    return True
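Worth noting: psutil.virtual_memory() is system-wide. If you want the Python process itself (which maps more directly to the code being debugged), psutil also exposes per-process counters (a sketch; the function name is illustrative):

```python
import os

import psutil

def get_process_rss_mb():
    """Return the current resident set size of this Python process in MB."""
    rss_bytes = psutil.Process(os.getpid()).memory_info().rss
    return round(rss_bytes / 1000 / 1000, 4)

print(f"Process RSS: {get_process_rss_mb()} MB")
```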
Fargate uses cgroups for memory limiting.
As mentioned here and here, the CPU/memory values provided by /proc refer to the host, not the container. As a result, userspace tools such as top and free report misleading values.
You can try with something like:
memory_limit = None
memory_usage = None
with open('/sys/fs/cgroup/memory/memory.stat', 'r') as f:
    for line in f:
        # Lines look like: "total_rss 123456"
        if line.startswith('hierarchical_memory_limit '):
            memory_limit = int(line.split()[1])
        if line.startswith('total_rss '):
            memory_usage = int(line.split()[1])
percentage = memory_usage * 100 / memory_limit
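One caveat: the paths above are cgroup v1. Hosts and images on cgroup v2 expose memory.current and memory.max instead, so a version-aware sketch might look like this (paths assume the standard /sys/fs/cgroup mount; get_cgroup_memory is an illustrative name and returns None where neither layout is present):

```python
import os

def get_cgroup_memory():
    """Return (usage_bytes, limit_bytes_or_None), or None if no cgroup files exist."""
    # cgroup v2 layout
    if os.path.exists('/sys/fs/cgroup/memory.current'):
        with open('/sys/fs/cgroup/memory.current') as f:
            usage = int(f.read())
        with open('/sys/fs/cgroup/memory.max') as f:
            raw = f.read().strip()
        limit = None if raw == 'max' else int(raw)  # 'max' means unlimited
        return usage, limit
    # cgroup v1 layout
    v1_path = '/sys/fs/cgroup/memory/memory.stat'
    if os.path.exists(v1_path):
        usage = limit = None
        with open(v1_path) as f:
            for line in f:
                key, _, value = line.partition(' ')
                if key == 'total_rss':
                    usage = int(value)
                elif key == 'hierarchical_memory_limit':
                    limit = int(value)
        return usage, limit
    return None

print(get_cgroup_memory())
```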