Celery difference between concurrency, workers and autoscaling

Question:

In my /etc/defaults/celeryd config file, I’ve set:

CELERYD_NODES="agent1 agent2 agent3 agent4 agent5 agent6 agent7 agent8"
CELERYD_OPTS="--autoscale=10,3 --concurrency=5"

I understand that the daemon spawns 8 Celery workers, but I'm not fully sure what autoscale and concurrency do together. I thought that concurrency was a way to specify the maximum number of threads that a worker can use, and that autoscale was a way for the worker to scale the number of child workers up and down as necessary.

The tasks have a largish payload (some 20-50 kB each) and there are roughly 2-3 million of them, but each task runs in less than a second. I'm seeing memory usage spike because the broker distributes the tasks to every worker, replicating the payload multiple times.

I think the issue is in the config and that the combination of workers + concurrency + autoscaling is excessive, and I would like to get a better understanding of what these three options do.

Asked By: Joseph


Answers:

Let’s distinguish between workers and worker processes. You spawn a Celery worker, and this then spawns a number of worker processes (depending on options like --concurrency and --autoscale; the default is to spawn as many processes as there are cores on the machine). There is no point in running more than one worker on a particular machine unless you want to do routing.

I would suggest running only 1 worker per machine with the default number of processes. This will reduce memory usage by eliminating the duplication of data between workers.
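A minimal sketch of what that could look like in the same config file (the node name is illustrative; with no --concurrency or --autoscale option, the pool defaults to one process per CPU core):

CELERYD_NODES="worker1"
CELERYD_OPTS=""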

If you still have memory issues then save the data to a store and pass only an id to the workers.
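A rough sketch of that id-passing pattern, assuming Redis is used as the store (the app name, broker URL, and key handling here are illustrative assumptions, not part of the original setup):

import json

import redis
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")
store = redis.Redis(host="localhost", port=6379, db=1)

@app.task
def process_payload(payload_id):
    # The worker fetches the large payload by id, so the broker message stays tiny.
    payload = json.loads(store.get(payload_id))
    ...  # do the actual work on payload here
    store.delete(payload_id)

def enqueue(payload_id, payload):
    # Put the large payload in the store and send only its id through the broker.
    store.set(payload_id, json.dumps(payload))
    process_payload.delay(payload_id)

With this arrangement the large payloads live in the store rather than in broker messages, so only short ids are duplicated across the workers' queues.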

Answered By: scytale

When using --autoscale, the number of processes is set dynamically between max/min values, which lets the worker scale according to load; when using --concurrency, the number of processes is fixed. So using these two together makes no sense.

Celery's --autoscale is responsible for growing and shrinking the pool dynamically based on load: it adds processes when there is work to do and removes them when the workload is low. So, for example, --autoscale=10,3 would give you a maximum of 10 processes and a minimum of 3 processes.
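As a rough command-line equivalent of the init-script setting above (the app name proj is just a placeholder):

celery -A proj worker --autoscale=10,3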

As for --concurrency, Celery by default uses multiprocessing to perform concurrent execution of tasks. The number of worker processes/threads can be changed using the --concurrency argument and defaults to the number of available CPUs if not set. So, for example, --concurrency=5 would use 5 processes, meaning 5 tasks can run concurrently.
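And the equivalent with a fixed pool size (again, proj is a placeholder app name):

celery -A proj worker --concurrency=5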

Answered By: WMRamadan