Tasks being repeated in Celery

Question:

After a couple of days, my Celery service starts repeating a task over and over indefinitely. This is somewhat difficult to reproduce, but it happens regularly, about once a week or more often depending on the volume of tasks being processed.

I would appreciate any tips on how to gather more data about this issue, since I don’t know how to trace it. When it occurs, restarting Celery solves it temporarily.

I have one Celery node running with 4 workers (version 3.1.23). The broker and result backend are both on Redis. I’m posting to a single queue only and I don’t use Celery beat.

The config in Django’s settings.py is:

BROKER_URL = 'redis://localhost:6380'
CELERY_RESULT_BACKEND = 'redis://localhost:6380'

Relevant part of the log:

[2016-05-28 10:37:21,957: INFO/MainProcess] Received task: painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647]
[2016-05-28 11:37:58,005: INFO/MainProcess] Received task: painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647]
[2016-05-28 13:37:59,147: INFO/MainProcess] Received task: painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647]
...
[2016-05-30 09:27:47,136: INFO/MainProcess] Task painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647] succeeded in 53.33468166703824s: None
[2016-05-30 09:43:08,317: INFO/MainProcess] Task painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647] succeeded in 466.0324719119817s: None
[2016-05-30 09:57:25,550: INFO/MainProcess] Task painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647] succeeded in 642.7634702899959s: None

Tasks are sent by user request with:

tasks.indicar_cliente.delay(indicacao_db.id)

Here’s the source code of the task and the Celery service configuration.

Why are tasks being received multiple times after the service has been running for a while? How can I get consistent behavior?

Asked By: rodorgas


Answers:

Solved by using the RabbitMQ broker instead of Redis.
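
For reference, switching brokers in the same Django settings would look roughly like this (a minimal sketch, assuming a default local RabbitMQ install; adjust credentials and vhost for your setup). Unlike Redis, RabbitMQ tracks unacknowledged messages per connection rather than with a visibility timeout, which is why the redelivery loop goes away:

# settings.py -- hypothetical change; 'guest:guest@localhost:5672//' is RabbitMQ's default
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
# The result backend can stay on Redis
CELERY_RESULT_BACKEND = 'redis://localhost:6380'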

Answered By: rodorgas

It might be a bit out of date, but I’ve faced the same problem and fixed it while keeping Redis. Long story short: Celery waits a set amount of time for a task to be acknowledged, and if that time expires it redelivers the task. This is called the visibility timeout.
The explanation from the docs:

If a task isn’t acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed. This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop. So you have to increase the visibility timeout to match the time of the longest ETA you’re planning to use. Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of ‘lost’ tasks in the event of a power failure or forcefully terminated workers.

Example of the option:
https://docs.celeryproject.org/en/stable/userguide/configuration.html#broker-transport-options

Details:
https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html#id1
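
With the old-style uppercase settings used in this question (Celery 3.1), raising the timeout would look something like this (a sketch; the 43200-second value is the example from the Celery docs and should be set to exceed your longest expected task runtime or ETA):

# settings.py -- raise the Redis visibility timeout (value in seconds)
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}  # 12 hours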

Answered By: Andrey Rusanov

I ran into an issue like this. Raising the Celery visibility timeout did not help.

It turned out that I was also running a Prometheus exporter that instantiated its own Celery object using the default visibility timeout, which canceled out the higher timeout I had set in my application.

If you have multiple Celery clients, whether they submit tasks, process tasks, or just observe tasks, make sure that they all have the exact same configuration.
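
One way to keep every client consistent is to define the broker settings in a single module and load it everywhere via config_from_object (a sketch with a hypothetical celeryconfig.py; the exporter module shown is illustrative, not any specific library’s API):

# celeryconfig.py -- single source of truth for broker settings
BROKER_URL = 'redis://localhost:6380'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}

# app.py -- the producer/worker side
from celery import Celery
app = Celery('painel')
app.config_from_object('celeryconfig')

# exporter.py -- any monitoring/observer client reuses the same module
from celery import Celery
monitor = Celery('monitor')
monitor.config_from_object('celeryconfig')  # same visibility_timeout as the app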

Answered By: James Mishra