Celery (Django + Redis) task fails: "No connection could be made because the target machine actively refused it"

Question:

UPDATE: I decided to try using Django as the broker for simplicity, as I assumed I did something wrong in the Redis setup. However, after making the changes described in the docs I get the same error as below when attempting to run a Celery task with .delay(). The Celery worker starts and shows it’s connected to Django for transport. Could this be a firewall issue?
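
For reference, this is roughly what the Django-as-broker change looks like on Celery 3.1 (a sketch based on the 3.1 docs rather than my exact settings; the kombu Django transport app has to be in INSTALLED_APPS for this transport to work):

#settings.py (Django ORM as broker, Celery 3.1 style)
BROKER_URL = 'django://'

INSTALLED_APPS = (
    # ...existing apps...
    'kombu.transport.django',
)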

ORIGINAL

I’m working on a Django project and attempting to add background tasks. I’ve installed Celery and chosen Redis as the broker, and installed that as well (I’m on a Windows machine, FYI). The Celery worker starts, connects to the Redis server, and discovers my shared_task functions:

 -------------- celery@GALACTICA v3.1.19 (Cipater)
---- **** -----
--- * ***  * -- Windows-7-6.1.7601-SP1
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         proj:0x2dbf970
- ** ---------- .> transport:   redis://localhost:6379/0
- ** ---------- .> results:     disabled
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery


[tasks]
  . app.tasks.add
  . app.tasks.mul
  . app.tasks.xsum
  . proj.celery.debug_task

[2016-01-16 11:53:05,586: INFO/MainProcess] Connected to redis://localhost:6379/0
[2016-01-16 11:53:06,611: INFO/MainProcess] mingle: searching for neighbors
[2016-01-16 11:53:09,628: INFO/MainProcess] mingle: all alone
c:\python34\lib\site-packages\celery\fixups\django.py:265: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '

[2016-01-16 11:53:14,670: WARNING/MainProcess] c:\python34\lib\site-packages\celery\fixups\django.py:265: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '

[2016-01-16 11:53:14,671: WARNING/MainProcess] celery@GALACTICA ready.

I’m following the intro docs, so the tasks are very simple, including one called add. I can run the tasks by themselves in a Python shell, but when I attempt to call add.delay() to have Celery handle it, it appears the connection isn’t successful:

>>> add.delay(2,2)
Traceback (most recent call last):
File "C:Python34libsite-packageskombuutils__init__.py", line 423, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:Python34libsite-packageskombuconnection.py", line 436, in _ensured
return fun(*args, **kwargs)
File "C:Python34libsite-packageskombumessaging.py", line 177, in _publish
channel = self.channel
File "C:Python34libsite-packageskombumessaging.py", line 194, in _get_channel
channel = self._channel = channel()
File "C:Python34libsite-packageskombuutils__init__.py", line 425, in __call__
value = self.__value__ = self.__contract__()
File "C:Python34libsite-packageskombumessaging.py", line 209, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)    File "C:Python34libsite-packageskombuconnection.py", line 756, in default_channel
self.connection
File "C:Python34libsite-packageskombuconnection.py", line 741, in connection
self._connection = self._establish_connection()
File "C:Python34libsite-packageskombuconnection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "C:Python34libsite-packageskombutransportpyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "C:Python34libsite-packagesamqpconnection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "C:Python34libsite-packagesamqpconnection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "C:Python34libsite-packagesamqptransport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "C:Python34libsite-packagesamqptransport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [WinError 10061] No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:Python34libsite-packagesceleryapptask.py", line 453, in delay
return self.apply_async(args, kwargs)
File "C:Python34libsite-packagesceleryapptask.py", line 560, in apply_async
**dict(self._get_exec_options(), **options)
File "C:Python34libsite-packagesceleryappbase.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "C:Python34libsite-packagesceleryappamqp.py", line 305, in publish_task
**kwargs
File "C:Python34libsite-packageskombumessaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "C:Python34libsite-packageskombuconnection.py", line 457, in _ensured
interval_max)
File "C:Python34libsite-packageskombuconnection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "C:Python34libsite-packageskombuutils__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "C:Python34libsite-packageskombuconnection.py", line 237, in connect
return self.connection
File "C:Python34libsite-packageskombuconnection.py", line 741, in connection
self._connection = self._establish_connection()
File "C:Python34libsite-packageskombuconnection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "C:Python34libsite-packageskombutransportpyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "C:Python34libsite-packagesamqpconnection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "C:Python34libsite-packagesamqpconnection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "C:Python34libsite-packagesamqptransport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "C:Python34libsite-packagesamqptransport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [WinError 10061] No connection could be made because the target machine actively refused it

There’s no output on the console from the running Celery worker, so I don’t think it ever gets the task. I believe my settings.py, celery.py and tasks.py are alright:

settings.py:

#celery settings
BROKER_URL = 'redis://localhost:6379/0'
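
One thing I should probably double-check (an assumption on my part, not something I’ve verified yet): the traceback above goes through kombu’s pyamqp transport rather than the Redis transport, which is what Celery falls back to when it doesn’t see BROKER_URL. A quick way to inspect what the app actually loaded, from a shell opened with python manage.py shell:

# inside "python manage.py shell", so proj.settings is on DJANGO_SETTINGS_MODULE
from proj.celery import app

# expect 'redis://localhost:6379/0'; an amqp:// default here would explain
# the refused connection, since nothing is listening on RabbitMQ's port
print(app.conf.BROKER_URL)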

celery.py:

from __future__ import absolute_import

import os

from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

from django.conf import settings  # noqa

app = Celery('proj')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
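
For reference, the standard way to start the worker for this layout per the Celery 3.1 Django guide (run from the directory containing manage.py; this is what produced the banner output above, as far as I can tell):

celery -A proj worker -l info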

tasks.py:

from __future__ import absolute_import

#from proj.celery import app
from celery import shared_task


@shared_task
def add(x, y):
  return x + y


@shared_task
def mul(x, y):
  return x * y


@shared_task
def xsum(numbers):
  return sum(numbers)
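
And for completeness, a minimal way to exercise the task once everything is wired up (a sketch; I’m assuming the shell is opened with python manage.py shell so that settings.py, and with it BROKER_URL, is loaded before the call):

# python manage.py shell
from app.tasks import add

# sends the task to the broker; the worker console should log
# "Received task: app.tasks.add" if the message got through
result = add.delay(2, 2)
print(result.id)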

My project layout is nearly identical to the Celery example Django project layout on GitHub, as well as the example here. It looks like:

proj
├── proj
│   ├── celery.py       
│   ├── __init__.py     
│   ├── settings.py     
│   ├── urls.py
│   └── wsgi.py
├── manage.py
└── app
    ├── __init__.py
    ├── models.py
    ├── tasks.py        
    ├── tests.py
    └── views.py

Apologies for the other app in my project being named ‘app’ – it makes things a bit confusing to read, and is the result of autogenerating the base project in Visual Studio with PTVS installed. I probably could have changed it early on, but I didn’t realize the name was so vague.

Thanks for any thoughts; I’ve been stumped by this for a while.

Asked By: dkhaupt


Answers:

I got around this, but I’m not sure how. I came back to this exact configuration the next day, and tasks were making it to the celery worker.

Perhaps one of the services I restarted was the key, but I’m not sure.

If anyone else runs into this, especially on Windows: make sure your redis-server is running and that you can see the incoming connection from a ping as well as from the task. I had done that before posting this question, but it seems like the most likely candidate for misconfiguration.
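
A quick way to do that sanity check from Python (a sketch, assuming the redis client library that the Redis broker requires is installed and the server is on the default localhost:6379):

import redis

# PING returns True if redis-server is up and accepting connections;
# a ConnectionError here points at the server or firewall, not at Celery
client = redis.StrictRedis(host='localhost', port=6379, db=0)
print(client.ping())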

Answered By: dkhaupt

I was getting the same error. After searching all over the internet I found no solution, because I had forgotten to add the following code to the __init__.py of my project directory:

from .celery import app as celery_app

__all__ = ('celery_app',)

Adding it resolved the error.

Answered By: Prashant Arya

Redis is not started automatically after you install it, so starting the Redis server should solve your problem.
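
For example, if Redis is installed as a standalone server with its binaries on PATH (the exact paths and service setup will vary by install), from a command prompt:

:: console 1 - start the server (stays in the foreground)
redis-server

:: console 2 - verify it answers; the expected reply is PONG
redis-cli ping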

Answered By: Kai – Kazuya Ito