uwsgi + nginx + flask: upstream prematurely closed
Question:
I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote db) and then sends it as a download in the browser. Flask doesn't throw any errors. uWSGI doesn't complain.
But when I check nginx's error.log I see a lot of:
2014/12/10 05:06:24 [error] 14084#0: *239436 upstream prematurely
closed connection while reading response header from upstream, client:
34.34.34.34, server: me.com, request: "GET /download/export.csv HTTP/1.1", upstream: "uwsgi://0.0.0.0:5002", host: "me.com", referrer:
"https://me.com/download/export.csv"
I deploy uwsgi like this:
uwsgi --socket 0.0.0.0:5002 --buffer-size=32768 --module server --callable app
my nginx config:
server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
server {
    listen 443;
    merge_slashes off;
    server_name me.com www.me.com;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
Is this an nginx or uwsgi issue, or both?
Answers:
Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or, better, use Unix sockets.
Change nginx.conf to include:
sendfile on;
client_max_body_size 20M;
keepalive_timeout 0;
See the self-answer "uwsgi upstart on amazon linux" for a full example.
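For illustration, a sketch of the question's port-80 server block with those suggestions applied (the directive values are the ones given above; everything else is unchanged from the question):

server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;

    sendfile on;                 # let the kernel copy static files directly
    client_max_body_size 20M;    # allow larger request bodies
    keepalive_timeout 0;         # disable client keep-alive

    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:5002;   # loopback instead of 0.0.0.0
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}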
It seems many causes can lie behind this error message. I know you are using uwsgi_pass, but for those having the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
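As a sketch only (the 300-second value and the module/callable names are placeholders, not from the original answer), a uwsgi.ini for the proxy_pass case might look like:

[uwsgi]
; uWSGI's own HTTP server, fronted by nginx proxy_pass
http = 0.0.0.0:8080
; allow long requests before the HTTP router gives up
http-timeout = 300
module = server
callable = app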
I fixed this issue by passing the socket-timeout = 65 (uwsgi.ini file) or --socket-timeout=65 (uwsgi command line) option to uwsgi. You have to experiment with the value depending on your web traffic; socket-timeout = 65 in the uwsgi.ini file worked in my case.
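In context, that would sit in uwsgi.ini roughly like this (the surrounding lines are placeholders matching the question's setup):

[uwsgi]
socket = 0.0.0.0:5002
; seconds of socket inactivity tolerated before closing
socket-timeout = 65
module = server
callable = app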
I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the environment's EC2 instance, the upstream configuration looks like:
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}
With this default upstream, a simple load test like:
siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'
...on the EC2 instance led to an availability of ~70%. The rest were 502 errors caused by upstream prematurely closed connection while reading response header from upstream.
The solution was either to remove the keepalive setting from the upstream configuration, or, easier and more reasonable, to enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
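A sketch of what that could look like for the container's uWSGI invocation (module and callable names are placeholders):

uwsgi --http-socket 0.0.0.0:8080 --http-keepalive --module server --callable app

With keep-alive enabled on both sides, nginx can actually reuse the pooled upstream connections instead of finding them closed after each response.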
In my case, the problem was that nginx was sending requests with the uwsgi protocol while uwsgi was listening on that port for HTTP packets. So I had to either change the way nginx connects to uwsgi, or change uwsgi to listen using the uwsgi protocol.
As mentioned by @mahdix, the error can be caused by Nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for http packets.
When in the Nginx config you have something like:
upstream org_app {
    server 10.0.9.79:9597;
}
location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}
Nginx will use the uwsgi protocol. But if in uwsgi.ini
you have something like (or its equivalent in the command line):
http-socket=:9597
uwsgi will speak http, and the error mentioned in the question appears. See native HTTP support.
A possible fix is to have instead:
socket=:9597
In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.
Side note: if Nginx and uwsgi are in the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
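An illustrative sketch of that variant (the socket path is a placeholder; nginx's worker user must be able to read and write it):

# uwsgi.ini
[uwsgi]
socket = /run/uwsgi/app.sock
; let the nginx user access the socket
chmod-socket = 664
module = server
callable = app

# nginx
location @app {
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app.sock;
}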
There are many potential causes and solutions for this problem. In my case, the back-end code was taking too long to run. Modifying these variables fixed it for me.
Nginx: proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout, fastcgi_send_timeout, fastcgi_read_timeout, keepalive_timeout, uwsgi_read_timeout, uwsgi_send_timeout, uwsgi_socket_keepalive.
uWSGI: limit-post.
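For the uwsgi_pass setup from the question, a hedged sketch of where the uwsgi_* directives go (the 300-second values are placeholders; size them to your slowest endpoint):

location @app {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:5002;
    uwsgi_read_timeout 300s;     # max gap between two reads from the backend
    uwsgi_send_timeout 300s;     # max gap between two writes to the backend
    uwsgi_socket_keepalive on;   # TCP keepalive on the upstream socket
}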
I fixed this by reverting to pip3 install uwsgi.
I was trying out the setup on Ubuntu and Amazon Linux side by side. I initially used a virtual environment and ran pip3 install uwsgi on both; both systems worked fine. Later, I continued the setup with the virtual environment turned off. On Ubuntu I installed with pip3 install uwsgi, and on Amazon Linux with yum install uwsgi -y. That was the source of the problem: Ubuntu worked fine, but Amazon Linux did not.
The fix: yum remove uwsgi, then pip3 install uwsgi, and restart; after that it works fine.
This issue can also be caused by a mismatch between timeout values.
I had this issue when nginx had a keepalive_timeout
of 75s, while the upstream server’s value was a few seconds.
This caused the upstream server to close the connection when its timeout was reached, and nginx logged Connection reset by peer
errors.
When you hit such abrupt "connection closed" errors, check that the upstream timeout values are higher than nginx's (see Raphael's answer for a good list to check).
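A hedged sketch of the principle (values are placeholders; keepalive_timeout inside an upstream block needs nginx 1.15.3+):

upstream app {
    server 127.0.0.1:5002;
    keepalive 16;            # idle connections nginx keeps open to the backend
    keepalive_timeout 60s;   # nginx drops idle upstream connections after 60 s
}
# The backend's own idle timeout should be >= 60 s here; if it is lower,
# the backend hangs up first and nginx logs "Connection reset by peer".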