Transferring big files to a limited-memory server


I want my web service to accept large file uploads from customers. To handle this, I am planning to put Nginx in front of Tornado so that memory use stays low on the server side during file upload. Is this a good plan, or should I use some other framework or protocol to transfer large files from a user to my server?


Tornado needs some work before it can stream very large uploads (see issue 231). I’d suggest Nginx’s HttpUpload module instead: Nginx spools the user’s upload into a server-side temp file, then notifies your application so it can decide what to do with the file.

F*EX needs only a few MB of memory on the server side, see:

You can install it on any UNIX platform, and your users just need a web browser.

With F*EX you can send/receive files of ANY size.

Answered By: Framstag