I'm using nginx on the frontend as a proxy cache and Apache on the backend. I've set my PHP settings to the following:

error_log = /var/www/site1/php_error.log
error_reporting = 22527
file_uploads = On
log_errors = On
max_execution_time = 0
max_file_uploads = 20
max_input_time = -1
memory_limit = 512M
post_max_size = 0
upload_max_filesize = 1000M

The problem: uploading files smaller than 1 MB succeeds, but for anything larger, Google Chrome outputs:

Error 101 (net::ERR_CONNECTION_RESET): The connection was reset.

I already checked for the error log file, but it doesn't exist in that directory. I also checked /var/log/httpd/error_log, but found no upload-related problems. I don't know what else might have caused this, so I'm reaching out for your helping hand. Thanks!

asked Jun 24 '12 at 7:38 by Michelle · edited Sep 5 '12 at 10:44 by Fedir RYKHTIK

   Did you mean greater than 1GB? Take a look at your question's title! – Ilia Rostovtsev Apr 16 '14 at 20:34
   uh... I meant what I meant. – Michelle Apr 17 '14 at 1:07
   hmm.. alright, sorry. You just had set up PHP upload_max_filesize = 1000M and I wouldn't think that 1 MB uploads could ever fail! Just wanted to double-check. – Ilia Rostovtsev Apr 17 '14 at 6:27

2 Answers

Accepted answer:

I discovered the problem. It was this setting:

http {
    client_max_body_size 0;
}

I set client_max_body_size to 0 (no limit); the default was 1M.
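For reference, client_max_body_size can be set in the http, server, or location context, with the most specific context winning. A minimal sketch (the server name and upload path are placeholders, not from the question):

```nginx
http {
    client_max_body_size 1m;          # default applied to every vhost

    server {
        server_name example.com;      # placeholder
        client_max_body_size 0;       # 0 disables the check for this vhost

        location /upload {
            client_max_body_size 1000m;  # re-cap only the upload endpoint
        }
    }
}
```

Disabling the check globally (as in the accepted answer) works, but scoping a large limit to just the upload location keeps the rest of the site protected.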

answered Jun 24 '12 at 8:20 by Michelle

   Thanks. Small explanation: the nginx docs say "If the stated content length is greater than this size, then the client receives the HTTP error code 413 ("Request Entity Too Large"). It should be noted that web browsers do not usually know how to properly display such an HTTP error." (wiki.nginx.org/HttpCoreModule) That's probably why the error message is not precise. – Fedir RYKHTIK Sep 5 '12 at 10:01
   I might be late to the party, but is there something similar that would work for Apache? – henrywright Apr 30 '14 at 16:37
   Life saver. Fantastic, thank you – Adam K Dean Oct 12 '16 at 14:33

Why is post_max_size = 0 ?

It should be at least 1000M in your case, since file uploads arrive as POST requests and the whole request body counts against post_max_size.
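A php.ini sketch of the usual convention (the values are illustrative): keep post_max_size a bit larger than upload_max_filesize, since the multipart encoding adds overhead on top of the file itself.

```ini
; illustrative values, not a recommendation for every setup
upload_max_filesize = 1000M
post_max_size       = 1001M  ; slightly larger than upload_max_filesize
memory_limit        = 512M   ; does not need to cover file uploads,
                             ; which are streamed to a temp file on disk
```

Explicit values like these also sidestep the ambiguity (raised in the comment below) about whether 0 means "unlimited" for post_max_size.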

answered Jun 24 '12 at 7:49 by Paul Basov

isn't it that 0 is unlimited? – Michelle Jun 24 '12 at 7:57

From https://serverfault.com/questions/401729/uploading-files-greater-than-1mb-connection-resets


nginx upload client_max_body_size issue

I'm running nginx/ruby-on-rails and I have a simple multipart form to upload files. Everything works fine until I decide to restrict the maximum size of files I want uploaded. To do that, I set the nginx client_max_body_size to 1m (1 MB) and expect an HTTP 413 (Request Entity Too Large) status in response when that rule breaks.

The problem is that when I upload a 1.2 MB file, instead of displaying the HTTP 413 error page, the browser hangs for a bit and then dies with a "Connection was reset while the page was loading" message.

I've tried just about every option nginx offers; nothing seems to work. Does anyone have any ideas about this?

Here's my nginx.conf:

worker_processes  1;
timer_resolution  1000ms;

events {
    worker_connections  1024;
}

http {
    passenger_root /the_passenger_root;
    passenger_ruby /the_ruby;
    include       mime.types;
    default_type  application/octet-stream;
    sendfile           on;
    keepalive_timeout  65;

    server {
        listen 80;
        server_name www.x.com;
        client_max_body_size 1M;
        passenger_use_global_queue on;
        root /the_root;
        passenger_enabled on;
        error_page 404 /404.html;
        error_page 413 /413.html;
    }
}

Thanks.


Edit

Environment/UA: Windows XP/Firefox 3.6.13

asked Feb 9 '11 at 15:46 by krukid · edited Feb 10 '11 at 11:32


3 Answers

Accepted answer:

nginx "fails fast" when the client informs it that it's going to send a body larger than the client_max_body_size, by sending a 413 response and closing the connection.

Most clients don't read responses until the entire request body is sent. Because nginx has closed the connection, the client keeps sending data to the closed socket, causing a TCP RST.

If your HTTP client supports it, the best way to handle this is to send an Expect: 100-Continue header. nginx supports this correctly as of 1.2.7, and will reply with a 413 Request Entity Too Large response rather than 100 Continue if Content-Length exceeds the maximum body size.
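The up-front check nginx makes can be sketched as a toy comparison of the declared Content-Length against an nginx-style size value. This is an illustration, not nginx's actual code; the parse_nginx_size helper and the numbers are mine:

```python
def parse_nginx_size(value: str) -> int:
    """Convert an nginx size string like '1m' or '300M' to bytes."""
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # a bare number means bytes

max_body = parse_nginx_size("1m")   # client_max_body_size 1m;
content_length = 1_200_000          # the 1.2 MB upload from the question

# nginx can reject the request before reading the body, because the
# client declares the size up front in the Content-Length header
print(content_length > max_body)    # True: the request draws a 413
```

This is also why the Expect: 100-Continue flow works: the client sends only the headers first, so it is still reading when the 413 arrives, instead of blindly writing the body into a closed socket.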

answered Nov 16 '12 at 21:09 by Joe Shaw · edited Jun 4 '15 at 20:47

Oh, I should point out that this answer assumes that the client is sending Content-Length rather than doing Transfer-Encoding: chunked. – Joe Shaw Nov 16 '12 at 21:10
The nginx author posted a patch to fix this on the mailing list: nginx.2469901.n2.nabble.com/… No word whether it will be added to the 1.2.x stable branch, though. – Joe Shaw Nov 19 '12 at 18:14
Thanks, that actually explains a lot. Certainly looks like Expect is the way to go for large requests. – krukid Jan 21 '13 at 15:55
Updated my answer to note that the patch I mentioned earlier was committed and incorporated into the 1.2.7 release. – Joe Shaw Jan 10 '14 at 17:50
Just to save the time of looking for a nice syntax (like I spent): request.setHeader(HttpHeaders.EXPECT, CONTINUE); with import org.apache.http.HttpHeaders; and import static org.jboss.netty.handler.codec.http.HttpHeaders.Values.CONTINUE; – Erez Cohen Jan 17 '17 at 13:55

Does your upload die at the very end, 99% complete before crashing? Client body and buffer settings are key, because nginx must buffer incoming data. The client body directives control how nginx handles the bulk flow of binary data from multipart-form clients into your app's logic.

The clean setting frees up memory and limits consumption by instructing nginx to store the incoming buffer in a file and then delete that file from disk afterward.

Set client_body_in_file_only to clean and adjust the buffers along with client_max_body_size. The original question's config already had sendfile on; increase the timeouts too. I use the settings below to fix this; they are appropriate in your local config's server and http contexts.

client_body_in_file_only clean;
client_body_buffer_size 32K;
client_max_body_size 300M;
sendfile on;
send_timeout 300s;
answered Jan 23 '13 at 8:52 by Bent Cardan · edited Oct 12 '16 at 10:11 by Shashank Agrawal

Even if this does result in nginx returning a proper HTTP 413, the UA will still end up sending the entirety of the request body, will it not? In that case I think it's worth trying the approach @joe-shaw suggested. – krukid Feb 5 '13 at 12:17
@krukid when it looks like we got 99% upload complete before nginx "fails fast," I agree with you. In this case, all signs are positive surrounding the request object, i.e. the diagnosis is that the internal server application logic is fine, whatever runs behind nginx. So while it's likely the request has been well formed, our need is then to consider why nginx chokes on the response. client_max_body_size should be the first config option we look at, then consider the buffers, because with a large enough upload the correct solution depends on how much memory our server can handle as well. – Bent Cardan Feb 11 '13 at 17:26
@Bent Cardan. This approach seemed to be the better one and I tried it. But I'm still getting a 413 error after about 20 seconds for a 4 MB file. My upspeed cannot manage 4 MB in 20 secs, so it is happening after data has been flowing for quite a bit. Thoughts? – Jerome Oct 19 '14 at 16:46
I have added the changes in the nginx.conf file: client_max_body_size 300M; sendfile on; send_timeout 300s; It's working perfectly for me. Thanks – Ramesh Chand Mar 1 '16 at 8:14
Solution works for me on openshift php7 nginx. – marlo Sep 28 '16 at 3:56

From the documentation:

It is necessary to keep in mind that the browsers do not know how to correctly show this error.

I suspect this is what's happening. If you inspect the HTTP to-and-fro using tools such as Firebug or Live HTTP Headers (both Firefox extensions), you'll be able to see what's really going on.

answered Feb 9 '11 at 17:21 by ZoFreX

I've come across that here, too: forum.nginx.org/read.php?2,2620 where the nginx author says people could try changing lingering_time/lingering_timeout, both of which had no effect in my case. Besides, I just don't see how there could be a persistent timeout issue when I'm uploading a 1.2 MB file with a 1 MB limit, easily, on a steady 5 Mbps connection. I've sniffed the response and it does send the 413 page with a "Connection: close" header, but the connection doesn't seem to close. – krukid Feb 10 '11 at 11:19
I guess I just have a hard time believing that even though there is a perfectly valid 413 HTTP status, it doesn't fire in browsers. I've googled a lot of places where people can't get rid of that page, and I never even saw it. – krukid Feb 10 '11 at 11:29
If you disable passenger, does it close the connection? – Mark Rose Feb 10 '11 at 15:23
Well, I compared the responses with and without passenger. When everything runs normally and I upload a file several times larger (~14 MB) than my 1 MB restriction, I get the 413 response multiple times (because the client keeps sending chunks) and the final "Connection reset" does look like a timeout. Without passenger I get one instant 413 response and all progress stops, but I still see the "Connection reset" page, not my static 413.html or anything that implies "Entity Too Large". – krukid Feb 10 '11 at 17:00

From https://stackoverflow.com/questions/4947107/nginx-upload-client-max-body-size-issue