By default, when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM) it will buffer the response in memory before delivering it to the client. Any response larger than the configured buffer size is saved to a temporary file on disk.
This process is outlined on the Nginx ngx_http_fastcgi_module manual page.
Since disk is slow and memory is fast, the aim is to have as many FastCGI responses as possible pass through memory alone. On the flip side, we don't want to set excessively large buffers, as they are created and sized on a per-request basis - buffer space is not shared between requests.
The related Nginx options are:

- `fastcgi_buffering` first appeared in Nginx 1.5.6 (1.6.0 stable) and can be used to turn buffering completely on/off. It's on by default.
- `fastcgi_buffer_size` is a special buffer space used to hold the first part of the FastCGI response, which is going to be the HTTP response headers. You shouldn't need to adjust this from the default - even though Nginx assigns the smallest page size of 4KB by default (your platform determines whether this is a 4k or 8k buffer), it should fit your typical HTTP header. The one exception I have seen is frameworks that push large amounts of cookie data via the `Set-Cookie` HTTP header during their user verification/login phase, blowing out the buffer and resulting in an HTTP 500 error. In those instances you will need to increase this buffer to 8k/16k/32k to fully accommodate your largest upstream HTTP header being pushed (see the combined example after this list).
- `fastcgi_buffers` controls the number and memory size of the buffer segments used for the payload of the FastCGI response. Most, if not all, of our tweaking will be around this setting, and it forms the remainder of this page.
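To make the relationship concrete, here is a minimal sketch of how the three directives might sit together inside a PHP-FPM location block - the values are illustrative only, not recommendations:
location ~ "\.php$" {
	fastcgi_pass unix:/run/php5/php-fpm.sock;
	include /etc/nginx/fastcgi_params;

	# buffering is on by default - shown here only for illustration
	fastcgi_buffering on;

	# raised from the 4k default to accommodate large Set-Cookie headers
	fastcgi_buffer_size 8k;

	# 8 segments of 4k each - 32KB of buffer space per request
	fastcgi_buffers 8 4k;
}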
By grepping our Nginx access logs we can determine both maximum and average response sizes. The basis of this awk recipe was lifted from here:
$ awk '($9 ~ /200/) { i++;sum+=$10;max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n",max,i?sum/i:0); }' access.log
# Maximum: 76716
# Average: 10358
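As a variation on the same recipe, we can also count how many successful responses would overflow a given total buffer space - here the 32KB default, again assuming the standard combined log format with the response body size in field $10:
$ awk '($9 ~ /200/) { i++; if ($10 > 32768) over++ } END { printf("Total: %d\nOver 32KB: %d\n",i,over); }' access.log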
Note: these recipes will report on all access requests returning an HTTP 200 code. You might want to split out just the FastCGI requests into their own Nginx access log for reporting, like so (PHP-FPM here):
location ~ "\.php$" {
	fastcgi_index index.php;
	if (!-f $realpath_root$fastcgi_script_name) {
		return 404;
	}

	include /etc/nginx/fastcgi_params;
	fastcgi_pass unix:/run/php5/php-fpm.sock;

	# output just FastCGI requests to its own Nginx log file
	access_log /var/log/nginx/phpfpmonly-access.log;
}
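Once this dedicated log has collected some traffic, the same awk recipe from above can simply be pointed at it:
$ awk '($9 ~ /200/) { i++;sum+=$10;max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n",max,i?sum/i:0); }' /var/log/nginx/phpfpmonly-access.log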
With these values in hand we are now much better equipped to set `fastcgi_buffers`.
The `fastcgi_buffers` setting takes two values, the buffer segment count and the memory size of each segment. By default it will be:
fastcgi_buffers 8 4k|8k;
So a total of 8 buffer segments at either 4k or 8k each, which is determined by the platform memory page size. For Debian/Ubuntu Linux that turns out to be 4096 bytes (4K), so a default total buffer size of 32KB.
Based on the maximum/average response sizes determined above we can now raise/lower these values to suit. I typically keep the buffer size at the default (the memory page size) and adjust only the buffer segment count to a value that keeps the bulk (or all) of the responses handled fully in buffer RAM. The default memory page size (in bytes) can be determined by the following command:
$ getconf PAGESIZE
If your average response size tips on the higher side you might want to instead lower the buffer segment count and raise the memory size in page size multiples (8k/16k/32k).
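As a worked example, taking the maximum response size of 76716 bytes reported earlier against a 4096 byte page size, a little shell arithmetic gives the segment count needed to hold even the largest response fully in RAM (ceiling division):
$ echo $(( (76716 + 4095) / 4096 ))
# 19
That could translate into either of these hypothetical settings, both providing 80KB of buffer space per request:
fastcgi_buffers 20 4k;
fastcgi_buffers 10 8k;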
We can see how often FastCGI responses are being saved to disk by grepping our Nginx error log(s):
$ grep -E "\[warn\].+buffered" error.log
# will return lines like:
YYYY/MM/DD HH:MM:SS [warn] 1234#0: *123456 an upstream response is buffered to a temporary file...
Remember, it's not necessarily a bad situation to have some of your larger responses buffered to disk - aim for a balance where only a small portion of your largest responses are handled in this way.
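One rough way to put a number on that balance - a sketch which assumes the log file locations used earlier on this page - is to compare the count of buffered-to-disk warnings against the total number of FastCGI requests served over the same period:
$ echo "$(grep -cE "\[warn\].+buffered" /var/log/nginx/error.log) of $(wc -l < /var/log/nginx/phpfpmonly-access.log) requests buffered to disk"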
The alternative of ramping up `fastcgi_buffers` to an excessive segment count and/or size so that every FastCGI response is handled purely in RAM is something I would strongly recommend against. Unless your Nginx server receives only a few concurrent requests at any one time, you risk exhausting your available system memory.
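To illustrate the risk with some back-of-the-envelope numbers: each concurrent request can claim its own full set of buffers, so the worst-case memory commitment is roughly segment count, times segment size, times concurrent requests. For a hypothetical fastcgi_buffers 256 4k serving 1000 concurrent requests:
$ echo $(( 256 * 4 * 1000 ))KB
# 1024000KB - roughly 1GB of buffer space alone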