```nginx
user web;
# One worker process per CPU core.
worker_processes 8;
# Also set
# /etc/security/limits.conf
#   web soft nofile 65535
#   web hard nofile 65535
# /etc/default/nginx
```
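The comments above point to two companion settings that live outside nginx.conf. As a rough sketch only, assuming a Debian-style setup where the init script sources a `ULIMIT` variable from /etc/default/nginx (the exact mechanism varies by distribution and init system), the referenced files could look like this:

```sh
# /etc/security/limits.conf
# Per-user open-file limits for the account nginx runs as ("web" above)
web soft nofile 65535
web hard nofile 65535

# /etc/default/nginx
# Assumption: the distribution's init script sources this file and applies
# ULIMIT before starting the workers
ULIMIT="-n 65535"
```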
On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:
```
2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)
2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...
```
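Those messages mean the workers were exhausting their per-process file-descriptor limit. As a quick check (a sketch only; PID 2475 is taken from the log lines above, so substitute a worker PID from your own system), compare a worker's open-descriptor count against its limit:

```sh
# Number of file descriptors the worker currently holds open
sudo ls /proc/2475/fd | wc -l

# The worker's soft and hard "Max open files" limits
sudo grep "Max open files" /proc/2475/limits
```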
An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.
- Instead of using `su` to run `ulimit` on the nginx account, use `ps aux | grep nginx` to locate nginx's process IDs, then query each process's file handle limits with `cat /proc/pid/limits` (where `pid` is a process ID retrieved from `ps`). Depending on your system, `sudo` may be necessary for the `cat` command. A consolidated sketch of these steps follows the list.
- Added `fs.file-max = 70000` to /etc/sysctl.conf
- Added `nginx soft nofile 1
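For reference, a rough consolidation of the steps above as shell commands (the 70000 figure comes from the sysctl step; the PID and the per-user nofile value are placeholders, not values from our setup):

```sh
# 1. Find nginx's process IDs and inspect each one's current limits
ps aux | grep nginx
sudo cat /proc/2475/limits            # substitute a PID from the ps output

# 2. Raise the system-wide file handle ceiling and apply it without rebooting
echo "fs.file-max = 70000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# 3. Raise the per-user limit in /etc/security/limits.conf
#    (placeholder value; use the nofile ceiling you settled on)
# nginx soft nofile <value>
# nginx hard nofile <value>
```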