# Configuration file for runtime kernel parameters.
# See sysctl.conf(5) for more information.
# See also http://www.nateware.com/linux-network-tuning-for-2013.html for
# an explanation of some of these parameters, and instructions for
# a few other tweaks outside this file.
#
# See also: https://gist.github.com/kgriffs/4027835
#
# Assumes a beefy machine with lots of network bandwidth.

# Protection from SYN flood attacks.
net.ipv4.tcp_syncookies = 1

# See evil packets in your logs.
net.ipv4.conf.all.log_martians = 1
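
# Martian packets will then show up in the kernel log; for example:
#
#   dmesg | grep -i martian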
# Enable source address validation by reverse path, as specified in RFC 1812.
net.ipv4.conf.all.rp_filter = 1

# Ignore all ICMP ECHO and TIMESTAMP requests sent via broadcast/multicast.
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Discourage Linux from swapping idle server processes to disk (default = 60).
vm.swappiness = 1

# Be less aggressive about reclaiming cached directory and inode objects
# in order to improve filesystem performance.
vm.vfs_cache_pressure = 50
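
# To inspect the current values before overriding them:
#
#   sysctl vm.swappiness vm.vfs_cache_pressure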
# Tweak how the flow of kernel messages is throttled.
#kernel.printk_ratelimit_burst = 10
#kernel.printk_ratelimit = 5

# --------------------------------------------------------------------
# The following allow the server to handle lots of connection requests
# --------------------------------------------------------------------

# Increase the number of incoming connections that can queue up
# before the kernel starts dropping them.
net.core.somaxconn = 5000

# Handle SYN floods and large numbers of valid HTTPS connections.
net.ipv4.tcp_max_syn_backlog = 3000

# Increase the length of the network device input queue.
net.core.netdev_max_backlog = 5000
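# (If the second column of /proc/net/softnet_stat is nonzero, the input
# queue has been overflowing and this backlog may need to be raised.)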
# Increase the system-wide file descriptor limit so we will (probably)
# never run out under lots of concurrent requests.
# (The per-process limit is set in /etc/security/limits.conf.)
fs.file-max = 184028
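
# For example, the matching per-process limit might look something like
# this in /etc/security/limits.conf (the values here are illustrative):
#
#   *    soft    nofile    65535
#   *    hard    nofile    65535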
# Widen the port range used for outgoing connections.
net.ipv4.ip_local_port_range = 10000 65000
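# (This leaves roughly 65000 - 10000 = 55000 ephemeral ports available
# for concurrent outgoing connections to any one destination.)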
# If your servers talk UDP, also raise these limits.
#net.ipv4.udp_rmem_min = 8192
#net.ipv4.udp_wmem_min = 8192

# --------------------------------------------------------------------
# The following help the server efficiently pipe large amounts of data
# --------------------------------------------------------------------

# Disable source routing and redirects.
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.secure_redirects = 0

# Disable packet forwarding.
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0

# Disable TCP slow start on idle connections.
net.ipv4.tcp_slow_start_after_idle = 0

# Increase Linux autotuning TCP buffer limits.
# Set the max to 16 MB (16777216) for 1 GbE, or 32 MB (33554432) to
# 54 MB (56623104) for 10 GbE.
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960

# The three values below are the min, default, and max buffer sizes,
# in bytes, per socket.
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Enable BBR congestion control; requires Linux kernel 4.9 or higher.
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
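
# A quick way to verify that BBR is available and active (assuming the
# tcp_bbr module is built for your kernel):
#
#   sysctl net.ipv4.tcp_available_congestion_control
#   sysctl net.ipv4.tcp_congestion_control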
# --------------------------------------------------------------------
# The following allow the server to handle lots of connection churn
# --------------------------------------------------------------------

# Begin sending keep-alive probes after a connection has been idle
# for 1 minute (default = 2 hours).
net.ipv4.tcp_keepalive_time = 60

# Wait a maximum of 5 * 2 = 10 seconds in the TIME_WAIT state after a FIN,
# to handle any remaining packets in the network.
# (The commented-out variant is the legacy name used by older kernels.)
# net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 5
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 5

# Allow a high number of timewait sockets.
net.ipv4.tcp_max_tw_buckets = 40960

# Time out broken connections faster (amount of time to wait for a FIN).
net.ipv4.tcp_fin_timeout = 10

# Let the networking stack reuse TIME_WAIT connections when it thinks
# it's safe to do so.
net.ipv4.tcp_tw_reuse = 1
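
# To see how many sockets are currently sitting in TIME_WAIT:
#
#   ss -tan state time-wait | wc -l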
# Determines the wait time between keep-alive probes (reduced from the
# default of 75 seconds to 15).
net.ipv4.tcp_keepalive_intvl = 15

# Determines the number of unanswered probes before timing out (reduced
# from the default of 9 to 5).
net.ipv4.tcp_keepalive_probes = 5
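
# With these values, a dead peer is detected after roughly
# tcp_keepalive_time + (tcp_keepalive_intvl * tcp_keepalive_probes)
# = 60 + (15 * 5) = 135 seconds of silence, versus
# 7200 + (75 * 9) = 7875 seconds with the kernel defaults.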
# --------------------------------------------------------------------
# The following optimize connection setup
# --------------------------------------------------------------------

# Enable TCP Fast Open (3 = enable for both outgoing and incoming connections).
net.ipv4.tcp_fastopen = 3
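
To load these settings without a reboot, one approach (the file name below is illustrative) is to save them under /etc/sysctl.d/ and reload:

    sudo sysctl -p /etc/sysctl.d/60-custom.conf   # load a single file
    sudo sysctl --system                          # or reload all config files
    sysctl net.core.somaxconn                     # spot-check an applied value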
Some articles describe the disadvantages of net.ipv4.tcp_syncookies. I think only public-facing servers need it: http://serverfault.com/questions/705504/better-alternative-for-tcp-syncookies-in-linux
Ubuntu 10.04 ships the following default setting in sysctl.d/10-network-security.conf:
# Turn on SYN-flood protections. Starting with 2.6.26, there is no loss
# of TCP functionality/features under normal conditions. When flood
# protections kick in under high unanswered-SYN load, the system
# should remain more stable, with a trade off of some loss of TCP
# functionality/features (e.g. TCP Window scaling).
net.ipv4.tcp_syncookies=1
What about slow start (IPv6) and initcwnd for IPv4 and IPv6?
/Edit
In RHEL and CentOS, the default initcwnd is 10:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.2_Release_Notes/networking.html
To check: ss -nli|fgrep cwnd
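If you want to experiment with initcwnd yourself, it can be set per route with iproute2 (the gateway and device below are placeholders):

    ip route change default via 192.0.2.1 dev eth0 initcwnd 10
    ip -6 route change default via 2001:db8::1 dev eth0 initcwnd 10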
Be careful with these settings.
@kgriffs I had an issue when I manually applied this with sysctl -p /etc/sysctl.d/zz-user.conf: all running Docker Swarm services on the server became unreachable, and the server had to be restarted to get them working again.
The performance-oriented settings assume your box is capable of serving enough traffic to hit up against the defaults, so they are not going to be terribly useful unless you have, say, 4 or more cores and 4 GB or more RAM, plus enough network bandwidth to drive a heavy load.