Tuning the Linux Kernel for NGINX Plus


# File Handle Limits, Socket Tuning and Process Scheduler

Linux kernel settings can be set and read using entries in the /proc filesystem or using sysctl.  Permanent settings that should be applied on boot are defined in sysctl.conf.
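For example, a parameter can be read either through `/proc/sys` or with `sysctl`, and changed temporarily with `sysctl -w`; the parameter and value below are just an illustration:

```bash
# Read the current value (two equivalent ways)
cat /proc/sys/net/core/somaxconn
sysctl net.core.somaxconn

# Change it for the running kernel only (lost on the next reboot)
sudo sysctl -w net.core.somaxconn=50000
```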


Put the tweaks from the attached file into `/etc/sysctl.conf`, then run `sysctl -p` to apply them. There is no need to reboot.
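A minimal sketch of that workflow; appending a single line here stands in for the full file listed at the end of this post, and on some distributions a drop-in under `/etc/sysctl.d/` is preferred:

```bash
# Persist one of the tweaks, then reload all settings from /etc/sysctl.conf
echo "net.core.somaxconn = 50000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                  # apply without rebooting
sysctl net.core.somaxconn       # confirm the value is active
```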


## Notes:

 * The following setting enables fast recycling of sockets in the `TIME_WAIT` state, avoiding large numbers of them staying in that state:

```
net.ipv4.tcp_tw_recycle = 1
```

**But this violates the RFC and is known to break connections from clients behind NAT (and reportedly from OpenBSD clients)**, and some experts do not recommend it on Internet-facing servers! (Use with caution according to the kernel documentation!) A quick way to check how many sockets are actually sitting in `TIME_WAIT` is sketched after this list.


 * You won't need the following setting unless you are masquerading (NAT):
```
net.ipv4.netfilter.ip_conntrack_max = 1048576
```

 * The following modifications caused many reported 500 errors; experts have recommended removing them:

```
# Disable TCP SACK (TCP Selective Acknowledgement), DSACK (duplicate TCP SACK), and FACK (Forward Acknowledgement)
# SACK requires enabling tcp_timestamps and adds some packet overhead
# Only advised in cases of packet loss on the network
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_fack = 0
 
# Disable TCP timestamps
# Can have a performance overhead and is only advised in cases where sack is needed (see tcp_sack)
net.ipv4.tcp_timestamps=0
```
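Before and after applying the `TIME_WAIT` and SACK/timestamp tweaks discussed above, it can be useful to look at the current state. A small sketch, assuming the `ss` tool from iproute2 is installed:

```bash
# Count sockets currently sitting in TIME_WAIT
ss -tan | grep -c TIME-WAIT

# Overall socket summary, including timewait totals
ss -s

# Current values of the settings discussed above
sysctl net.ipv4.tcp_sack net.ipv4.tcp_timestamps net.ipv4.tcp_tw_reuse
```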

# Increasing the number of processes that can be run by a user in Linux

This complements the `fs.file-max` setting in `/etc/sysctl.conf`, the system-wide limit on file descriptors.

By default, most Linux distros limit the number of processes that a user can spawn. This is put in place to limit (un)intended cases where a process forks off child processes without limit and brings down the server.

For RHEL (and CentOS), the default is 1024 processes per user. In some cases you do need to increase the number of processes that a particular user can spawn. For example, if you are running a database or an application server, you definitely want to tweak this number, because these apps tend to create a lot of threads.

Check the current limit on the number of processes a user can run by executing `ulimit -u`.

Edit the `/etc/security/limits.conf` file and add the required limits (see the example `limits.conf` listing at the end of this post):


These limits are applied per login session, so logging out and back in is all that is required for them to take effect.

Now log out, log back in, and check again with `ulimit -u` (and `ulimit -n` for the open-file limit):
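A quick sanity check after logging back in, assuming the example `limits.conf` values (131072) shown at the end of this post:

```bash
# Per-user limits for the current session
ulimit -u   # max user processes (nproc); should now report 131072
ulimit -n   # max open file descriptors (nofile); should now report 131072

# System-wide file-descriptor ceiling (fs.file-max from sysctl)
cat /proc/sys/fs/file-max
```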


# Filesystem Tuning

You almost certainly want to disable the "atime" option on your filesystems.

With atime disabled, the time a file was last accessed is no longer updated every time you read the file. Since this information isn't generally useful and causes extra disk writes, it is typically disabled.

To do this, just edit `/etc/fstab` and add "`noatime`" as a mount option for the filesystem. For example:
```
   /dev/rd/c0d0p3          /test                    ext3    noatime             1       2
   UUID=[...] /webserver    ext4    defaults,noexec,nodev,nosuid,noatime        0       2
```

For the changes to take effect reboot the server or remount the partition:
```bash
mount -o remount /webserver
```
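To confirm that the option is active after the remount (`/webserver` is simply the mount point from the fstab example above):

```bash
# Show the active mount options for the filesystem
findmnt -no OPTIONS /webserver

# Or, with plain mount:
mount | grep webserver
```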

```
# /etc/sysctl.conf
## Increase max number of PIDs ##

#Specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels (<=2.4). The value in this file can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million).
kernel.pid_max = 4194303

## File Handle Limits ##

# Increase number of max open-files
fs.file-max = 150000


## Ephemeral Ports ##

# Increase range of ports that can be used
net.ipv4.ip_local_port_range = 1024 60999

## Process Scheduler ##
 
# https://tweaked.io/guide/kernel/
# Forking servers, like PostgreSQL or Apache, scale to much higher levels of concurrent connections if this is made larger
kernel.sched_migration_cost_ns=5000000

## Socket Tuning / General gigabit tuning ##

# https://tweaked.io/guide/kernel/
# Another parameter that can dramatically impact forking servers is sched_autogroup_enabled. Various PostgreSQL users have reported (on the postgresql performance mailing list) gains up to 30% on highly concurrent workloads on multi-core systems. This setting groups tasks by TTY, to improve perceived responsiveness on an interactive system. On a server with a long running forking daemon, this will tend to keep child processes from migrating away as soon as they should. It can be disabled like so:
kernel.sched_autogroup_enabled = 0
 
# https://github.com/ton31337/tools/wiki/tcp_slow_start_after_idle---tcp_no_metrics_save-performance
# Avoid falling back to slow start after a connection goes idle
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save=0
 
# https://github.com/ton31337/tools/wiki/Is-net.ipv4.tcp_abort_on_overflow-good-or-not%3F
net.ipv4.tcp_abort_on_overflow=0
 

# Scale TCP Window
# Enable TCP window scaling (enabled by default)
# https://en.wikipedia.org/wiki/TCP_window_scale_option
# The window scale option increases the receive window size allowed in TCP above its former maximum value of 65,535 bytes. This TCP option, along with several others, is defined in IETF RFC 1323, which deals with long fat networks. It can be enabled with the following setting:
net.ipv4.tcp_window_scaling=1
 
# Enables fast recycling of TIME_WAIT sockets.
# (Use with caution according to the kernel documentation!)
net.ipv4.tcp_tw_recycle = 1
 
# Allow reuse of sockets in TIME_WAIT state for new connections
# only when it is safe from the network stack’s perspective.
net.ipv4.tcp_tw_reuse = 1
 
# Turn on SYN-flood protections
net.ipv4.tcp_syncookies=1
 
# Only retry creating TCP connections twice
# Minimize the time it takes for a connection attempt to fail
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_orphan_retries=2
 
# How many retries TCP makes on unacknowledged data segments before giving up (tcp_retries2, default 15)
# Some guides suggest reducing this value
net.ipv4.tcp_retries2=8
 
# Optimize connection queues
# https://www.linode.com/docs/web-servers/nginx/configure-nginx-for-optimized-performance
# The maximum number of packets allowed to queue on the input side when the interface receives them faster than the kernel can process them. Increasing the value can improve performance on machines with a high amount of bandwidth. Check the kernel log for errors related to this setting, and consult the network card documentation for advice on changing it.
net.core.netdev_max_backlog = 3240000

# The Backlog/Connection Queue
# The maximum number of connections that can be queued for acceptance by NGINX. The default is often very low and that’s usually acceptable because NGINX accepts connections very quickly, but it can be worth increasing it if your website experiences heavy traffic. If error messages in the kernel log indicate that the value is too small, increase it until the errors stop.
# The maximum number of "backlogged sockets" (connection requests queued for any given listening socket). Default is 128.
net.core.somaxconn = 50000

# Increase max number of sockets allowed in TIME_WAIT
net.ipv4.tcp_max_tw_buckets = 1440000

# Maximum number of queued connection requests (half-open connections) still awaiting acknowledgement
net.ipv4.tcp_max_syn_backlog = 3240000
 
## TCP memory tuning ##
# View memory TCP actually uses with: cat /proc/net/sockstat
# *** These values are auto-created based on your server specs ***
# *** Edit these parameters with caution because they will use more RAM ***
# Changes suggested by IBM on https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Welcome%20to%20High%20Performance%20Computing%20%28HPC%29%20Central/page/Linux%20System%20Tuning%20Recommendations

# Increase the default socket buffer read size (rmem_default) and write size (wmem_default)
# *** Maybe recommended only for high-RAM servers? ***
net.core.rmem_default=16777216
net.core.wmem_default=16777216

# Increase the max socket buffer size (optmem_max), max socket buffer read size (rmem_max), max socket buffer write size (wmem_max)
# 16MB per socket - which sounds like a lot, but will virtually never consume that much
# rmem_max and wmem_max set the maximum per-socket receive/send buffer sizes; optmem_max sets the maximum ancillary (option) buffer size per socket
net.core.optmem_max=16777216
net.core.rmem_max=16777216
net.core.wmem_max=16777216

# Configure the Min, Pressure, Max values (units are in page size)
# Useful mostly for very high-traffic websites that have a lot of RAM
# Consider that we already set the *_max values to 16777216
# So you may prefer to comment out these three lines
net.ipv4.tcp_mem=16777216 16777216 16777216
net.ipv4.tcp_wmem=4096 87380 16777216
net.ipv4.tcp_rmem=4096 87380 16777216
 
# Keepalive optimizations
# By default, the keepalive routines wait for two hours (7200 secs) before sending the first keepalive probe,
# and then resends it every 75 seconds. If no response is received to 9 consecutive probes, the connection is marked as broken.
# The default values are: tcp_keepalive_time = 7200, tcp_keepalive_intvl = 75, tcp_keepalive_probes = 9
# We decrease tcp_keepalive_time and tcp_keepalive_intvl as follows (probes stays at the default):
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 9
 
# The TCP FIN timeout specifies how long a socket stays in the FIN-WAIT-2 state before it is freed for reuse by another connection.
# The default is 60 seconds, but it can normally be safely reduced to 30 or even 15 seconds
# https://www.linode.com/docs/web-servers/nginx/configure-nginx-for-optimized-performance
net.ipv4.tcp_fin_timeout = 7

# Increase the max conntrack table size. You won't need the following setting unless you are masquerading (NAT)
net.netfilter.nf_conntrack_max=524288
```

```
# /etc/security/limits.conf

* soft     nproc          131072
* hard     nproc          131072
* soft     nofile         131072
* hard     nofile         131072
root soft     nproc          131072
root hard     nproc          131072
root soft     nofile         131072
root hard     nofile         131072
```
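Once the sysctl and limits changes are in place, a quick way to confirm they apply to the running nginx processes; the `pgrep` pattern assumes the master process is named `nginx`, and note that services started by systemd take their limits from the unit file rather than from `limits.conf`:

```bash
# File-descriptor and process limits of the nginx master process
cat /proc/$(pgrep -o -x nginx)/limits | grep -E 'Max (open files|processes)'

# Kernel-wide values set earlier via sysctl
sysctl fs.file-max kernel.pid_max net.core.somaxconn
```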
