Some optimizations for nginx

1. Compile and Install Optimizations

1. Disable debug mode: By default the Nginx build enables debug mode, which compiles in a lot of tracing and ASSERT information, so the resulting binary is several megabytes in size. With debug mode removed before compiling, the binary is only a few hundred kilobytes, so it is worth modifying the build files to strip it out first.

The specific steps are as follows.

After unpacking the source tarball, edit the build script:

[root@diaodu-0001 nginx-1.12.2]# vim auto/cc/gcc //comment out or delete the following two lines to disable debug mode

# debug
CFLAGS="$CFLAGS -g"
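The same change can also be made non-interactively before building (a sketch only; the configure prefix is an assumption that matches the paths used later in this article):

[root@diaodu-0001 nginx-1.12.2]# sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc    #comment out the -g debug flag
[root@diaodu-0001 nginx-1.12.2]# ./configure --prefix=/usr/local/nginx && make && make install       #rebuild without debug mode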

2. Hide version information:

[root@diaodu-0001 nginx-1.12.2]# vim src/http/ngx_http_header_filter_module.c +48 //Modify source file

static u_char ngx_http_server_string[] = "Server: nginx" CRLF;
static u_char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;
static u_char ngx_http_server_build_string[] = "Server: " NGINX_VER_BUILD CRLF;

Example of the lines after modification:

static u_char ngx_http_server_string[] = "Server: Jacob" CRLF;
static u_char ngx_http_server_full_string[] = "Server: Jacob" CRLF;
static u_char ngx_http_server_build_string[] = "Server: Jacob" CRLF;

After modifying and recompiling, use curl -I to check that the Server response header has changed.
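For example (illustrative only; the exact set of headers depends on your site), the response should now look like this:

[root@diaodu-0001 ~]# curl -I http://127.0.0.1/
HTTP/1.1 200 OK
Server: Jacob
Content-Type: text/html
Connection: keep-alive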

2. Optimizing basic configuration:

#Global (main) configuration
user  nginx nginx;    #Define the startup user for nginx, root is not recommended
worker_processes  4;  #Set to the number of CPU cores; my environment has 4 cores, so 4. Raising this to 8 or more on a 4-core machine gains nothing; pair it with the next directive for a further performance gain
worker_cpu_affinity 0001 0010 0100 1000;  #Bind each worker process to its own CPU core (nginx does not do this by default). Each group of bits is one worker's CPU mask: 1 means the core is used, 0 means it is not, and the masks read from right to left, so the rightmost bit is the first core
//First mask  0001: core 4 off, core 3 off, core 2 off, core 1 on
//Second mask 0010: core 4 off, core 3 off, core 2 on,  core 1 off
//Third mask  0100: core 4 off, core 3 on,  core 2 off, core 1 off
//And so on. On an 8-core or 16-core CPU the pattern is the same (00000001, 00000010, 00000100, ...); the number of bits in each mask equals the number of CPU cores
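//For example (a sketch, not part of this config), the equivalent pair of directives on an 8-core machine would be:
#worker_processes  8;
#worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;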
 
error_log  /data/logs/nginx/error.log crit;      #Error log path and logging level (crit records only critical errors)
pid        /usr/local/nginx/nginx.pid;
 
#Specifies the maximum number of file descriptors that can be opened by each worker process.
worker_rlimit_nofile 65535;    #If this is not set, nginx falls back to the kernel limit (check it with ulimit -a); if that limit is too low, nginx logs "too many open files" errors. Once set here, nginx uses this value instead of the kernel parameter
 
events
{
  use epoll;    #Event-driven connection processing method; epoll is recommended on Linux kernel 2.6 or later
  worker_connections 65535;  #Set the maximum number of connections a worker can open
}
http {
        include       mime.types;
        default_type  application/octet-stream;
 
        #charset  gb2312;
        server_tokens  off;    #Hide the nginx version string on error pages and in the Server header; recommended off to improve security
 
        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;      #Default buffer size for client request headers
        large_client_header_buffers 4 32k;      #Maximum number and size of buffers for large request headers
        client_max_body_size 8m;
 
        sendfile on;      #Use the sendfile() system call so data is copied from disk to the TCP socket directly in the kernel.
        tcp_nopush     on;  #With sendfile, send the response headers and the start of the file in one packet rather than piece by piece
 
        #keepalive_timeout 15;
        keepalive_timeout 120;
 
        tcp_nodelay on;
 
        proxy_intercept_errors on;
        fastcgi_intercept_errors on;
        fastcgi_connect_timeout 1300;
        fastcgi_send_timeout 1300;
        fastcgi_read_timeout 1300;
        fastcgi_buffer_size 512k;
        fastcgi_buffers 4 512k;
        fastcgi_busy_buffers_size 512k;
        fastcgi_temp_file_write_size 512k;
 
        proxy_connect_timeout      20s;
        proxy_send_timeout         30s;
        proxy_read_timeout         30s;
 
 
 
        gzip on;            #Compress responses with gzip, which greatly reduces the amount of data sent to clients
        gzip_min_length  1k;
        gzip_buffers     4 16k;
        gzip_http_version 1.0;
        gzip_comp_level 2;
        gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
        gzip_vary on;
        gzip_disable msie6;
        #limit_zone  crawler  $binary_remote_addr  10m;
 
log_format  main  '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" '
                  '$request_time $upstream_response_time';
 
 #The paths specified by proxy_temp_path and proxy_cache_path must be on the same partition because they are hard-linked
 #proxy_temp_path /var/cache/nginx/proxy_temp_dir;
 #Set the Web cache name to cache_one, the size of the memory cache is 200 MB, content that has not been accessed in one day is automatically cleared, and the size of the hard disk cache is 30 GB.
 #proxy_cache_path /var/cache/nginx/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
 
        include /usr/local/nginx/conf/vhosts/*.conf;
 
        error_page  404   = https://www.niu.com/404/;
        #error_page   500 502 503 504 = http://service.niu.com/alien/;
 
 }
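After editing the main configuration, validate it and reload the running workers (the binary path assumes the /usr/local/nginx prefix used above):

[root@diaodu-0001 ~]# /usr/local/nginx/sbin/nginx -t          #test the configuration syntax
[root@diaodu-0001 ~]# /usr/local/nginx/sbin/nginx -s reload   #graceful reload if the test passes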

3. Optimizing the kernel for high concurrency:

Append the following to the /etc/sysctl.conf file, then run sysctl -p to make it take effect.

The configuration is as follows:

#Maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them
net.core.netdev_max_backlog = 262144

#System-wide limit on the backlog of pending connections for a listening socket
net.core.somaxconn = 262144

#Maximum number of TCP sockets in the system that are not attached to any user file handle; mainly a protection against simple DoS attacks
net.ipv4.tcp_max_orphans = 262144

#Maximum number of remembered connection requests that have not yet received an acknowledgement from the client
net.ipv4.tcp_max_syn_backlog = 262144

#TCP timestamps; recommended to turn off (0) on an Nginx server
net.ipv4.tcp_timestamps = 0

#Number of SYN+ACK packets the kernel sends before giving up on a half-open connection. Establishing a connection requires a three-way handshake; in the second step the kernel replies to the client's SYN with a SYN+ACK. This parameter controls how many times that SYN+ACK is retransmitted; a value of 1 means it is sent only once before the kernel drops the connection.
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
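To apply and spot-check the settings (the output line shown is illustrative):

[root@diaodu-0001 ~]# sysctl -p                          #reload /etc/sysctl.conf
[root@diaodu-0001 ~]# sysctl net.core.somaxconn          #verify a single value
net.core.somaxconn = 262144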

4. Configuring Anti-Crawlers

Reference Blog: https://www.centos.bz/2018/01/nginx%E6%94%AF%E6%8C%81https%E5%B9%B6%E4%B8%94%E6%94%AF%E6%8C%81%E5%8F%8D%E7%88%AC%E8%99%AB/

#Add the following to the nginx virtual host configuration, after the proxy_pass directive
if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) { 
     return 403; 
} 
 
#Prohibit access to specified UA and empty UA 
if ($http_user_agent ~ "WinHttp|WebZIP|FetchURL|node-superagent|java/|FeedDemon|Jullo|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|Java|Feedly|Apache-HttpAsyncClient|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms|BOT/0.1|YandexBot|FlightDeckReports|Linguee Bot|^$" ) { 
     return 403;              
} 
 
#Prohibit non-GET|HEAD|POST grabbing 
if ($request_method !~ ^(GET|HEAD|POST)$) { 
    return 403; 
} 
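A quick way to check the rules (a sketch; 127.0.0.1 stands in for your own virtual host, and note that the first rule already blocks curl's default User-Agent):

[root@diaodu-0001 ~]# curl -I -A "Scrapy" http://127.0.0.1/        #a blocked User-Agent is rejected
HTTP/1.1 403 Forbidden
[root@diaodu-0001 ~]# curl -I -A "Mozilla/5.0" http://127.0.0.1/   #a browser-like User-Agent passes, assuming the backend answers normally
HTTP/1.1 200 OK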

5. Limiting per-IP request rate

A DDoS attacker can open a huge number of concurrent connections, exhausting server resources (connections, bandwidth, and so on) and leaving normal users waiting or unable to reach the server.
Nginx provides the ngx_http_limit_req_module, which can effectively reduce the risk of DDoS attacks:
[root@diaodu-0001 ~]# vim /usr/local/nginx/conf/nginx.conf
... ...
http{
... ...
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    server {
        listen 80;
        server_name localhost;
        limit_req zone=one burst=5;
    }
}
//The limit_req_zone syntax is:
//limit_req_zone key zone=name:size rate=rate;
//In the example above, a 10M shared memory zone named "one" stores per-client-IP state
//1M can hold the state of about 8,000 IPs, so 10M covers roughly 80,000 client hosts; adjust the size as needed
//rate=1r/s accepts only one request per second from each IP; extra requests go into the bucket (burst)
//requests beyond the burst of 5 are rejected with an error
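A rough way to watch the limit work (a sketch; the host is assumed to be this server): fire a burst of concurrent requests. With rate=1r/s and burst=5, one request is served immediately, up to 5 are queued, and the rest are answered with 503.

[root@diaodu-0001 ~]# for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/ & done; wait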

 

6. Define the cache time of static pages in client browsers

[root@proxy ~]# vim /usr/local/nginx/conf/nginx.conf
server {
    listen 80;
    server_name localhost;
    location / {
        root html;
        index index.html index.htm;
    }
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        expires        30d;            //Define a client cache time of 30 days
    }
}
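To confirm the caching headers (a sketch; logo.png is a placeholder for any static file that actually exists under html/):

[root@proxy ~]# curl -I http://127.0.0.1/logo.png
//The response should now include an Expires header 30 days in the future and "Cache-Control: max-age=2592000"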

 

Good articles: https://www.jianshu.com/p/5d6bd48b4c2f
