Nginx installation, reverse proxy implementation and deep optimization

Blog Outline:

1, Installation of Nginx;
2, Nginx service implements reverse proxy;
3, Nginx service optimization

1, Installation of Nginx

The basic concepts of Nginx were covered in detail in a previous blog post, "Detailed explanation of setting up Nginx server and its configuration file", so this post starts directly with the installation.

Environmental preparation:

Three CentOS 7.5 servers: one running Nginx, the other two running simple web services, mainly used to test the effect of the Nginx reverse proxy;
Download the packages I provide; they are required for the cache and compression optimizations when installing Nginx.

-----------------

Note (the goals are as follows):

Combine the proxy and upstream modules to realize back-end web load balancing;
Use the proxy module to cache static files;
Health checks of the back-end servers can be realized with nginx's built-in ngx_http_proxy_module and ngx_http_upstream_module, or with the third-party nginx_upstream_check_module;
Use the nginx-sticky-module extension to maintain sessions;
Use ngx_cache_purge to achieve a more powerful cache-clearing function;
Use the ngx_brotli module to compress web page files.

The sticky, cache-purge and brotli modules mentioned above are third-party extension modules. You need to download their source code in advance (I included these modules in the download link above) and add them at compile time with --add-module=/path/to/module_source.

1. Install Nginx

[root@nginx nginx-1.14.0]# yum -y erase httpd     #Uninstall the system's default httpd service to prevent port conflicts
[root@nginx nginx-1.14.0]# yum -y install openssl-devel pcre-devel    #Install dependencies
[root@nginx src]# rz          #rz command to upload the required source package
[root@nginx src]# ls          #Confirm the uploaded source package
nginx-sticky-module.zip    ngx_brotli.tar.gz
nginx-1.14.0.tar.gz  ngx_cache_purge-2.3.tar.gz
#Decompress the uploaded source package
[root@nginx src]# tar zxf nginx-1.14.0.tar.gz  
[root@nginx src]# unzip nginx-sticky-module.zip 
[root@nginx src]# tar zxf ngx_brotli.tar.gz 
[root@nginx src]# tar zxf ngx_cache_purge-2.3.tar.gz 
[root@nginx src]# cd nginx-1.14.0/        #Switch to nginx directory
[root@nginx nginx-1.14.0]#  ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module  --with-http_realip_module  --with-http_ssl_module --with-http_gzip_static_module  --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre  --add-module=/usr/src/ngx_cache_purge-2.3  --with-http_flv_module --add-module=/usr/src/nginx-sticky-module && make && make install
#Compile and install, using the "--add-module" option to load the required third-party modules
#Note that ngx_brotli is deliberately not loaded above, in order to show later how to add a module after the nginx service has been installed

The above compilation options are explained as follows:

--with-http_stub_status_module: monitor the status of nginx through a web page;
--with-http_realip_module: get the real IP address of the client;
--with-http_ssl_module: enable nginx's encrypted (SSL) transmission function;
--with-http_gzip_static_module: enable serving pre-compressed (gzip) files;
--http-client-body-temp-path=/var/tmp/nginx/client: temporary storage path for client request bodies;
--http-proxy-temp-path=/var/tmp/nginx/proxy: the same, for proxied responses;
--http-fastcgi-temp-path=/var/tmp/nginx/fcgi: the same, for FastCGI responses;
--with-pcre: support regular-expression matching;
--add-module=/usr/src/ngx_cache_purge-2.3: add a third-party nginx module; the syntax is --add-module=/path/to/third_party_module;
--add-module=/usr/src/nginx-sticky-module: the same as above;
--with-http_flv_module: support flv video streaming.

2. Start Nginx service

[root@nginx nginx-1.14.0]# ln -s /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/
#Create a symbolic link for the nginx command so that it can be used directly
[root@nginx nginx-1.14.0]# useradd -M -s /sbin/nologin www
[root@nginx nginx-1.14.0]# mkdir -p /var/tmp/nginx/client
[root@nginx nginx-1.14.0]# nginx -t      #Check nginx configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx nginx-1.14.0]# nginx       #Start nginx service
[root@nginx nginx-1.14.0]# netstat -anpt | grep ":80"    #Check whether port 80 is listening
tcp   0   0 0.0.0.0:80      0.0.0.0:*        LISTEN      7584/nginx: master  
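
Once port 80 is listening, a quick local request is an easy sanity check (a hedged example; any HTTP client works):

[root@nginx nginx-1.14.0]# curl -I http://127.0.0.1/     #Expect a response beginning with "HTTP/1.1 200 OK" and the default nginx welcome page headers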

2, Nginx service implements reverse proxy

Before implementing the reverse proxy, let's first clarify: what is a reverse proxy, and what is a forward proxy?

1. Forward proxy

A forward proxy is used to proxy connection requests from an internal network to the Internet (NAT is a similar idea). The client explicitly specifies the proxy server, and the HTTP request that would otherwise be sent directly to the target web server is sent to the proxy server first; the proxy server then accesses the web server and relays the web server's response back to the client. In this case, the proxy server is a forward proxy.

2. Reverse proxy

In contrast to the forward proxy, if a LAN provides resources to the Internet and allows users on the Internet to access those resources, you can also set up a proxy server; the service it provides is a reverse proxy. The reverse proxy server accepts connections from the Internet, forwards each request to a server on the internal network, and returns the web server's response to the client on the Internet that requested the connection.

In short: the forward proxy stands in for the client, accessing the web server on the client's behalf; the reverse proxy stands in for the web server, responding to clients in the web server's place.
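
To make the contrast concrete, here is a small illustration (the proxy address and site are hypothetical): a forward proxy is something the client points at explicitly, while a reverse proxy is invisible to the client:

curl -x http://10.0.0.1:3128 http://www.example.com/     #forward proxy: the client explicitly routes the request through the proxy
curl http://192.168.20.5/                                #reverse proxy: the client just requests the published address; nginx forwards it to a back-end web server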

3. Configure Nginx as a reverse proxy

Nginx can be configured as a reverse proxy with load balancing; its caching function can cache static pages on nginx to reduce the number of connections to the back-end servers, and it can also check the health of the back-end web servers.

The environment is as follows:

  • One Nginx server acts as the reverse proxy;
  • Two back-end web servers form a web server pool;
  • When a client accesses the Nginx proxy server, refreshing the page several times should return pages from different back-end web servers.

Start to configure Nginx server:

[root@nginx ~]# cd /usr/local/nginx1.14/conf/      #Switch to the specified directory
[root@nginx conf]# vim nginx.conf           #Edit Master profile
             ........................#Omit part of the content
http{
             ........................#Omit part of the content
upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }
            ........................#Omit part of the content
server {
location / {
            #root   html;                            #Comment out the original root directory 
            #index  index.html index.htm;        #Comment this out as well
            proxy_pass http://backend;        #The "backend" specified here must match the name of the web pool defined above.
        }
   }
}
#After editing, save to exit.
[root@nginx conf]# nginx -t            #Check the configuration file and make sure it is correct
[root@nginx conf]# nginx -s reload        #Reload the nginx service so the changes take effect

In the web server pool configuration above there is a "sticky" directive; this is the nginx-sticky-module at work. Through cookie insertion, the module sends requests from the same client (browser) to the same back-end server for processing, which to some extent solves the problem of session synchronization across multiple back-end servers. (Session synchronization is what lets you log in to a site once and not have to log in again within a certain period; that is the concept of a session.) With plain RR polling, the operations staff must arrange session synchronization themselves. The built-in ip_hash can also pin requests by client IP, but it easily causes load imbalance: if the traffic in front of nginx comes from the same LAN, the client IP nginx receives is always the same, so the load skews to one server. The cookie set by nginx-sticky-module expires by default when the browser is closed.
This module is not suitable for browsers that do not support cookies or that have cookies disabled; in that case sticky falls back to RR by default. It cannot be used together with ip_hash.
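
A hedged sketch of the sticky directive with explicit parameters (the parameter names follow the nginx-sticky-module documentation; the values here are illustrative assumptions, not requirements):

upstream backend {
    sticky name=srv_id expires=1h path=/;    #name= sets the cookie name; expires= keeps the cookie across browser restarts; path=/ applies it to the whole site
    server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
}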

sticky is just one of the scheduling algorithms Nginx supports. The following are the other scheduling algorithms supported by Nginx's load-balancing module (a short configuration sketch follows the list):

  • Polling (default, RR): each request is allocated to a different back-end server one by one in chronological order. If a back-end server goes down, the failed node is removed automatically, so user access is not affected. weight specifies the polling weight: the larger the value, the higher the probability of being selected; it is mainly used when the back-end servers have uneven performance.
  • ip_hash: each request is allocated according to the hash of the client IP, so visitors from the same IP always reach the same back-end server, which effectively addresses session sharing for dynamic pages. Of course, if that node becomes unavailable the request is sent to the next node, and without session synchronization at that point the user is effectively logged out.
  • least_conn: the request is sent to the realserver with the fewest active connections; the weight value is also taken into account.
  • url_hash: allocates requests according to the hash of the accessed URL, so each URL is directed to the same back-end server, which can further improve the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm you must install nginx's third-party hash module (ngx_http_upstream_hash).
  • fair: a smarter load-balancing algorithm than the above. It balances load intelligently according to page size and load time, i.e. it allocates requests based on the back-end server's response time, preferring servers that respond quickly. Nginx itself does not support fair; to use this scheduling algorithm you must download nginx's upstream_fair module.
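
For reference, a hedged sketch of what switching the pool to one of these algorithms might look like (only one scheduling directive should be active in a pool at a time):

upstream backend {
    ip_hash;                  #pin clients to back-end servers by client IP (do not combine with sticky)
    #least_conn;              #or: prefer the server with the fewest active connections
    server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
}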

-----------------

About the parameters that follow each web server's IP address in the web-pool configuration above:

  • weight: the polling weight, which can also be used with ip_hash; the default value is 1;
  • max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.
  • fail_timeout: has two meanings: first, allow at most max_fails (here 2) failures within 10s; second, after those failures, do not allocate requests to this server for 10s.

The configuration of the servers in the web server pool is as follows (for reference only; a simple httpd service is used for testing):

[root@web01 ~]# yum -y install httpd            #Install httpd service
[root@web01 ~]# echo "192.168.20.2" > /var/www/html/index.html  #Two web servers prepare different web files
[root@web01 ~]# systemctl start httpd      #Start web Service

The second web server can do the same operation above, just pay attention to prepare different web files, so as to test the effect of load balancing.
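
For completeness, the equivalent steps on the second web server might look like this (the web02 hostname is an assumption):

[root@web02 ~]# yum -y install httpd
[root@web02 ~]# echo "192.168.20.3" > /var/www/html/index.html     #A different page, so the load balancing is visible
[root@web02 ~]# systemctl start httpd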

Client access can now be verified, but note that the nginx proxy server must be able to communicate with both web servers.

Test access on the nginx proxy server itself (you can see the requests being polled across the web servers in the pool):

If a Windows client is used for the access test, every refresh will go to the same web server because of the "sticky" setting in the configuration file, so the load-balancing effect cannot be observed. Just comment out the "sticky" line to test load balancing.
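
curl does not store cookies by default, so for it sticky falls back to RR (as noted above), and a quick loop on the proxy server should alternate between the two pages:

[root@nginx ~]# for i in 1 2 3 4; do curl -s http://127.0.0.1/; done
192.168.20.2
192.168.20.3
192.168.20.2
192.168.20.3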

3, Nginx service optimization

Optimization, apart from controlling the worker processes, centers on a few important concepts, namely caching and web page compression. Because many configuration items are involved, I will write out the complete http {} block of the configuration file below, with comments. At the end of the blog there is an http {} block without comments.

Before optimizing, recall that when compiling and installing Nginx I deliberately left one module out, precisely to demonstrate how to load a module that was not included in the original build.

The configuration is as follows:

[root@nginx conf]# cd /usr/src/nginx-1.14.0/     #Switch to the Nginx source directory
[root@nginx nginx-1.14.0]# nginx -V    #Execute "nginx -V" to view the modules compiled in
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) 
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module --add-module=/usr/src/nginx-sticky-module
[root@nginx nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module --add-module=/usr/src/nginx-sticky-module --add-module=/usr/src/ngx_brotli && make
#Copy the configure arguments found above, re-run configure with them plus the module to be added, then run make only (do not run make install)
#For example, the third-party module "--add-module=/usr/src/ngx_brotli" is appended above
[root@nginx nginx-1.14.0]# mv /usr/local/nginx1.14/sbin/nginx /usr/local/nginx1.14/sbin/nginx.bak
#Rename the original nginx binary to keep a backup
[root@nginx nginx-1.14.0]# cp objs/nginx /usr/local/nginx1.14/sbin/    
#Copy the newly built nginx binary into the corresponding directory
[root@nginx nginx-1.14.0]# ln -sf /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/  
#Refresh the symbolic link so it points at the new nginx command
[root@nginx ~]# nginx -s reload                  #Reload the nginx service

At this point, the new module is added.
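
One caveat: nginx -s reload only re-reads the configuration; the running master process keeps executing the old binary. To actually switch to the new binary without dropping connections, nginx's documented on-the-fly upgrade uses signals (a sketch, assuming the pid file sits at its default location under the install prefix):

[root@nginx ~]# kill -USR2 $(cat /usr/local/nginx1.14/logs/nginx.pid)        #Start a new master process (and workers) from the new binary
[root@nginx ~]# kill -QUIT $(cat /usr/local/nginx1.14/logs/nginx.pid.oldbin) #Gracefully shut down the old master and its workers

Alternatively, simply stopping nginx (nginx -s quit) and starting it again also picks up the new binary.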

1. Proxy cache usage in Nginx

Caching stores static files such as js, css and images from the back-end servers in a cache directory specified by nginx, which not only reduces the burden on the back-end servers but also speeds up access. Cleaning the cache in time is a problem, however, which is why the ngx_cache_purge module is needed: it lets you purge entries manually before the expiration time.

The commonly used directives in the proxy module are proxy_pass and proxy_cache.
The web caching function of nginx is mainly implemented by the proxy_cache and fastcgi_cache instruction sets and their related directives. proxy_cache handles reverse-proxy caching of the back-end servers' static content; fastcgi_cache is mainly used to cache FastCGI dynamic content (caching dynamic pages is not recommended in production).

The configuration is as follows:

http {
 include       mime.types;
    default_type  application/octet-stream;
    upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }
   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'     #Note that the semicolon after this line is removed.
                        '"$upstream_cache_status"';    #Increase this line to record the cache hit rate to the log
    access_log  logs/access.log  main;

        #Add the following lines of configuration
    proxy_buffering on;   #Turn on buffering of the back-end server's response when proxying
    proxy_temp_path /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
# The server field is configured as follows:
server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location ~/purge(/.*) {    #This purge field is used to manually clear the cache
        allow 127.0.0.1;
        allow 192.168.20.0/24;
        deny all;
        proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            proxy_pass http://backend;
    #Add the following cache-related configuration inside the "/" location
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }
}
#After editing, save to exit
[root@nginx conf]# nginx -t        #Check profile
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: [emerg] mkdir() "/usr/local/nginx1.14/proxy_temp" failed (2: No such file or directory)
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test failed
#It complains that the corresponding directory does not exist
[root@nginx conf]# mkdir -p /usr/local/nginx1.14/proxy_temp    #So create the corresponding directory
[root@nginx conf]# nginx -t      #Check again, OK
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax  is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx conf]# nginx -s reload         #Restart Nginx service

Client access test (using the Google Chrome browser; press F12 before accessing):

Press "F5" to refresh:

MISS means a cache miss and the request was sent to the back end; HIT means a cache hit. (On the first visit, the Nginx server has no cached copy of the page, so the request goes to the back-end web server; on the second refresh, the page is in Nginx's local cache, so the status is "HIT".)
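
The same check can be made from a shell by reading the Nginx-Cache response header added in the configuration (a hedged example against the proxy's own address):

[root@nginx ~]# curl -sI http://127.0.0.1/index.html | grep Nginx-Cache
Nginx-Cache: MISS        #First request: not yet cached, sent to the back end
[root@nginx ~]# curl -sI http://127.0.0.1/index.html | grep Nginx-Cache
Nginx-Cache: HIT         #Second request: served from the local cache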

You can also check Nginx's access log to see the recorded cache information:

[root@nginx conf]# tail ../logs/access.log      #View access log

The client then accesses the following address (the client must be in a network segment allowed by location ~/purge(/.*)); this manually clears the cache on the Nginx server before the entry expires (if it does not seem to work, clear the client browser's cache first and retry):

The screenshot here was truncated by mistake, sorry. To clear the cache manually: if the URL visited was "192.168.20.5/index.html", the URL to request when purging is "192.168.20.5/purge/index.html"; if the URL visited was "192.168.20.5", the URL to request when purging manually is "192.168.20.5/purge/".
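
From a shell on an allowed host, the purge request might look like this (address as in the example above; ngx_cache_purge answers with a small "Successful purge" page when the entry existed):

[root@nginx ~]# curl http://192.168.20.5/purge/index.html     #Drop the cached copy of /index.html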

The explanation of the above configuration is as follows:

  • proxy_buffering [on | off]: when proxying, turns buffering of the back-end server's response on or off. When buffering is on, nginx receives the response from the proxied server as quickly as possible and stores it in buffers.
  • proxy_temp_path: the cache temporary directory. The back-end response is not returned directly to the client; it is first written to a temporary file, which is then renamed into the cache under proxy_cache_path. Since version 0.8.9, the temp and cache directories may be on different file systems (partitions), but to reduce performance loss it is recommended to keep them on one file system.
  • proxy_cache_path: sets the cache directory. File names in the directory are the MD5 values of the cache keys.
  • levels=1:2 keys_zone=my-cache:100m means a two-level directory structure is used: the first-level directory name has one character and the second-level name has two characters, as set by levels=1:2. The web cache zone is named my-cache, and the size of the in-memory zone is 100MB; this zone can be referenced multiple times. A cache file name as seen on the file system looks like /usr/local/nginx1.14/proxy_cache/c/29/b7f54b2df7773722d382f4809d65029c (a directory listing follows this list).
  • inactive=600m max_size=2g means content not accessed within 600 minutes is automatically removed, and the maximum cache space on disk is 2GB; beyond this value the least recently used data is evicted.
  • proxy_cache: references the previously defined cache zone, my-cache.
  • proxy_cache_key: defines how the cache key is generated; nginx stores cache entries according to the MD5 hash of this key value.
  • proxy_cache_valid: sets different cache times for different response status codes. Normal results such as 200 and 302 can be cached longer, while 404, 500 and the like are cached for a shorter time; when the time is up the entry expires, whether or not it was just accessed.
  • The add_header directive sets a response header. Syntax: add_header name value.
  • The variable $upstream_cache_status records the cache status; we add an http header in the configuration to expose this status.
    ########## $upstream_cache_status can take the following states: ##########
  • MISS: a cache miss; the request was sent to the back end;
  • HIT: a cache hit;
  • EXPIRED: the cached entry has expired; the request was sent to the back end;
  • UPDATING: the cache is being updated; the old (stale) response is used;
  • STALE: a stale (expired) response was served from the cache;
  • expires: sets Expires: or Cache-Control: max-age in the response header to give the browser's cache expiration time back to the client.
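
After a page has been cached, the two-level directory layout can be inspected directly on disk; a hedged illustration reusing the example hash from above (actual file names will differ):

[root@nginx ~]# ls /usr/local/nginx1.14/proxy_cache/c/29/
b7f54b2df7773722d382f4809d65029c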

2. Optimize the compression function of Nginx service

Change the configuration file as follows (explanations are inline; the uncommented version is at the end of the blog):

http {
    include       mime.types;
    default_type  application/octet-stream;
    brotli on;
    brotli_types text/plain text/css text/xml application/xml application/json;
    brotli_static off;       #Whether to look for pre-compressed files ending in .br; allowed values are on, off and always.
    brotli_comp_level 11;        #Compression level, range 0-11; the larger the value, the higher the compression ratio
    brotli_buffers 16 8k;      #Number and size of the read buffers
    brotli_window 512k;       #Sliding window size
    brotli_min_length 20;    #Minimum response length, in bytes, that will be compressed
    gzip  on;        #Turn on gzip to compress output and reduce network transmission.
    gzip_comp_level 6;     #gzip compression level: 1 gives the lowest ratio but the fastest processing, 9 the highest ratio but the slowest processing (faster transmission but more CPU consumption).
    gzip_http_version 1.1;    #Identifies the HTTP protocol version. Early browsers did not support gzip compression and users would see garbled output, so this option was added to support those versions. If you use nginx as a reverse proxy and expect gzip compression to work, set it to 1.1, since the terminal communication uses the http/1.1 protocol.
    gzip_proxied any;     #When nginx is used as a reverse proxy, decides whether to gzip responses to proxied requests according to the request and the response. Whether a request counts as proxied depends on the "Via" field in the request header. Several different parameters can be specified in this directive at once, with the following meanings:
# off – turns off compression of all proxied response data
# expired – enables compression if the response headers contain "Expires"
# no-cache – enables compression if the response headers contain "Cache-Control: no-cache"
# no-store – enables compression if the response headers contain "Cache-Control: no-store"
# private – enables compression if the response headers contain "Cache-Control: private"
# no_last_modified – enables compression if the response headers do not contain "Last-Modified"
# no_etag – enables compression if the response headers do not contain "ETag"
# auth – enables compression if the request headers contain "Authorization"
# any – enables compression unconditionally
    gzip_min_length 1k;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;      #Related to the HTTP headers: adds a "Vary: Accept-Encoding" response header for proxies and caches. Some browsers support compression and some do not, so whether to compress is judged from the client's HTTP headers, avoiding wasted compression for clients that cannot use it
    client_max_body_size 10m;     #Maximum number of bytes of a single file that clients are allowed to request; raise this limit if large files are uploaded
    client_body_buffer_size 128k;    #Maximum number of bytes of a client request body buffered by the proxy
        server_tokens off;     #Hide the nginx version number
        #Here is the HTTP proxy module:
    proxy_connect_timeout 75;      #Timeout for nginx to connect to the back-end server (proxy connect timeout)
    proxy_send_timeout 75;
    proxy_read_timeout 75;    #Defines the timeout for reading a response from the back-end server. This timeout is the maximum interval between two successive read operations, not the maximum time for the entire response to complete. If the back-end server transmits no data within this period, the connection is closed.
    proxy_buffer_size 4k;    #Sets the size of the buffer that holds the beginning of the response read from the proxied server; this part usually contains a small response header. By default the buffer size equals the size of one buffer set by the proxy_buffers directive, but it can be set smaller.
    proxy_buffers 4 32k;     #Syntax: proxy_buffers number size; sets the number of buffers per connection to number and the size of each buffer to size. These buffers hold the response read from the proxied server. Each buffer defaults to one memory page; whether that is 4K or 8K depends on the platform.
#Attachment:[root@nginx ~]# getconf PAGESIZE     #Viewing the size of a Linux memory page
#4096

    proxy_busy_buffers_size 64k;    #Buffer size under high load (the default is twice the size of a single buffer set by the proxy_buffers directive)
    proxy_temp_file_write_size 64k;    #Limits the amount of data written to a temporary file at one time when caching responses from the proxied server.
    proxy_buffering on;
    proxy_temp_path /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
    upstream backend {
       sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
                      '"$upstream_cache_status"';
    access_log  logs/access.log  main;
    sendfile        on;     #Turn on efficient file transfer mode.
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;        #Keep-alive timeout in seconds. Long connections reduce the cost of re-establishing connections when many small files are requested, but if the value is too large and there are many users, connections held open for a long time will occupy a lot of resources.
    server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location ~/purge(/.*) {
        allow 127.0.0.1;
        allow 192.168.20.0/24;
        deny all;
        proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            proxy_pass http://backend;        #Forward requests to the server list defined by the backend upstream, i.e. the reverse proxy, corresponding to the upstream load balancer. You can also proxy directly to http://ip:port.
            proxy_redirect off;     #Specifies whether to rewrite the Location and Refresh header values in responses returned by the proxied server
#For example: proxy_redirect sets the replacement text for the back-end server's "Location" and
#"Refresh" response headers. Suppose the back-end server returns the response header
#"Location: http://localhost:8000/two/some/uri/"; then the directive
#    proxy_redirect http://localhost:8000/two/ http://frontend/one/;
#rewrites the string to "Location: http://frontend/one/some/uri/".
            proxy_set_header Host $host;  #Allows redefining or adding request headers passed to the back-end server.
#Host indicates the host name of the request. When the nginx reverse proxy forwards the request to the real
#back-end server, the Host field in the request header is rewritten to the server named by proxy_pass. Since
#nginx acts as the reverse proxy, if the back-end real server has anti-leech protection, or routes virtual
#hosts by the Host field of the HTTP request header, a reverse-proxy layer that does not override the Host
#field will cause the request to fail.
            proxy_set_header X-Real-IP $remote_addr;        
#Lets the web server obtain the user's real IP. The real client IP can also be obtained through the
#X-Forwarded-For header below
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#The back-end web server can obtain the user's real IP through X-Forwarded-For. The X-Forwarded-For field
#indicates who originally initiated the HTTP request. If the reverse proxy does not override this request
#header, the back-end real server will treat every request as coming from the reverse proxy server; if the
#back end has a protection policy, that machine would then be blocked. The Host and X-Forwarded-For settings
#above are therefore the two HTTP request headers a reverse-proxy nginx should rewrite.
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
#Add failover. If the back-end server returns 502, 504, execution timeout and other errors,
#The request is automatically forwarded to another server in the upstream load balancing pool for failover.
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
                }
   location /nginx_status {        
                stub_status on;
                access_log off;
                allow 192.168.20.0/24;
                deny all;
            }
          ....................#Omit part of the content
}
#Save the changes and exit
[root@nginx nginx1.14]# nginx -t     #Check profile
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx nginx1.14]# nginx -s reload        #Restart Nginx service

Verification:

1. Visit the following address to view the status statistics page of the Nginx server:

2. Check whether the GZIP function is on:

3. Test whether the br compression function is enabled (command line access is required):
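
A hedged example of how these three checks might be done from a shell (addresses assumed from the environment above; note that a response is only compressed when it exceeds gzip_min_length / brotli_min_length):

[root@nginx ~]# curl http://192.168.20.5/nginx_status      #1. Status statistics page (the request must come from an allowed subnet)
[root@nginx ~]# curl -sI -H "Accept-Encoding: gzip" http://192.168.20.5/ | grep -i content-encoding     #2. Expect "Content-Encoding: gzip"
[root@nginx ~]# curl -sI -H "Accept-Encoding: br" http://192.168.20.5/ | grep -i content-encoding       #3. Expect "Content-Encoding: br"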

Additional: the http {} and server {} fields of the configuration file, without comments, are as follows:

http {
    include       mime.types;
    default_type  application/octet-stream;
    brotli on;
    brotli_types text/plain text/css text/xml application/xml application/json;
    brotli_static off;
    brotli_comp_level 11;
    brotli_buffers 16 8k;
    brotli_window 512k;
    brotli_min_length 20;
    gzip  on;
    gzip_comp_level 6;
    gzip_http_version 1.1;
    gzip_proxied any;
    gzip_min_length 1k;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    server_tokens off;
    proxy_connect_timeout 75;
    proxy_send_timeout 75;
    proxy_read_timeout 75;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_buffering on; 
    proxy_temp_path /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
    upstream backend {
       sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }   

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
                      '"$upstream_cache_status"';
    access_log  logs/access.log  main;
    sendfile        on; 
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65; 

    #gzip  on;
   server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location ~/purge(/.*) {
        allow 127.0.0.1;
        allow 192.168.20.0/24;
        deny all;
        proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 192.168.20.0/24;
                deny all;
            }

        location = /50x.html {
            root   html;
        }
     }
}

————————Thank you for reading————————
