Spring Boot Learning Notes


Introduction: Course Overview

1. What is Spring Boot

Since 2002, Spring has developed rapidly and has become a de facto standard for Java EE (Java Enterprise Edition) development. However, as technology evolved, Java EE development with Spring gradually became cumbersome: projects filled up with XML files, and the tedious configuration, including the configuration needed to integrate third-party frameworks, reduced development and deployment efficiency.

In October 2012, Mike Youngstrom created a feature request in the Spring JIRA asking for support for container-less web application architectures in the Spring framework. He talked about configuring web container services within a Spring container bootstrapped from a main() method. Here is an excerpt from the JIRA request:

I think Spring's web application architecture can be significantly simplified if it provides tools and a reference architecture that leverage the Spring component and configuration model from top to bottom, embedding and unifying the configuration of those common web container services within a Spring container bootstrapped from a simple main() method.

This request prompted the development of the Spring Boot project, which started in early 2013. Today, Spring Boot has reached version 2.0.3.RELEASE. Spring Boot is not a replacement for Spring, but a tool that works closely with the Spring framework to improve the Spring developer experience.

It integrates the configuration of a large number of commonly used third-party libraries; in a Spring Boot application these libraries work almost out of the box with nearly zero configuration. Most Spring Boot applications need only a small amount of Java-based configuration, so developers can focus on the business logic.

2. Why learn Spring Boot

2.1 From Spring's official positioning

Let's open Spring's official website, where we can see the following figure:

In the figure we can see the official positioning of Spring Boot: Build Anything. Spring Boot is designed to get you started and running as quickly as possible, with minimal up-front Spring configuration. Let's also look at the official positioning of the other two:

Spring Cloud: Coordinate Anything;
Spring Cloud Data Flow: Connect Everything.

Savor the wording: the positioning of Spring Boot, Spring Cloud and Spring Cloud Data Flow on the official Spring website is carefully chosen. It also shows how much importance the Spring team attaches to these three technologies, which are worth focusing on now and in the future (Spring Cloud courses will be launched in due course).

2.2 From Spring Boot's advantages

What are Spring Boot's advantages? What problems does it solve for us? Let's look at them one by one.

2.2.1 Good genes

Spring Boot was born alongside Spring 4.0. Literally, Boot means bootstrap, so Spring Boot aims to help developers quickly bootstrap a Spring application. It inherits the excellent genes of the original Spring framework and makes Spring more convenient and efficient to use.

2.2.2 Simplified coding

For example, to create a web project, friends who have used Spring know that we need to add multiple dependencies to the pom file, while Spring Boot helps us stand up a web container quickly. In Spring Boot, we only need to add the following starter dependency to the pom file:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>

As we can see, spring-boot-starter-web already contains multiple dependencies, including the ones we would otherwise have to import into a Spring project. Here are some of them:

<!-- .....Omit other dependencies -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.0.7.RELEASE</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.0.7.RELEASE</version>
    <scope>compile</scope>
</dependency>

From this we can see that Spring Boot greatly simplifies our setup: instead of importing dependencies one by one, we depend on a single starter.

2.2.3 Simplified configuration

Although Spring made Java EE development lighter-weight, it was once regarded as "configuration hell" because of its tedious configuration. The mix of XML and annotation configuration can be dazzling, and when there is a lot of it, errors are hard to track down. Spring Boot favors Java Config for configuring Spring. For instance:

Suppose I create a new class without the @Service annotation, i.e., an ordinary class. How do we make it a bean managed by Spring? We only need the @Configuration and @Bean annotations, as follows:

public class TestService {
    public String sayHello() {
        return "Hello Spring Boot!";
    }
}

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JavaConfig {
    @Bean
    public TestService getTestService() {
        return new TestService();
    }
}

@Configuration indicates that the class is a configuration class, and @Bean indicates that the method returns a bean. In this way, TestService becomes a bean managed by Spring. Wherever we need this bean, we can inject it with the @Resource annotation as before, which is very convenient:

@Resource
private TestService testService;

In addition, in terms of deployment configuration, Spring used to need multiple XML and properties files, while in Spring Boot a single application.yml is usually enough.
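For illustration, here is a minimal sketch of what such a single application.yml might look like (the datasource values below are placeholders, not settings from this course's project):

```yaml
server:
  port: 8080                  # embedded server port
spring:
  datasource:                 # replaces a separate datasource properties/XML file
    url: jdbc:mysql://localhost:3306/demo
    username: root
    password: secret
```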

2.2.4 Simplified deployment

When using plain Spring, deploying a project means installing Tomcat on the server, packaging the project as a war, and dropping it into Tomcat. With Spring Boot we no longer need to install Tomcat on the server: Spring Boot embeds Tomcat, so we just package the project as a jar and start it with java -jar xxx.jar.

It also lowers the basic requirements on the runtime environment: it is enough for the JDK to be available in the environment variables.

2.2.5 Simplified monitoring

We can add the spring-boot-starter-actuator dependency and obtain the runtime metrics of the process directly over REST, which makes monitoring convenient. However, Spring Boot is only a micro framework: it provides no service discovery and registration, no peripheral monitoring integration, and no peripheral security management. In a microservice architecture, Spring Cloud is therefore needed as well.
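As a sketch, assuming the spring-boot-starter-actuator dependency has been added to the pom, the monitoring endpoints can be exposed over HTTP in application.yml (in Spring Boot 2.x, only the health and info endpoints are exposed over the web by default):

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health, info, metrics   # actuator endpoints to expose over HTTP
```

After starting the application, a GET request to localhost:8080/actuator/health returns the application's health status.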

2.3 From the future development trend

Microservices are the trend of future development. Projects are gradually shifting from traditional architecture to microservice architecture, because microservices let different teams focus on a smaller scope of responsibilities, use independent technologies, and deploy more safely and frequently. Spring Boot inherits Spring's excellent characteristics, comes from the same lineage as Spring, and supports various ways of implementing REST APIs. It is also strongly recommended officially by the Spring team. Spring Boot is clearly a major trend for the future.

3. What can be learned in this course

This course uses the latest version, Spring Boot 2.0.3.RELEASE. The course articles are all scenarios and demos the author has distilled from real projects. The goal is to help learners get started with Spring Boot quickly and apply the relevant techniques in microservice projects. The course is divided into two parts: basic and advanced.

The basic part (lessons 01-10) introduces the most commonly used Spring Boot features in projects, aiming to help learners quickly grasp the knowledge needed for development with Spring Boot and apply it in real project architectures. This part takes the Spring Boot framework as the main line and covers JSON data encapsulation, logging, property configuration, MVC support, online documentation, template engines, exception handling, AOP, persistence layer integration, and so on.

The advanced part (lessons 11-17) introduces further Spring Boot techniques used in projects, including several integrated components, so that learners can integrate them quickly when concrete scenarios arise. This part also takes the Spring Boot framework as the main line and covers interceptors, listeners, caching, security authentication, full-text search, message queues, and so on.

After reading this series carefully, learners will understand and master the Spring Boot techniques most commonly used in projects. At the end of the course, the author builds an empty Spring Boot project skeleton based on the course content, also distilled from a real project. Learners can use this skeleton in their own projects and will be equipped to develop real projects with Spring Boot.

All source code of the course is available for free download: Download address.

4. Who this course is for

This course is suitable for the following people:

  • Students or self-taught learners with some Java foundation who know Spring and Maven
  • Developers with traditional project experience who want to move toward microservices
  • People interested in new technologies and Spring Boot
  • Researchers who want to know about Spring Boot 2.0.3

5. Development environment and plug-ins of the course

Development environment of this course:

  • Development tool: IDEA 2017
  • JDK version: JDK 1.8
  • Spring Boot version: 2.0.3 RELEASE
  • Maven version: 3.5.2

Plug-ins involved:

  • FastJson
  • Swagger2
  • Thymeleaf
  • MyBatis
  • Redis
  • ActiveMQ
  • Shiro
  • Lucene

6. Course catalogue

  • Introduction: Course Overview
  • Lesson 01: setting up the Spring Boot development environment and launching the project
  • Lesson 02: Spring Boot returns Json data and data encapsulation
  • Lesson 03: Spring Boot uses slf4j for logging
  • Lesson 04: project property configuration in Spring Boot
  • Lesson 05: MVC support in Spring Boot
  • Lesson 06: Spring Boot integration Swagger2 presents online interface documents
  • Lesson 07: Spring Boot integrates the Thymeleaf template engine
  • Lesson 08: Global exception handling in Spring Boot
  • Lesson 09: Faceted AOP processing in Spring Boot
  • Lesson 10: integrating MyBatis in Spring Boot
  • Lesson 11: Spring Boot transaction configuration management
  • Lesson 12: using listeners in Spring Boot
  • Lesson 13: using interceptors in Spring Boot
  • Lesson 14: integrating Redis in Spring Boot
  • Lesson 15: integrating ActiveMQ in Spring Boot
  • Lesson 16: integrating Shiro in Spring Boot
  • Lesson 17: integrating Lucene in Spring Boot
  • Lesson 18: Spring Boot builds the architecture in the actual project development

Welcome to follow my WeChat official account: Wu brother chat programming.

Lesson 01: setting up the Spring Boot development environment and launching the project

The previous section introduced the features of Spring Boot. This section explains JDK configuration, creating and starting a Spring Boot project, and the project structure.

1. JDK configuration

This course uses IDEA for development. Configuring the JDK in IDEA is very simple: open File -> Project Structure, as shown in the following figure:

  1. Select SDKs
  2. In JDK home path, select the installation directory of the local JDK
  3. In Name, give the JDK a custom name

Through these three steps, the locally installed JDK is imported. If you use STS or Eclipse, you can add it in two steps:

  • Window -> Preferences -> Java -> Installed JREs: add the local JDK.
  • Window -> Preferences -> Java -> Compiler: select a compiler level consistent with the JDK.

2. Creating a Spring Boot project

2.1 Quick build in IDEA

In IDEA, you can quickly build a Spring Boot project via File -> New -> Project. Select Spring Initializr, choose the JDK we just imported as the Project SDK, and click Next to fill in the project's configuration:

  • Group: the organization's domain name; this course uses com.itcodai
  • Artifact: the project name; in this course, each lesson's artifact is the word course plus the lesson number, here course01
  • Dependencies: add the dependencies the project actually needs; this course only needs Web

2.2 Building via the official site

The second method is to build the project via the official site, with the following steps:

  • Visit http://start.spring.io/.
  • On the page, fill in the Spring Boot version, Group, Artifact and project dependencies, then generate the project.
  • After unzipping, import the Maven project into IDEA via File -> New -> Module from Existing Sources and select the extracted project folder. If you use Eclipse, import it via Import -> Existing Maven Projects -> Next and select the extracted folder.

2.3 Maven configuration

After creating the Spring Boot project, you need to configure Maven. Open File -> Settings, search for maven, and configure the local Maven information. As follows:

Select the local Maven installation path under Maven home directory and the path to the local Maven configuration file under User settings file. In the configuration file, we configure Alibaba's mirror for China, so Maven dependencies download very quickly:

<mirror>
	<id>nexus-aliyun</id>
	<mirrorOf>*</mirrorOf>
	<name>Nexus aliyun</name>
	<url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>

If you use Eclipse, configure it via Window -> Preferences -> Maven -> User Settings in the same way as above.

2.4 Encoding configuration

Similarly, after creating a new project we usually need to configure the file encoding. This is very important, and many beginners forget this step, so it is worth making a habit of.

In IDEA, open File -> Settings, search for encoding, and configure the encoding information. As follows:

If you use Eclipse, set the encoding as follows:

  • Window -> Preferences -> General -> Workspace: change Text file encoding to UTF-8
  • Window -> Preferences -> General -> Content Types: select Text and set Default encoding to UTF-8

OK, once the encoding is set, the project can be started.

3. Structure of a Spring Boot project

A Spring Boot project has three main directories, as shown in the following figure:

  • src/main/java: business code
  • src/main/resources: static resources and configuration files
  • src/test/java: test code

By default, as shown in the figure above, a startup class Course01Application is created, annotated with @SpringBootApplication and containing a main method. To start Spring Boot, we only need to run this main method, which is very convenient. In addition, Spring Boot embeds Tomcat internally, so we don't need to configure Tomcat manually; developers only need to focus on the business logic.
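For reference, the generated startup class looks roughly like this (a sketch of what Spring Initializr generates; it needs the Spring Boot dependencies on the classpath to run):

```java
package com.itcodai.course01;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication combines @SpringBootConfiguration, @EnableAutoConfiguration and @ComponentScan
@SpringBootApplication
public class Course01Application {

    public static void main(String[] args) {
        // Starts the Spring context and the embedded Tomcat
        SpringApplication.run(Course01Application.class, args);
    }
}
```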

So far, Spring Boot has been started successfully. In order to see the effect clearly, we write a Controller to test it, as follows:

package com.itcodai.course01.controller;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/start")
public class StartController {

    @RequestMapping("/springboot")
    public String startSpringBoot() {
        return "Welcome to the world of Spring Boot!";
    }
}

Rerun the main method to start the project and enter localhost:8080/start/springboot in the browser. If you see "Welcome to the world of Spring Boot!", congratulations, the project started successfully! Spring Boot is that simple and convenient! The default port number is 8080. If you want to change it, set server.port in the application.yml file, for example to port 8001:

server:
  port: 8001

4. Summary

In this section, we quickly learned how to import the JDK into IDEA, how to configure Maven and the file encoding in IDEA, and how to quickly create and start a Spring Boot project. IDEA supports Spring Boot very well, so we suggest using IDEA to develop Spring Boot. From the next lesson on, we really start learning Spring Boot.
Course source code download address: click to download.

Lesson 02: Spring Boot returns Json data and data encapsulation

In project development, JSON is the usual format for transferring data between interfaces and between front end and back end. In Spring Boot, returning JSON from an interface is very simple: annotating a Controller with @RestController makes it return data in JSON format. @RestController was introduced in Spring 4.0; let's click into it and see what it contains.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Controller
@ResponseBody
public @interface RestController {
    String value() default "";
}

As we can see, the @RestController annotation combines the original @Controller and @ResponseBody annotations. Friends who have used Spring already know @Controller well, so we won't repeat it here. The @ResponseBody annotation converts the returned data structure into JSON, so by default the @RestController annotation returns data in JSON format. Spring Boot's default JSON library is Jackson. If you click into the spring-boot-starter-web dependency in pom.xml, you can see a spring-boot-starter-json dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-json</artifactId>
    <version>2.0.3.RELEASE</version>
    <scope>compile</scope>
</dependency>

Spring Boot encapsulates dependencies well; you can see many spring-boot-starter-xxx dependencies, which is one of Spring Boot's characteristics: there is no need to pull in lots of dependencies by hand, because each starter directly contains the necessary ones. Looking inside the spring-boot-starter-json dependency above:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jdk8</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.module</groupId>
    <artifactId>jackson-module-parameter-names</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>

So far we know that the default JSON library in Spring Boot is Jackson. Let's look at how Jackson converts common data types to JSON by default.

1. Default handling of JSON by Spring Boot

In real projects, the common data structures are essentially class objects, List objects and Map objects. Let's see how Jackson converts these three common data structures into JSON by default.

1.1 Creating the User entity class

In order to test, we need to create an entity class. Here we use User to demonstrate.

public class User {
    private Long id;
    private String username;
    private String password;
	/* Getters, setters and the all-args constructor are omitted */
}

1.2 Creating the Controller class

Then we create a Controller that returns a User object, a List<User> and a Map<String, Object> respectively.

import com.itcodai.course02.entity.User;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@RestController
@RequestMapping("/json")
public class JsonController {

    @RequestMapping("/user")
    public User getUser() {
        return new User(1L, "Ni Sheng Wu", "123456");
    }

    @RequestMapping("/list")
    public List<User> getUserList() {
        List<User> userList = new ArrayList<>();
        User user1 = new User(1L, "Ni Sheng Wu", "123456");
        User user2 = new User(2L, "Master class", "123456");
        userList.add(user1);
        userList.add(user2);
        return userList;
    }

    @RequestMapping("/map")
    public Map<String, Object> getMap() {
        Map<String, Object> map = new HashMap<>(3);
        User user = new User(1L, "Ni Sheng Wu", "123456");
        map.put("Author information", user);
        map.put("Blog address", "http://blog.itcodai.com");
        map.put("CSDN address", "http://blog.csdn.net/eson_15");
        map.put("Number of fans", 4153);
        return map;
    }
}

1.3 Testing the JSON returned for different data types

OK, the interfaces are written, returning a User object, a List and a Map respectively; the values in the Map hold different data types. Let's test them one by one.

Enter: localhost:8080/json/user in the browser to return json as follows:

{"id":1,"username":"Ni Sheng Wu","password":"123456"}

Enter: localhost:8080/json/list in the browser to return json as follows:

[{"id":1,"username":"Ni Sheng Wu","password":"123456"},{"id":2,"username":"Master class","password":"123456"}]

Enter: localhost:8080/json/map in the browser to return json as follows:

{"Author information":{"id":1,"username":"Ni Sheng Wu","password":"123456"},"CSDN address":"http://blog.csdn.net/eson_15","Number of fans":4153,"Blog address":"http://blog.itcodai.com"}

It can be seen that no matter what data type is in the map, it can be converted to the corresponding json format, which is very convenient.

1.4 Handling of null in Jackson

In real projects we inevitably encounter null values, and when transferring JSON we may not want them to appear. For example, we may expect all null values to be serialized as empty strings "". What should we do? In Spring Boot, we can create a Jackson configuration class as follows:

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;

import java.io.IOException;

@Configuration
public class JacksonConfig {
    @Bean
    @Primary
    @ConditionalOnMissingBean(ObjectMapper.class)
    public ObjectMapper jacksonObjectMapper(Jackson2ObjectMapperBuilder builder) {
        ObjectMapper objectMapper = builder.createXmlMapper(false).build();
        objectMapper.getSerializerProvider().setNullValueSerializer(new JsonSerializer<Object>() {
            @Override
            public void serialize(Object o, JsonGenerator jsonGenerator, SerializerProvider serializerProvider) throws IOException {
                jsonGenerator.writeString("");
            }
        });
        return objectMapper;
    }
}

Then we modify the map interface above, setting a few values to null to test:

@RequestMapping("/map")
public Map<String, Object> getMap() {
    Map<String, Object> map = new HashMap<>(3);
    User user = new User(1L, "Ni Sheng Wu", null);
    map.put("Author information", user);
    map.put("Blog address", "http://blog.itcodai.com");
    map.put("CSDN address", null);
    map.put("Number of fans", 4153);
    return map;
}

Restart the project and visit localhost:8080/json/map again. You can see that Jackson has converted all the null fields into empty strings:

{"Author information":{"id":1,"username":"Ni Sheng Wu","password":""},"CSDN address":"","Number of fans":4153,"Blog address":"http://blog.itcodai.com"}
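As an aside, if the goal is simply to leave null fields out of the JSON rather than converting them to empty strings, Spring Boot also exposes this as a plain Jackson property in application.yml, with no custom configuration class needed:

```yaml
spring:
  jackson:
    default-property-inclusion: non_null   # omit null fields from the serialized JSON
```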

2. Using Alibaba's fastJson

2.1 Comparison between Jackson and fastJson

Many friends are used to Alibaba's fastJson for JSON conversion in their projects, and it is what we currently use in ours. What are the differences between Jackson and fastJson? Based on comparisons published online, the following table sums it up:

Aspect                        fastJson          Jackson
Ease of getting started       easy              moderate
Advanced feature support      moderate          rich
Official docs and examples    Chinese           English
JSON processing speed         slightly faster   fast

There is plenty of material comparing fastJson and Jackson online; the main thing is to choose the framework that fits your project. In terms of extensibility, fastJson is not as flexible as Jackson; in terms of speed or ease of getting started, fastJson is worth considering. Our project currently uses Alibaba's fastJson, which is quite convenient.

2.2 Importing the fastJson dependency

Using fastJson requires importing its dependency. This course uses version 1.2.35:

<dependency>
	<groupId>com.alibaba</groupId>
	<artifactId>fastjson</artifactId>
	<version>1.2.35</version>
</dependency>

2.3 Using fastJson to handle null

With fastJson, handling null differs somewhat from Jackson. We need to extend the WebMvcConfigurationSupport class and override the configureMessageConverters method, where we can configure how null values of each type are converted. Note that extending WebMvcConfigurationSupport takes over Spring Boot's default MVC auto-configuration, so use it deliberately. As follows:

import com.alibaba.fastjson.serializer.SerializerFeature;
import com.alibaba.fastjson.support.config.FastJsonConfig;
import com.alibaba.fastjson.support.spring.FastJsonHttpMessageConverter;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.MediaType;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurationSupport;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;

@Configuration
// The class name avoids a clash with the imported com.alibaba FastJsonConfig
public class FastJsonWebConfig extends WebMvcConfigurationSupport {

    /**
     * Use Alibaba fastJson as the JSON message converter
     * @param converters
     */
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        FastJsonHttpMessageConverter converter = new FastJsonHttpMessageConverter();
        FastJsonConfig config = new FastJsonConfig();
        config.setSerializerFeatures(
                // Keep null fields of maps in the output
                SerializerFeature.WriteMapNullValue,
                // Convert null of String type to ''
                SerializerFeature.WriteNullStringAsEmpty,
                // Convert null of type Number to 0
                SerializerFeature.WriteNullNumberAsZero,
                // Convert null of type List to []
                SerializerFeature.WriteNullListAsEmpty,
                // Convert null of Boolean type to false
                SerializerFeature.WriteNullBooleanAsFalse,
                // Avoid circular references
                SerializerFeature.DisableCircularReferenceDetect);

        converter.setFastJsonConfig(config);
        converter.setDefaultCharset(Charset.forName("UTF-8"));
        List<MediaType> mediaTypeList = new ArrayList<>();
        // Avoid garbled Chinese; equivalent to adding produces = "application/json" to @RequestMapping on the Controller
        mediaTypeList.add(MediaType.APPLICATION_JSON);
        converter.setSupportedMediaTypes(mediaTypeList);
        converters.add(converter);
    }
}

3. Encapsulating a unified return data structure

The above are several representative examples of returning JSON from Spring Boot, but in real projects, besides encapsulating the data itself, we often need to add other information to the returned JSON, such as a status code and a msg for the caller, so the caller can apply its own logic based on the code or msg. Therefore, in real projects we need to encapsulate a unified JSON return structure to carry this information.

3.1 Defining a unified JSON structure

Because the type of the encapsulated data is not fixed, we need generics when defining the unified JSON structure. Its attributes include the data, a status code and a message. Constructors can be added according to business needs; generally there should be a default return structure and a caller-specified one. As follows:

public class JsonResult<T> {

    private T data;
    private String code;
    private String msg;

    /**
     * If no data is returned, the default status code is 0, and the prompt message is: operation succeeded!
     */
    public JsonResult() {
        this.code = "0";
        this.msg = "Operation succeeded!";
    }

    /**
     * If no data is returned, the status code and prompt information can be specified manually
     * @param code
     * @param msg
     */
    public JsonResult(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    /**
     * When data is returned, the status code is 0, and the default prompt is: operation succeeded!
     * @param data
     */
    public JsonResult(T data) {
        this.data = data;
        this.code = "0";
        this.msg = "Operation succeeded!";
    }

    /**
     * There is data return, status code is 0, and prompt information is specified manually
     * @param data
     * @param msg
     */
    public JsonResult(T data, String msg) {
        this.data = data;
        this.code = "0";
        this.msg = msg;
    }
    // Omit get and set methods
}
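As a quick sanity check of the constructor behavior, here is a standalone, stripped-down copy of JsonResult (getters added purely for illustration) with a small main method:

```java
// Standalone sketch: a stripped-down JsonResult with getters added for illustration.
public class JsonResultDemo {

    static class JsonResult<T> {
        private T data;
        private String code;
        private String msg;

        // No data returned: default status code "0" and default message
        JsonResult() {
            this.code = "0";
            this.msg = "Operation succeeded!";
        }

        // Data returned: status code "0" with the default message
        JsonResult(T data) {
            this();
            this.data = data;
        }

        T getData() { return data; }
        String getCode() { return code; }
        String getMsg() { return msg; }
    }

    public static void main(String[] args) {
        JsonResult<String> empty = new JsonResult<>();
        JsonResult<String> withData = new JsonResult<>("hello");
        System.out.println(empty.getCode() + " / " + empty.getMsg()); // 0 / Operation succeeded!
        System.out.println(withData.getData());                       // hello
    }
}
```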

3.2 Modifying the Controller return types and testing

Because JsonResult uses generics, any return value type can use this unified structure; in a concrete scenario, just replace the generic type with the concrete data type, which is convenient and easy to maintain. In real projects we can encapsulate further, for example defining an enum for the status code and message so that only the enum needs maintaining (we won't expand on this in this course). Using the JsonResult above, we rewrite the Controller as follows:

@RestController
@RequestMapping("/jsonresult")
public class JsonResultController {

    @RequestMapping("/user")
    public JsonResult<User> getUser() {
        User user = new User(1L, "Ni Sheng Wu", "123456");
        return new JsonResult<>(user);
    }

    @RequestMapping("/list")
    public JsonResult<List<User>> getUserList() {
        List<User> userList = new ArrayList<>();
        User user1 = new User(1, "Ni Sheng Wu", "123456");
        User user2 = new User(2, "Master class", "123456");
        userList.add(user1);
        userList.add(user2);
        return new JsonResult<>(userList, "Get user list succeeded");
    }

    @RequestMapping("/map")
    public JsonResult<Map<String, Object>> getMap() {
        Map<String, Object> map = new HashMap<>(3);
        User user = new User(1, "Ni Sheng Wu", null);
        map.put("Author information", user);
        map.put("Blog address", "http://blog.itcodai.com");
        map.put("CSDN address", null);
        map.put("Number of fans", 4153);
        return new JsonResult<>(map);
    }
}

We re-enter: localhost:8080/jsonresult/user in the browser to return json as follows:

{"code":"0","data":{"id":1,"password":"123456","username":"Ni Sheng Wu"},"msg":"Operation succeeded!"}

Input: localhost:8080/jsonresult/list, and return json as follows:

{"code":"0","data":[{"id":1,"password":"123456","username":"Ni Sheng Wu"},{"id":2,"password":"123456","username":"Master class"}],"msg":"Get user list succeeded"}

Enter: localhost:8080/jsonresult/map. The returned json is as follows:

{"code":"0","data":{"Author information":{"id":1,"password":"","username":"Ni Sheng Wu"},"CSDN address":null,"Number of fans":4153,"Blog address":"http://blog.itcodai.com"},"msg":"Operation succeeded!"}

Through this encapsulation, the json we pass to the front end or to other interfaces carries not only the data but also a status code and prompt message, which is widely used in real project scenarios.
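As a sketch of the enumeration idea mentioned in 3.2, the status code and prompt message could be centralized in an enum like the following (the enum name and entries are illustrative, not part of the course code):

```java
public class ResultCodeDemo {

    // Hypothetical enum centralizing status codes and prompt messages;
    // JsonResult could then accept a ResultCode instead of raw strings.
    public enum ResultCode {
        SUCCESS("0", "Operation succeeded!"),
        FAIL("-1", "Operation failed!");

        private final String code;
        private final String msg;

        ResultCode(String code, String msg) {
            this.code = code;
            this.msg = msg;
        }

        public String getCode() { return code; }
        public String getMsg() { return msg; }
    }

    public static void main(String[] args) {
        // Only the enum needs maintenance when codes or messages change
        System.out.println(ResultCode.SUCCESS.getCode() + " - " + ResultCode.SUCCESS.getMsg());
    }
}
```

A constructor such as JsonResult(ResultCode rc) could then set code and msg from the enum, so callers never pass literal strings.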

4. summary

This section analyzed returning json data in Spring Boot in detail, from Spring Boot's default jackson framework to Alibaba's fastjson framework, explaining the configuration of each. It also summarized, based on real project practice, the json wrapper structure used in actual projects, adding a status code and prompt message so that the returned json carries more complete information.
Course source code download address: click here to download

Lesson 03: Spring Boot uses slf4j for logging

In development, we often use System.out.println() to print information, but this is bad practice: heavy use of System.out increases resource consumption. In real projects we output logs with logback via slf4j, which is very efficient. Spring Boot provides a ready-made logging system, and logback is the best choice.

1. slf4j introduction

Quote from Baidu Encyclopedia:

SLF4J, the Simple Logging Facade for Java, is not a concrete logging solution; it only serves as a facade over various logging systems. In the official words, SLF4J is a simple facade for logging systems that allows the end user to plug in the desired logging system at deployment time.

The general meaning of this passage is: you write the log-recording code in one unified way and do not need to care which logging system outputs the logs or in what style, because that depends on the logging implementation bound at deployment time. For example, if the project records logs through slf4j and binds log4j (i.e. imports the corresponding dependency), the logs are output in log4j's style; if you later want the logs output in logback's style, you only need to replace log4j with logback, without modifying any code in the project. This means almost zero learning cost when third-party components introduce different logging systems. And that is not its only advantage: it also offers simple placeholders and log level checks.
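To make the placeholder advantage concrete, here is a minimal plain-Java sketch of what slf4j-style "{}" substitution does (an illustration of the idea only, not slf4j's actual implementation, which also defers formatting until the level check passes):

```java
public class PlaceholderDemo {

    // Substitute each "{}" in the pattern with the next argument, in order
    public static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (i + 1 < pattern.length()
                    && pattern.charAt(i) == '{' && pattern.charAt(i + 1) == '}'
                    && argIndex < args.length) {
                sb.append(args[argIndex++]);
                i += 2;
            } else {
                sb.append(pattern.charAt(i++));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // With slf4j you would write: logger.info("user {} logged in at {}", name, time);
        System.out.println(format("user {} logged in at {}", "admin", "10:00"));
    }
}
```

The point is that the message template stays readable and no string concatenation happens unless the message is actually emitted.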

Because slf4j has so many advantages, Alibaba has adopted slf4j as its logging framework. In the Alibaba Java Development Manual (official edition), the first rule of the logging conventions mandates the use of slf4j:

1. [Mandatory] Applications must not use the APIs of Log4j or Logback directly, but rather the API of SLF4J. Using a facade-pattern logging framework helps unify maintenance and log handling across all classes.

The word "mandatory" speaks to slf4j's advantages, so slf4j is recommended as the logging framework in real projects. Using slf4j to record logs is very simple: just create a logger with LoggerFactory.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {
    private static final Logger logger = LoggerFactory.getLogger(Test.class);
    // ......
}

2. Log configuration in application.yml

Spring Boot supports slf4j very well; it is integrated internally. Generally we add some configuration when using it. The application.yml file is the configuration file we work with in Spring Boot (when the project is first created it is an application.properties file). Personally I prefer the yml file because it has a clear hierarchy and looks more intuitive, but its format requirements are stricter: for example, there must be a space after the colon, otherwise the project will fail to start without reporting an error. Whether to use properties or yml is a matter of personal habit; this course uses yml.

Let's take a look at the log configuration in the application.yml file:

logging:
  config: logback.xml
  level:
    com.itcodai.course03.dao: trace

logging.config specifies which configuration file to read when the project starts; here it points to the logback.xml file under the root path, where the log-related configuration lives. logging.level specifies the output level of logs in the mappers: the configuration above sets the log level of all mappers under the com.itcodai.course03.dao package to trace, which prints the sql that operates on the database. During development it is set to trace to make locating problems easier; in the production environment it can be reset to the error level (the mapper layer will not be discussed further in this lesson; it comes up later when Spring Boot integrates MyBatis).

The commonly used log levels, from high to low, are ERROR, WARN, INFO and DEBUG.

3. logback.xml configuration file analysis

In the application.yml file above, we specified the log configuration file logback.xml, which is mainly used for log related configuration. In logback.xml, we can define the format, path, console output format, file size, saving time and so on. Let's analyze:

3.1 define log output format and storage path

<configuration>
	<property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
	<property name="FILE_PATH" value="D:/logs/course03/demo.%d{yyyy-MM-dd}.%i.log" />
</configuration>

Let's look at what this definition means: first we define a format named "LOG_PATTERN", in which %date is the date, %thread is the thread name, %-5level is the level padded to a width of five characters from the left, %logger{36} is the logger name truncated to at most 36 characters, %msg is the log message, and %n is the newline character.

Then we define a path named "FILE_PATH" under which the log files are stored. %i is the file index: when a log file reaches the specified size, the logs roll into a new file, producing names like demo.2018-07-01.0.log. The maximum allowed size of a log file can be configured, as explained below. Note that the log storage path should be written as an absolute path on both Windows and Linux systems.

3.2 define console output

<configuration>
	<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
		<encoder>
            <!-- As configured above LOG_PATTERN To print logs -->
			<pattern>${LOG_PATTERN}</pattern>
		</encoder>
	</appender>
</configuration>

The <appender> node named "CONSOLE" sets up console output (class="ch.qos.logback.core.ConsoleAppender"). It outputs using the LOG_PATTERN defined above, referenced with ${}.

3.3 define relevant parameters of log file

<configuration>
	<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<!-- As configured above FILE_PATH Path to save the log -->
			<fileNamePattern>${FILE_PATH}</fileNamePattern>
			<!-- Keep the log for 15 days -->
			<maxHistory>15</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<!-- The maximum of a single log file. If it exceeds the maximum, a new log file store will be created -->
				<maxFileSize>10MB</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>

		<encoder>
			<!-- As configured above LOG_PATTERN To print logs -->
			<pattern>${LOG_PATTERN}</pattern>
		</encoder>
	</appender>
</configuration>

The <appender> node named "FILE" configures the log file output: how long log files are kept, the maximum size of a single log file, the file save path, and the log output format.

3.4 define log output level

<configuration>
	<logger name="com.itcodai.course03" level="INFO" />
	<root level="INFO">
		<appender-ref ref="CONSOLE" />
		<appender-ref ref="FILE" />
	</root>
</configuration>

With the above definitions in place, we use <logger> to set the default log output level in the project, here INFO. Then <root> references the console and file appenders defined above for logs at that level. This completes the configuration in the logback.xml file.
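For reference, the four fragments above all live inside a single <configuration> element; assembled, the logback.xml used in this lesson looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
    <property name="FILE_PATH" value="D:/logs/course03/demo.%d{yyyy-MM-dd}.%i.log" />

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${LOG_PATTERN}</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${FILE_PATH}</fileNamePattern>
            <maxHistory>15</maxHistory>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>10MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>${LOG_PATTERN}</pattern>
        </encoder>
    </appender>

    <logger name="com.itcodai.course03" level="INFO" />
    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>
</configuration>
```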

4. Use Logger to print logs in the project

In the code, we usually use the Logger object to print out some log information. You can specify the level of the printed log and support placeholders, which is very convenient.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/test")
public class TestController {

    private final static Logger logger = LoggerFactory.getLogger(TestController.class);

    @RequestMapping("/log")
    public String testLog() {
        logger.debug("=====Test log debug Level printing====");
        logger.info("======Test log info Level printing=====");
        logger.error("=====Test log error Level printing====");
        logger.warn("======Test log warn Level printing=====");

        // You can use placeholders to print out some parameter information
        String str1 = "blog.itcodai.com";
        String str2 = "blog.csdn.net/eson_15";
        logger.info("======Ni Shengwu's personal blog:{};Ni Shengwu CSDN Blog:{}", str1, str2);

        return "success";
    }
}

Start the project, enter localhost:8080/test/log in the browser, and you can see the log record of the console:

======Test log info Level printing=====
=====Test log error Level printing====
======Test log warn Level printing=====
======Ni Shengwu's personal blog: blog.itcodai.com; Ni Shengwu's CSDN blog: blog.csdn.net/eson_15

Because INFO is a higher level than DEBUG, the DEBUG statement is not printed. If the log level in logback.xml is set to DEBUG, all four statements will be printed; test that yourself. You can also open the D:\logs\course03\ directory, which contains all log records generated since the project started. After a project is deployed, we mostly locate problems by reading the log files.
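The filtering rule just described, a message is printed only when its level is at or above the configured level, can be sketched in plain Java (a simplified model, not logback's actual code):

```java
public class LogLevelDemo {

    // Levels ordered from lowest to highest severity
    public enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    // A message is emitted only if its level is at or above the configured level
    public static boolean isEnabled(Level configured, Level message) {
        return message.ordinal() >= configured.ordinal();
    }

    public static void main(String[] args) {
        System.out.println(isEnabled(Level.INFO, Level.DEBUG)); // false: DEBUG is filtered at INFO
        System.out.println(isEnabled(Level.INFO, Level.ERROR)); // true: ERROR passes at INFO
    }
}
```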

5. summary

This lesson introduced slf4j and explained in detail how to use it to output logs in Spring Boot, focusing on the log configuration in the logback.xml file, including the different log levels. Finally, it used the Logger to print logs at several levels for testing. In real projects these logs are very important information when troubleshooting.
Course source code download address: click here to download

Lesson 04: project property configuration in Spring Boot

We know that a project needs configuration information in many places. This information may be configured differently in the test environment and the production environment, and may be modified later according to the actual business situation. We cannot hard-code these values in the code; it is better to put them in a configuration file, for example application.yml.

1. A small amount of configuration information

For example, in a microservice architecture, the most common case is that one service needs to call other services to obtain the information they provide, so the addresses of the services to call must be configured in this service's configuration file. Suppose the current service needs to call the order microservice to obtain order-related information, and the order service's port number is 8002; we can configure it as follows:

server:
  port: 8001

# Configure the address of the microservice
url:
  # Address of order microservice
  orderUrl: http://localhost:8002

So how do we get the configured order service address in the business code? We can use the @Value annotation: add a property to the corresponding class and annotate it with @Value to fetch the value from the configuration file, as follows:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/test")
public class ConfigController {

    private static final Logger LOGGER = LoggerFactory.getLogger(ConfigController.class);

    @Value("${url.orderUrl}")
    private String orderUrl;
    
    @RequestMapping("/config")
    public String testConfig() {
        LOGGER.info("=====The order service address obtained is:{}", orderUrl);
        return "success";
    }
}

On the @Value annotation, ${key} retrieves the value corresponding to key in the configuration file. Let's start the project; after requesting localhost:8080/test/config in the browser, the console prints the order service address:

=====The order service address obtained is: http://localhost:8002

This shows that we successfully obtained the order microservice address from the configuration file; real projects use exactly this approach. If a service address later changes because of redeployment, we only need to modify the configuration file.

2. Multiple configuration information

Let's extend this a bit. As business complexity grows, a project may contain more and more microservices, and one module may need to call several of them to obtain different information, so the configuration file must hold the addresses of multiple microservices. In the code that calls these microservices, however, introducing each address with its own @Value annotation would be too cumbersome and unscientific.

Therefore, in the actual project, when the business is tedious and the logic is complex, one or more configuration classes need to be encapsulated. For example: if in the current service, a business needs to call order micro service, user micro service and shopping cart micro service at the same time to obtain the relevant information of order, user and shopping cart respectively, and then do some logical processing for these information. In the configuration file, we need to configure the addresses of these microservices:

# Configure addresses for multiple microservices
url:
  # Address of order microservice
  orderUrl: http://localhost:8002
  # Address of user's microservice
  userUrl: http://localhost:8003
  # Address of shopping cart microservice
  shoppingUrl: http://localhost:8004

Maybe in the actual business, there are more than three microservices, or even a dozen. In this case, we can define a MicroServiceUrl class to specifically store the url of the microservice, as follows:

@Component
@ConfigurationProperties(prefix = "url")
public class MicroServiceUrl {

    private String orderUrl;
    private String userUrl;
    private String shoppingUrl;
    // Omit get and set methods
}

As you can see, the @ConfigurationProperties annotation's prefix attribute specifies a prefix, and each property name in the class corresponds one-to-one to the configuration key with the prefix removed: prefix + property name is the key defined in the configuration file. The class also needs the @Component annotation so it is registered as a component in the Spring container and managed by Spring; we can then inject it directly wherever it is used.

Note that using the @ConfigurationProperties annotation requires importing its dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-configuration-processor</artifactId>
	<optional>true</optional>
</dependency>

OK, the configuration is written. Next, let's write a Controller to test it. This time we don't introduce the microservice urls one by one; we directly inject the configuration class with the @Resource annotation, which is very convenient:

@RestController
@RequestMapping("/test")
public class TestController {

    private static final Logger LOGGER = LoggerFactory.getLogger(TestController.class);

    @Resource
    private MicroServiceUrl microServiceUrl;
    
    @RequestMapping("/config")
    public String testConfig() {
        LOGGER.info("=====The order service address obtained is:{}", microServiceUrl.getOrderUrl());
        LOGGER.info("=====The obtained user service address is:{}", microServiceUrl.getUserUrl());
        LOGGER.info("=====The shopping cart service address obtained is:{}", microServiceUrl.getShoppingUrl());

        return "success";
    }
}

Start the project again, and you can see the following information printed out by the console, indicating that the configuration file is effective and the content of the configuration file is acquired correctly:

=====The order service address obtained is: http://localhost:8002
=====The obtained user service address is: http://localhost:8003
=====The shopping cart service address obtained is: http://localhost:8004

3. Specify project profile

As we know, real projects generally have two environments: development and production. Their configurations often differ: environment, ports, databases, related addresses, and so on. After debugging in the development environment and deploying to production, we cannot manually change every configuration item to its production value; that would be too cumbersome and unscientific.

The best solution is that both the development environment and the production environment have a set of configuration information for use. Then when we are developing, we specify to read the configuration of the development environment. When we deploy the project to the server, we specify to read the configuration of the production environment.

We create two new configuration files, application-dev.yml and application-pro.yml, for the development and production environments respectively. For convenience, we give them different ports: 8001 for development and 8002 for production.

# Development environment profile (application-dev.yml)
server:
  port: 8001

# Production environment profile (application-pro.yml)
server:
  port: 8002

Then specify which configuration file to read in the application.yml file. For example, during development we specify reading the application-dev.yml file, as follows:

spring:
  profiles:
    active:
    - dev

In this way, you can specify to read the application-dev.yml file during development, and use port 8001 for access. After deployment to the server, you only need to change the file specified in application.yml to application-pro.yml, and then use port 8002 for access, which is very convenient.
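Besides editing application.yml, the active profile can also be overridden when launching the packaged application, which avoids touching any file during deployment (the jar name below is illustrative):

```shell
# Activate the production profile at startup;
# command-line arguments take precedence over application.yml
java -jar course04-0.0.1-SNAPSHOT.jar --spring.profiles.active=pro
```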

4. summary

This lesson mainly explains how to read the relevant configurations in the business code in Spring Boot, including a single configuration and multiple configuration items. In microservices, this situation is very common, and many other microservices need to be called, so it is a good way to encapsulate a configuration class to receive these configurations. In addition, for example, database related connection parameters can also be put into a configuration class. Other similar scenarios can be handled in this way. At last, it introduces the fast switch mode between development environment and production environment configuration, which saves the modification of configuration information during project deployment.
Course source code download address: click here to download

Lesson 05: MVC support in Spring Boot

Spring Boot's MVC support mainly involves the annotations most commonly used in real projects, including @RestController, @RequestMapping, @PathVariable, @RequestParam and @RequestBody. This lesson introduces the common usage and characteristics of these annotations.

1. @RestController

@RestController is a composite annotation introduced in Spring 4.0 and used heavily in Spring Boot. Let's see what it contains.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Controller
@ResponseBody
public @interface RestController {
    String value() default "";
}

As you can see, the @RestController annotation contains the original @Controller and @ResponseBody annotations. Spring users are already very familiar with @Controller, so it is not repeated here; the @ResponseBody annotation converts the returned data structure into json format. So @RestController can be regarded as the combination of @Controller and @ResponseBody, which saves some effort: after using @RestController we no longer need @Controller. But note one thing: if the front end and back end are separated and no template rendering (such as Thymeleaf) is needed, we can use @RestController directly to hand data to the front end as json, which the front end then parses; but if they are not separated and the Controller needs to return a specific rendered page, then @RestController cannot be used, for example:

public String getUser() {
	return "user";
}

Here we actually need to return the user.html page. With @RestController, "user" would be returned as a plain string, so in this case the @Controller annotation is required. This is explained in the lesson on integrating the Thymeleaf template engine with Spring Boot.

2. @RequestMapping

@RequestMapping is an annotation for mapping request addresses, usable on classes or methods. At the class level, the annotation maps a request path to the controller: all request-handling methods in the class use this address as the parent path. At the method level, it further narrows the mapping to the specific handler method.

This annotation has six attributes; three of them are commonly used in projects: value, method and produces.

  • value attribute: specifies the actual address of the request; the "value =" part can be omitted
  • method attribute: specifies the request type, mainly GET, PUT, POST and DELETE; the default is GET
  • produces attribute: specifies the returned content type, e.g. produces = "application/json; charset=UTF-8"

The @RequestMapping annotation is fairly simple to use, for example:

@RestController
@RequestMapping(value = "/test", produces = "application/json; charset=UTF-8")
public class TestController {

    @RequestMapping(value = "/get", method = RequestMethod.GET)
    public String testGet() {
        return "success";
    }
}

This is very simple. Start the project and type localhost:8080/test/get in the browser to test it.

Each of the four request methods has a dedicated annotation, so you don't need to add the method attribute to @RequestMapping every time. The GET request above can use @GetMapping("/get") directly, with the same effect. Correspondingly, the annotations for PUT, POST and DELETE are @PutMapping, @PostMapping and @DeleteMapping.

3. @PathVariable

The @PathVariable annotation is mainly used to obtain parameters from the url. Spring Boot supports restful-style urls: for example, when a GET request carries a parameter id in the path, we can receive it as a method parameter with @PathVariable, as follows:

@GetMapping("/user/{id}")
public String testPathVariable(@PathVariable Integer id) {
	System.out.println("The id obtained is: " + id);
	return "success";
}

Note one issue here: if we want the placeholder value in the url to be bound directly to the parameter id, the placeholder name in the url must match the method parameter name; otherwise the value cannot be received. If they differ, it can still be solved: use the value attribute of @PathVariable to specify the mapping, as follows:

@RequestMapping("/user/{idd}")
public String testPathVariable(@PathVariable(value = "idd") Integer id) {
	System.out.println("The id obtained is: " + id);
	return "success";
}

In the url being accessed, the placeholder can be anywhere, not necessarily at the end, e.g. /xxx/{id}/user. The url also supports multiple placeholders, received by the same number of method parameters; the principle is the same as for a single parameter, for example:

@GetMapping("/user/{idd}/{name}")
public String testPathVariable(@PathVariable(value = "idd") Integer id, @PathVariable String name) {
	System.out.println("The id obtained is: " + id);
	System.out.println("The name obtained is: " + name);
	return "success";
}

Run the project and request localhost:8080/test/user/2/zhangsan in the browser. You can see the output of the console as follows:

The id obtained is: 2
The name obtained is: zhangsan

Thus multiple parameters can be received. Likewise, if a parameter name in the url differs from the method parameter name, use the value attribute to bind the two.

4. @RequestParam

As the name implies, the @RequestParam annotation is also used to obtain request parameters. We mentioned above that @PathVariable obtains request parameters too, so what is the difference between the two? Mainly this: @PathVariable takes the parameter value from the url template, i.e. urls of the style http://localhost:8080/user/{id}; @RequestParam takes the parameter value from the request, i.e. urls of the style http://localhost:8080/user?id=1. We test the following code with a url carrying the parameter id:

@GetMapping("/user")
public String testRequestParam(@RequestParam Integer id) {
	System.out.println("The id obtained is: " + id);
	return "success";
}

The id is printed on the console normally. As before, the parameter name in the url and the method parameter name must match; if they differ, use the value attribute to map them. For example, with the url http://localhost:8080/user?idd=1:

@RequestMapping("/user")
public String testRequestParam(@RequestParam(value = "idd", required = false) Integer id) {
	System.out.println("The id obtained is: " + id);
	return "success";
}

In addition to the value attribute, there are two more commonly used attributes:

  • required attribute: true means the parameter must be passed, otherwise an error (HTTP 400) is reported; false means it is optional.
  • defaultValue attribute: the default value, used when the request contains no parameter with that name.
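As a sketch of these two attributes (the mapping path and parameter names are illustrative, not from the course code), a handler with paging defaults might look like:

```java
// Requesting /user_page with no query string falls back to the defaults;
// giving defaultValue makes the parameter effectively optional
@GetMapping("/user_page")
public String testDefaultValue(
        @RequestParam(value = "pageNum", defaultValue = "1") Integer pageNum,
        @RequestParam(value = "pageSize", defaultValue = "10") Integer pageSize) {
    System.out.println("pageNum: " + pageNum + ", pageSize: " + pageSize);
    return "success";
}
```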

As the urls show, @RequestParam on a GET request receives parameters spliced into the url. The annotation can also be used on POST requests to receive parameters submitted by front-end forms. If the front end submits two parameters, username and password, we can receive them with @RequestParam in the same way:

@PostMapping("/form1")
public String testForm(@RequestParam String username, @RequestParam String password) {
	System.out.println("The username obtained is: " + username);
	System.out.println("The password obtained is: " + password);
	return "success";
}

We can use Postman to simulate a form submission and test this interface.

Here comes a problem: if the form has many fields, we cannot write that many parameters in the backend method, each needing its own @RequestParam annotation. In that case we encapsulate an entity class to receive the parameters, with property names matching the form field names:

public class User {
	private String username;
	private String password;
	// set get
}

When receiving with an entity, do not add the @RequestParam annotation in front of the parameter; just use the entity directly:

@PostMapping("/form2")
public String testForm(User user) {
	System.out.println("The username obtained is: " + user.getUsername());
	System.out.println("The password obtained is: " + user.getPassword());
	return "success";
}

Test the form submission again with Postman and observe the return value and the console log. In real projects an entity class is usually encapsulated to receive form data, because real forms carry many fields.

5. @RequestBody

The @RequestBody annotation is used to receive an entity from the front end: the receiving parameter is the corresponding entity type. For example, the front end submits the parameters username and password as json; on the back end we encapsulate an entity to receive them. When many parameters are passed, @RequestBody is very convenient. For example:

public class User {
	private String username;
	private String password;
	// set get
}
@PostMapping("/user")
public String testRequestBody(@RequestBody User user) {
	System.out.println("The username obtained is: " + user.getUsername());
	System.out.println("The password obtained is: " + user.getPassword());
	return "success";
}

We use the Postman tool to test the effect: open Postman, enter the request address and the parameters, simulating the parameters with json, as shown in the figure below. The call returns success.

Also look at the log output from the background console:

The username obtained is: Ni Shengwu
The password obtained is: 123456

As you can see, the @RequestBody annotation is used on POST requests to receive json entity parameters. It is similar to the form submission introduced above; only the parameter format differs: one is a json entity, the other a form submission. In a real project, use whichever annotation fits the scenario.

6. Summary

This lesson mainly explains Spring Boot's support for MVC, analyzing the usage of @RestController, @RequestMapping, @PathVariable, @RequestParam and @RequestBody. Because @ResponseBody is already included in @RestController, the annotation for returning JSON is not described separately. These annotations are used frequently and appear in virtually every real project, so they should be mastered.

Course source code download address: click here to download

Lesson 06: Spring Boot integration Swagger2 presents online interface documents

1. About swagger

1.1 problems solved

With the development of Internet technology, website architecture has largely shifted from back-end rendering to front-end/back-end separation, and front-end and back-end technologies have gone further down their separate paths. The API interface has become the only connection between the two sides, so the API document becomes the link between front-end and back-end developers, and grows more and more important.

Here is the problem: as the code is continuously updated, developers under a heavy workload often fail to keep the documents up to date after developing new interfaces or modifying old ones. Swagger is an important tool for solving this problem. Developers no longer need to hand interface users a document; they just provide the Swagger address, which displays the online API documentation. In addition, callers of an interface can test its data online, and likewise, while developing an interface, developers can use the Swagger online documentation to test it themselves, which is very convenient.

1.2 official swagger

If we open the Swagger official website, the official definition of Swagger is:

The Best APIs are Built with Swagger Tools

It can be seen that Swagger is very confident about its function and position, and since it is indeed very easy to use, this positioning is reasonable. As shown in the figure below:

This lesson focuses on how to import the Swagger2 tool into Spring Boot to present the interface documentation of a project. The Swagger version used in this lesson is 2.2.2. Let's take a tour of Swagger2.

2. Swagger2's Maven dependency

To use the Swagger2 tool, you must import its Maven dependencies. The latest official version at the time of writing is 2.8.0, but after trying it I felt the page display was not good: it is not compact enough and not convenient to operate. Besides, the newest version is not necessarily the most stable one. We currently use version 2.2.2 in our actual projects; it is stable and friendly, so this lesson uses 2.2.2. The dependencies are as follows:

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.2.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.2.2</version>
</dependency>

3. Configuration of swagger2

Swagger2 needs to be configured, and configuring it in Spring Boot is very convenient. Create a new configuration class; besides the required @Configuration annotation, the Swagger2 configuration class also needs the @EnableSwagger2 annotation.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

/**
 * @author shengwu ni
 */
@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket createRestApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                // Specify how to build the details of the api document: apiInfo()
                .apiInfo(apiInfo())
                .select()
                // Specify the package path to generate the api interface. Here, take the controller as the package path to generate all interfaces in the controller
                .apis(RequestHandlerSelectors.basePackage("com.itcodai.course06.controller"))
                .paths(PathSelectors.any())
                .build();
    }

    /**
     * Build api documentation details
     * @return
     */
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                // Set page title
                .title("Spring Boot Integrate Swagger2 Interface Overview")
                // Set interface description
                .description("Learn with brother Wu Spring Boot Lesson 06")
                // Set contact
                .contact("Ni Shengwu," + "CSDN: http://blog.csdn.net/eson_15")
                // Set version
                .version("1.0")
                // structure
                .build();
    }
}

In this configuration class, each method's function has been explained in detail with comments, so it will not be repeated here. So far we've configured Swagger2; now we can test whether the configuration works. Start the project, enter localhost:8080/swagger-ui.html in the browser, and you can see the Swagger2 interface page, as shown in the figure below, indicating that the Swagger2 integration is successful.

Comparing the figure against the Swagger2 configuration class above, you can clearly see what each method in the configuration class does, which makes the Swagger2 configuration easy to understand and master. It also shows that configuring Swagger2 is very simple.

[Friendly tip] Many readers may encounter the following situation when configuring Swagger: an error popup that cannot be dismissed. This is caused by the browser cache; clearing the browser cache solves the problem.


4. Use of swagger2

We have configured Swagger2 and tested that it works. Now let's start using Swagger2, mainly introducing the common Swagger2 annotations on entity classes, Controller classes, and Controller methods. Finally, we will see how Swagger2 presents the online interface documentation on the page, and test interface data through the Controller methods.

4.1 entity class annotations

In this section, we build a User entity class, mainly to introduce the @ApiModel and @ApiModelProperty annotations in Swagger2 and to prepare for the tests later.

import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;

@ApiModel(value = "User entity class")
public class User {

    @ApiModelProperty(value = "User unique identification")
    private Long id;

    @ApiModelProperty(value = "User name")
    private String username;

    @ApiModelProperty(value = "User password")
    private String password;

	// Omit set and get methods
}

A word about the @ApiModel and @ApiModelProperty annotations:

The @ApiModel annotation is used on an entity class to describe the class when the entity is used to receive parameters.
The @ApiModelProperty annotation is used on a property in the class to describe the model property or a data operation change.

The specific effect of this annotation in the online API documentation is described below.

4.2 related annotations in the Controller class

Let's write a TestController with a few interfaces, and then learn the Swagger2-related annotations in the Controller.

import com.itcodai.course06.entiy.JsonResult;
import com.itcodai.course06.entiy.User;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/swagger")
@Api(value = "Swagger2 Online interface documentation")
public class TestController {

    @GetMapping("/get/{id}")
    @ApiOperation(value = "Get user information according to user unique ID")
    public JsonResult<User> getUserInfo(@PathVariable @ApiParam(value = "User unique identification") Long id) {
        // Get User information according to id in simulation database
        User user = new User(id, "Ni Sheng Wu", "123456");
        return new JsonResult(user);
    }
}

Let's look at the @Api, @ApiOperation and @ApiParam annotations.

The @Api annotation is used on a class to indicate that the class is a Swagger resource.
The @ApiOperation annotation is used on a method to describe an HTTP request operation.
The @ApiParam annotation is used on a parameter to describe the parameter.

JsonResult here is the entity encapsulated in lesson 02 when we learned to return JSON data. The above are the five most commonly used annotations in Swagger. Next, run the project and enter localhost:8080/swagger-ui.html in the browser to see the Swagger page.
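For reference, the JsonResult wrapper used by these examples (from lesson 02) is roughly shaped like the following. This is a sketch: the field names and default values are assumptions based on how the class is used in this course, not the verbatim course source.

```java
/**
 * Sketch of the generic JSON wrapper the Swagger examples return.
 * Defaults ("200", "Successful operation") are assumed from how it is used above.
 */
public class JsonResult<T> {

    private String code;   // status code, "200" on success
    private String msg;    // human-readable message
    private T data;        // the actual payload, e.g. a User

    public JsonResult() {
        this.code = "200";
        this.msg = "Successful operation";
    }

    public JsonResult(T data) {
        this();
        this.data = data;
    }

    public String getCode() { return code; }
    public String getMsg() { return msg; }
    public T getData() { return data; }
}
```

With this shape, both `new JsonResult(user)` and `new JsonResult<>()` in the controllers above compile and produce a uniform response body.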

As you can see, the Swagger page displays the interface information very comprehensively. The role of each annotation and where it appears are marked in the figure above, and all the interface information can be learned from the page. We can then test the interface online directly: enter 1 as the id and look at the returned data:

As you can see, the JSON data is returned directly on the page, so developers can use the online interface to test whether the data is correct, which is very convenient. That covers a single input parameter; what does Swagger look like when the input parameter is an object? Let's write another interface.

@PostMapping("/insert")
@ApiOperation(value = "Add user information")
public JsonResult<Void> insertUser(@RequestBody @ApiParam(value = "User information") User user) {
    // Process add logic
    return new JsonResult<>();
}

Restart the project, enter localhost:8080/swagger-ui.html in the browser to see the effect:

5. Summary

OK, this lesson analyzed in detail the advantages of Swagger and how Spring Boot integrates Swagger2, including the configuration, the related annotations on entity classes and interface classes, and how to use them. Finally, through page testing, we experienced the power of Swagger. It is basically a necessary tool in every project team, and mastering it is not difficult.

Course source code download address: click here to download

Lesson 07: Spring Boot integrates the Thymeleaf template engine

1. Introduction to thymeleaf

Thymeleaf is a modern server-side Java template engine for Web and standalone environments.
The main goal of Thymeleaf is to bring elegant natural templates to your development workflow - HTML that can be displayed correctly in the browser or used as static prototypes to achieve more powerful collaboration in the development team.

The above is translated from the Thymeleaf official website. The traditional combination of JSP and JSTL has had its day; Thymeleaf is a modern server-side template engine. Unlike traditional JSP, a Thymeleaf page can be opened directly in a browser, because the browser ignores the extra attributes, which is equivalent to opening a plain page; this brings real convenience to front-end developers.

What does that mean? It means a Thymeleaf page works both statically and with a running server. Because Thymeleaf supports HTML prototypes and adds extra attributes to HTML tags to achieve its "template + data" display mode, designers can view the page effect directly in the browser, and once the service is started, back-end developers can view the dynamic page effect with real data. For example:

<div class="ui right aligned basic segment">
      <div class="ui orange basic label" th:text="${blog.flag}">Static original information</div>
</div>
<h2 class="ui center aligned header" th:text="${blog.title}">This is a static title</h2>

As above, the static text is displayed in the static page; after the service is started and the data is fetched from the database, the dynamic data is displayed instead. The th:text attribute replaces the text dynamically, as described below. This example shows that when a browser interprets HTML, it ignores attributes it does not recognize (such as th:text), so a Thymeleaf template can run statically; when data is returned to the page, the Thymeleaf attributes dynamically replace the static content, making the page display dynamic data.

2. Dependency import

To use the Thymeleaf template in Spring Boot, you need to introduce its dependency. You can check Thymeleaf when creating the project, or import it manually afterwards, as follows:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>

In addition, to use Thymeleaf attributes in an HTML page, you need to declare the namespace:

<html xmlns:th="http://www.thymeleaf.org">

3. Relevant configurations of thymeleaf

Because Thymeleaf already ships with sensible defaults, we don't need to configure much. One thing to note is that Thymeleaf enables page caching by default, so during development we need to turn the cache off. The configuration is as follows.

spring:
  thymeleaf:
    cache: false # disable the cache

Otherwise the cache will keep the page from reflecting updates in time. For example, you modify a file and it has been deployed to Tomcat, but the refreshed page is still the old one; that is caused by the cache.
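If the project uses application.properties rather than YAML, the equivalent setting is the single line below (spring.thymeleaf.cache is a standard Spring Boot property):

```properties
# application.properties equivalent: disable the Thymeleaf page cache during development
spring.thymeleaf.cache=false
```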

4. Use of thymeleaf

4.1 accessing static pages

This has nothing to do with Thymeleaf itself; it is general-purpose. The reason for covering it here is that when building a website we usually prepare a 404 page and a 500 page, to show users something friendly when an error occurs instead of dumping a pile of exception information. Spring Boot automatically recognizes 404.html and 500.html under the templates/error/ directory, so we create a new error folder under templates/ to hold the error pages, and print some identifying text in each. Take 404.html as an example:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
    This is the 404 page
</body>
</html>

Let's write another controller to test the 404 and 500 pages:

@Controller
@RequestMapping("/thymeleaf")
public class ThymeleafController {

    @RequestMapping("/test404")
    public String test404() {
        return "index";
    }

    @RequestMapping("/test500")
    public String test500() {
        int i = 1 / 0;
        return "index";
    }
}
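Both handlers above return the view name index, so an index.html must exist under templates/. A minimal placeholder (an assumption for this walkthrough; it is not shown in the course source) could be:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Index</title>
</head>
<body>
    This is the index page
</body>
</html>
```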

When we enter localhost:8080/thymeleaf/test400 in the browser (a deliberately wrong address with no matching method), we are taken to the 404.html page.
When we enter localhost:8080/thymeleaf/test500 in the browser, an exception is thrown and we are automatically taken to the 500.html page.

[Note] There is a pitfall here. In an earlier lesson we said microservices are moving toward front/back-end separation, and we used the @RestController annotation on the Controller layer, which automatically converts the returned data to JSON. However, when using a template engine, the Controller layer cannot be annotated with @RestController, because with Thymeleaf the return value is a view name: the Controller above returns the index.html page, and with @RestController the "index" would be treated as a String and returned to the page directly instead of resolving the index.html page (you can try it). So use the @Controller annotation when using templates.

4.2 processing objects in thymeleaf

Let's look at how to handle object information in a Thymeleaf template. For example, a personal blog needs to show the blogger's information on the front end, so we encapsulate it in a Blogger object:

public class Blogger {
    private Long id;
    private String name;
    private String pass;
	// Omit set and get
}

Then initialize it in the Controller layer:

@GetMapping("/getBlogger")
public String getBlogger(Model model) {
	Blogger blogger = new Blogger(1L, "Ni Sheng Wu", "123456");
	model.addAttribute("blogger", blogger);
	return "blogger";
}

We initialize a Blogger object, put it into the Model, and return the blogger.html page for rendering. Next, we write blogger.html to render the blogger information:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Blogger information</title>
</head>
<body>
<form action="" th:object="${blogger}" >
    User number:<input name="id" th:value="${blogger.id}"/><br>
    User name:<input type="text" name="username" th:value="${blogger.getName()}" /><br>
    Login password:<input type="text" name="password" th:value="*{pass}" />
</form>
</body>
</html>

As you can see, in a Thymeleaf template we use th:object="${}" to bind the object, and there are then three ways to read the object's properties in the form:

Use th:value="*{propertyName}", which reads from the object bound with th:object above
Use th:value="${object.propertyName}", where object refers to the object bound with th:object above
Use th:value="${object.getXxx()}", calling the getter on that same object

As you can see, you can write Java-like expressions in Thymeleaf, which is very convenient. Enter localhost:8080/thymeleaf/getBlogger in the browser to test it and see the data:

4.3 List processing in thymeleaf

Handling a List is similar to handling the object above, except that it needs to be traversed in the Thymeleaf template. Let's first simulate a List in the Controller.

@GetMapping("/getList")
public String getList(Model model) {
    Blogger blogger1 = new Blogger(1L, "Ni Sheng Wu", "123456");
    Blogger blogger2 = new Blogger(2L, "Master class", "123456");
    List<Blogger> list = new ArrayList<>();
    list.add(blogger1);
    list.add(blogger2);
    model.addAttribute("list", list);
    return "list";
}

Next, we write a list.html to get the list information, and then traverse the list in list.html. As follows:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Blogger information</title>
</head>
<body>
<form action="" th:each="blogger : ${list}" >
    User number:<input name="id" th:value="${blogger.id}"/><br>
    User name:<input type="text" name="username" th:value="${blogger.name}"/><br>
    Login password:<input type="text" name="password" th:value="${blogger.getPass()}"/>
</form>
</body>
</html>

As you can see, it is much like handling a single object. Thymeleaf traverses with th:each, ${list} takes the attribute passed through the Model, and each element taken from the list gets a custom name, here blogger. Inside the form you can read a property directly with ${object.propertyName} or with ${object.getXxx()}, just as with the single object above; but you cannot use *{propertyName} here, because without a bound object the Thymeleaf template cannot resolve the property.

4.4 other commonly used thymeleaf operations

Let's summarize some commonly used attributes in Thymeleaf, as follows:

| Tag | Function | Example |
| --- | --- | --- |
| th:value | assign a value to a property | <input th:value="${blog.name}" /> |
| th:style | set a style | th:style="'display:' + @{(${sitrue} ? 'none' : 'inline-block')}" |
| th:onclick | click event | th:onclick="'getInfo()'" |
| th:if | conditional judgment | <a th:if="${userId == collect.userId}"> |
| th:href | hyperlink | <a th:href="@{/blogger/login}">Login</a> |
| th:unless | conditional judgment, the opposite of th:if | <a th:href="@{/blogger/login}" th:unless="${session.user != null}">Login</a> |
| th:switch | matched with th:case | <div th:switch="${user.role}"> |
| th:case | used with th:switch | <p th:case="'admin'">administrator</p> |
| th:src | resource address | <img alt="csdn logo" th:src="@{/img/logo.png}" /> |
| th:action | form submission address | <form th:action="@{/blogger/update}"> |
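To see several of these attributes working together, here is a small illustrative fragment. It is a sketch: the user model attribute and the link targets are assumptions for the example, not code from the course.

```html
<!-- assumes a "user" attribute in the Model; the link targets are illustrative -->
<div th:switch="${user.role}">
    <p th:case="'admin'">administrator</p>
    <p th:case="*">ordinary user</p>
</div>
<a th:if="${user != null}" th:href="@{/blogger/update}">Edit</a>
<a th:unless="${user != null}" th:href="@{/blogger/login}">Login</a>
```

th:case="*" is Thymeleaf's default branch, rendered when no other th:case matches.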

There are many other uses of Thymeleaf that will not be summarized here. For details, refer to Thymeleaf's official documentation (v3.0). The main thing is to learn how to use Thymeleaf in Spring Boot; when you encounter a particular attribute or method, just consult the official documentation.

5. Summary

Thymeleaf is widely used with Spring Boot. This lesson mainly analyzed the advantages of Thymeleaf and how to integrate and use the Thymeleaf template in Spring Boot, including the dependency, configuration, retrieval of data, and some precautions. Finally, it listed some commonly used Thymeleaf attributes, which are best mastered by using them in real projects. There is no need to memorize every attribute or method; look them up when needed. The key is integrating Thymeleaf with Spring Boot and becoming fluent through use.

Course source code download address: click here to download

Lesson 08: Global exception handling in Spring Boot

In project development, whether in the underlying database operations, the business-layer processing, or the control layer, it is inevitable to encounter all kinds of predictable and unpredictable exceptions. If every step handles them separately, the system's code coupling becomes very high, the development workload increases, handling is not uniform, and maintenance cost rises.
Given this, we need to decouple exception handling from the individual processing steps. This keeps each step's responsibility single while allowing exception information to be handled and maintained in a unified way. At the same time, we do not want to throw raw exceptions at the user; we should handle the exception, encapsulate the error information, and return a friendly message. This lesson summarizes how to intercept and handle global exceptions in a Spring Boot project.

1. Define the unified json structure returned

When the front end or another service calls our interfaces, the interfaces need to return the corresponding JSON data. Usually only the required payload needs returning, but in a real project we encapsulate more information, such as a status code and a message msg. On one hand, this gives the project a unified return structure applicable to the whole team; on the other hand, it is convenient for combining with global exception handling, where we usually need to feed back a status code and the exception content to the caller.
This unified JSON structure can refer to Lesson 02: Spring Boot returns JSON data and data encapsulation. Here we simplify the encapsulated structure, keeping only the status code and the exception message msg. As follows:

public class JsonResult {
    /**
     * Exception code
     */
    protected String code;

    /**
     * Abnormal information
     */
    protected String msg;
	
    public JsonResult() {
        this.code = "200";
        this.msg = "Successful operation";
    }
    
    public JsonResult(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }
	// get set
}

2. Handle system exceptions

Create a new GlobalExceptionHandler global exception handling class, and add the @ControllerAdvice annotation to intercept the exceptions thrown in the project, as follows:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {
	// Print log
    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);
    // ......
}

Click into the @ControllerAdvice annotation and you can see that it contains the @Component annotation, which means that when Spring Boot starts, this class is also handed to Spring for management as a component. The annotation also has a basePackages property, used to restrict which package's exceptions are intercepted; generally we do not specify it, and intercept all exceptions in the project. The @ResponseBody annotation makes the handler output the encapsulated JSON data to the caller after the exception is handled.
How is this used in a project? Spring Boot makes it very simple: on a method, the @ExceptionHandler annotation specifies the concrete exception to handle, the method processes the exception information, and finally the result is returned to the caller through the unified JSON structure. Here are a few examples.

2.1 processing parameter missing exception

In a front/back-end separated architecture, the front end calls the back-end interfaces in REST style. Sometimes a POST request needs to carry parameters, but parameters can be missed; the same can happen in a microservice architecture when services call each other's interfaces. At this time, we need a method to handle the missing-parameter exception and return a friendly prompt to the front end or caller.

When a required parameter is missing, a MissingServletRequestParameterException is thrown. We can intercept this exception and handle it in a friendly way, as follows:

/**
* Missing request parameter exception
* @param ex MissingServletRequestParameterException
* @return
*/
@ExceptionHandler(MissingServletRequestParameterException.class)
@ResponseStatus(value = HttpStatus.BAD_REQUEST)
public JsonResult handleMissingServletRequestParameterException(
    MissingServletRequestParameterException ex) {
    logger.error("Missing request parameter: {}", ex.getMessage());
    return new JsonResult("400", "Missing required request parameters");
}

Let's write a simple Controller to test this exception; it receives two parameters through a POST request: name and pass.

@RestController
@RequestMapping("/exception")
public class ExceptionController {

    private static final Logger logger = LoggerFactory.getLogger(ExceptionController.class);

    @PostMapping("/test")
    public JsonResult test(@RequestParam("name") String name,
                           @RequestParam("pass") String pass) {
        logger.info("name: {}", name);
        logger.info("pass: {}", pass);
        return new JsonResult();
    }
}

Then call the interface with Postman, passing only name and not pass. The missing-parameter exception is thrown, caught, and routed into the logic we wrote, which returns a friendly message to the caller, as follows:

2.2 handling null pointer exceptions

Null pointer exceptions are common in development. Where do they typically occur?
First, in microservices we often call other services to obtain data, mostly in JSON format; in the process of parsing that JSON, a value may be null. So after obtaining a jsonObject and reading information from it, we should make a null check first.
Another very common place is querying data from the database. Whether a single record is encapsulated in an object or multiple records in a List, we go on to process that data, and a null pointer exception is possible, because nobody can guarantee that what comes back from the database is not null. So before using the data, always make a null check first.
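The defensive checks just described boil down to testing for null before use. Below is a plain-Java sketch: the Map-based "json" stand-in and the method name fieldOrDefault are illustrative, not the JSON library actually used in the course.

```java
import java.util.Map;

public class NullSafeDemo {

    /**
     * Read a field from a parsed JSON object (modelled here as a Map)
     * and fall back to a default instead of risking a NullPointerException.
     */
    public static String fieldOrDefault(Map<String, Object> json, String key, String fallback) {
        if (json == null) {           // the whole object may be null
            return fallback;
        }
        Object value = json.get(key); // the individual field may be null
        return value == null ? fallback : value.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> json = Map.of("name", "Ni Shengwu");
        System.out.println(fieldOrDefault(json, "name", "unknown")); // Ni Shengwu
        System.out.println(fieldOrDefault(json, "age", "unknown"));  // unknown
    }
}
```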
Handling the null pointer exception is very simple: like the logic above, just swap in the exception type and message. As follows:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    /**
     * Null pointer exception
     * @param ex NullPointerException
     * @return
     */
    @ExceptionHandler(NullPointerException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleTypeMismatchException(NullPointerException ex) {
        logger.error("Null pointer exception,{}", ex.getMessage());
        return new JsonResult("500", "Null pointer is abnormal");
    }
}

This will not be tested here. The ExceptionController in the source code has a testNullPointException method that simulates a null pointer exception; request the corresponding URL in the browser to see the returned information:

{"code":"500","msg":"Null pointer is abnormal"}

2.3 once and for all?

Of course, there are many more exceptions, such as RuntimeException, database query or operation exceptions, and so on. Since Exception is the parent class that all exceptions inherit from, we can directly intercept Exception and be done once and for all:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);
    /**
     * Unexpected system exception
     * @param ex
     * @return
     */
    @ExceptionHandler(Exception.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleUnexpectedServer(Exception ex) {
        logger.error("System exception:", ex);
        return new JsonResult("500", "System exception, please contact administrator");
    }
}

But in a project, we generally still intercept common exceptions in detail. Although intercepting Exception alone works, it is not conducive to troubleshooting or locating problems. In an actual project, write handlers for the concrete exceptions in GlobalExceptionHandler, and keep the Exception handler as the final fallback, so that anything not matched by a more specific handler still produces friendly output.

3. Intercept custom exceptions

In an actual project, besides intercepting system exceptions, we need to define custom business exceptions in some business scenarios. For example, in microservices, calls between services are commonplace; a call may fail or time out, and at that point we need a custom exception to throw when the call fails, to be caught by the GlobalExceptionHandler.

3.1 define exception information

Because there are many business exceptions and the prompt message may differ per business, to make the project's exception messages easy to manage, we generally define an enumeration class of exception messages. For example:

/**
 * Business exception prompt information enumeration class
 * @author shengwu ni
 */
public enum BusinessMsgEnum {
    /** Parameter exception */
    PARAMETER_EXCEPTION("102", "Parameter exception!"),
    /** Service call timeout */
    SERVICE_TIME_OUT("103", "Service call timeout!"),
    /** Parameter too large */
    PARAMETER_BIG_EXCEPTION("102", "The number of pictures entered cannot exceed 50!"),
    /** 500 : the once-and-for-all message can also be defined here */
    UNEXPECTED_EXCEPTION("500", "System exception, please contact administrator!");
    // You can also define more business exceptions

    /**
     * Message code
     */
    private String code;
    /**
     * Message content
     */
    private String msg;

    private BusinessMsgEnum(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }
    public String code() {
        return code;
    }

    public String msg() {
        return msg;
    }
}

3.2 Intercepting the custom exception

Then we can define a business exception. When a business exception occurs, we simply throw this custom exception. For example, we define a BusinessErrorException as follows:

/**
 * Custom business exception
 * @author shengwu ni
 */
public class BusinessErrorException extends RuntimeException {
    
    private static final long serialVersionUID = -7480022450501760611L;

    /**
     * Exception code
     */
    private String code;
    /**
     * Exception message
     */
    private String message;

    public BusinessErrorException(BusinessMsgEnum businessMsgEnum) {
        this.code = businessMsgEnum.code();
        this.message = businessMsgEnum.msg();
    }
    public String getCode() {
        return code;
    }

    @Override
    public String getMessage() {
        return message;
    }
}

The constructor takes the exception enumeration class defined above, so when a new exception message is needed in the project, we just add it to the enumeration class. This makes unified maintenance very convenient, and the handler simply reads the code and message when intercepting the exception.

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);
    /**
     * Intercept business exceptions and return business exception information
     * @param ex
     * @return
     */
    @ExceptionHandler(BusinessErrorException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleBusinessError(BusinessErrorException ex) {
        String code = ex.getCode();
        String message = ex.getMessage();
        return new JsonResult(code, message);
    }
}

In the business code, we can directly simulate throwing business exceptions and test:

@RestController
@RequestMapping("/exception")
public class ExceptionController {

    private static final Logger logger = LoggerFactory.getLogger(ExceptionController.class);

    @GetMapping("/business")
    public JsonResult testException() {
        try {
            int i = 1 / 0;
        } catch (Exception e) {
            throw new BusinessErrorException(BusinessMsgEnum.UNEXPECTED_EXCEPTION);
        }
        return new JsonResult();
    }
}

Run the project and test it. The returned json is as follows, indicating that our custom business exception was captured successfully:

{"code":"500","msg":"System exception, please contact administrator!"}

4. Summary

This section mainly introduced global exception handling in Spring Boot: encapsulating exception information, capturing and handling exceptions, and the custom exception enumeration classes and business exceptions used in real projects. These techniques are widely used; basically every project needs global exception handling.

Course source code download address: click here to download

Lesson 09: Aspect-oriented (AOP) processing in Spring Boot

1. What is AOP

AOP is short for Aspect Oriented Programming. Its goal is the separation of concerns. What is a concern? It is what you have to do. Suppose you are an idle young master with no life goal; you only know one thing each day: play (this is your concern, the only thing you need to do)! But there is a problem: before you go play, you still need to get up, dress, put on shoes, fold the quilt, make breakfast, and so on. You do not want to pay attention to these things, and you do not need to. You just want to play. So what do you do?

Yes! All these things are left to servants. You have a special servant A to help you dress, servant B to help you put on shoes, servant C to help you fold the quilt, servant D to cook. Then you eat and go play (this is your business for the day). When you come back after finishing your business, the series of servants starts helping you with this and that again, and the day is over!

This is AOP. The advantage of AOP is that you only do your own business and others help you with the rest. Maybe one day you want to run naked and not wear clothes; then you fire servant A! Maybe one day, before going out, you want to bring some money; then you hire another servant E to handle the money! This is AOP: everyone performs their own duty, and the parts combine flexibly, achieving a configurable and pluggable program structure.

2. AOP processing in Spring Boot

2.1 AOP dependency

To use AOP, we first need to introduce the dependency of AOP.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-aop</artifactId>
</dependency>

2.2 AOP implementation

Using AOP in Spring Boot is very simple. Suppose we want to print some logs in the project. After introducing the dependency above, we create a new class, LogAspectHandler, to define the aspect and its handler methods. Just annotate the class: @Aspect marks the class as an aspect class, and @Component lets Spring manage it.

@Aspect
@Component
public class LogAspectHandler {

}

Here are the common annotations and their uses:

1.@Pointcut: defines a pointcut, i.e. the entry to the concern described above.
2.@Before: runs before the target method.
3.@After: runs after the target method.
4.@AfterReturning: runs after the target method returns; can enhance the return value.
5.@AfterThrowing: runs when the target method throws an exception.

2.2.1 @Pointcut annotation

The @Pointcut annotation is used to define a pointcut, which is the entry to the concern mentioned above. The pointcut determines which join points are of interest, allowing us to control when advice executes.

@Aspect
@Component
public class LogAspectHandler {

    /**
     * Define a pointcut to intercept all methods under the com.itcodai.course09.controller package and its subpackages
     */
    @Pointcut("execution(* com.itcodai.course09.controller..*.*(..))")
    public void pointCut() {}
}

The @Pointcut annotation specifies the pointcut, defining what to intercept. There are two common expression forms: one uses execution(), the other uses @annotation().
Take the expression execution(* com.itcodai.course09.controller..*.*(..)) as an example. The syntax is as follows:

execution() is the body of the expression
The first * indicates the return value type; * means any type
The package name indicates the package to intercept; the two dots after it mean the current package and all of its subpackages, here the methods of all classes under the com.itcodai.course09.controller package and its subpackages
The second * indicates the class name; * means all classes
The final *(..): the * is the method name (* means all methods), the parentheses hold the method parameters, and the two dots mean any parameters
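A few more example expressions may make the syntax concrete. These are sketches: the UserController class referenced below is hypothetical, and the pointcut method names are assumptions:

```java
/** Only the methods of the UserController class itself (no subpackages) */
@Pointcut("execution(* com.itcodai.course09.controller.UserController.*(..))")
public void controllerOnly() {}

/** Methods whose names start with "get", in the controller package and its subpackages */
@Pointcut("execution(* com.itcodai.course09.controller..get*(..))")
public void getMethods() {}

/** Any public method taking a single String parameter, in any package */
@Pointcut("execution(public * *(String))")
public void singleStringArg() {}
```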

@annotation() is used to define a pointcut for a given annotation. For example, we can define a pointcut for methods annotated with @GetMapping as follows:

@Pointcut("@annotation(org.springframework.web.bind.annotation.GetMapping)")
public void annotationCut() {}

With this pointcut, we cut into every method annotated with @GetMapping. In a real project there may be different logic for different annotations, such as @GetMapping, @PostMapping, @DeleteMapping and so on, so cutting in by annotation is also very common in practice.
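For instance, a minimal sketch of advice bound to the annotationCut() pointcut above; the beforeGetMapping method name is assumed, and every @GetMapping handler method would be intercepted:

```java
@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Pointcut("@annotation(org.springframework.web.bind.annotation.GetMapping)")
    public void annotationCut() {}

    /**
     * Runs before every method annotated with @GetMapping
     */
    @Before("annotationCut()")
    public void beforeGetMapping(JoinPoint joinPoint) {
        logger.info("A @GetMapping method is about to run: {}", joinPoint.getSignature().getName());
    }
}
```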

2.2.2 @Before annotation

The method annotated with @Before executes before the target method. It can do some log processing or collect statistics, such as the user's request url and ip address, which is handy when building a personal site. For example:

@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Executed before the methods matched by the pointcut defined above
     * @param joinPoint joinPoint
     */
    @Before("pointCut()")
    public void doBefore(JoinPoint joinPoint) {
        logger.info("==== entering doBefore method ====");

        // Get the signature
        Signature signature = joinPoint.getSignature();
        // Get the name of the type being cut into
        String declaringTypeName = signature.getDeclaringTypeName();
        // Get the name of the method about to execute
        String funcName = signature.getName();
        logger.info("The method to be executed is: {}, belonging to the {} package", funcName, declaringTypeName);

        // We can also record information such as the url and ip of the request
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletRequest request = attributes.getRequest();
        // Get the request url
        String url = request.getRequestURL().toString();
        // Get the request ip
        String ip = request.getRemoteAddr();
        logger.info("The requested url is: {}, the ip address is: {}", url, ip);
    }
}

The JoinPoint object is very useful. From it you can obtain the signature, and from the signature the package name and method name of the request; the arguments can be obtained through joinPoint.getArgs().
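For example, a minimal sketch of logging the arguments with getArgs(); this fragment assumes it is placed inside the doBefore advice above, where joinPoint and logger are in scope:

```java
// Inside the doBefore advice: log each argument passed to the intercepted method
Object[] args = joinPoint.getArgs();
for (int i = 0; i < args.length; i++) {
    logger.info("Argument {}: {}", i, args[i]);
}
```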

2.2.3 @After annotation

The @After annotation corresponds to @Before: the specified method executes after the target method completes. It can also do some log processing after a method finishes.

@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Define a pointcut to intercept all methods under the com.itcodai.course09.controller package
     */
    @Pointcut("execution(* com.itcodai.course09.controller..*.*(..))")
    public void pointCut() {}

    /**
     * Executed after the methods matched by the pointcut defined above
     * @param joinPoint joinPoint
     */
    @After("pointCut()")
    public void doAfter(JoinPoint joinPoint) {

        logger.info("==== entering doAfter method ====");
        Signature signature = joinPoint.getSignature();
        String method = signature.getName();
        logger.info("Method {} has finished executing", method);
    }
}

Here, let's write a Controller to test the execution results. Create a new AopController as follows:

@RestController
@RequestMapping("/aop")
public class AopController {

    @GetMapping("/{name}")
    public String testAop(@PathVariable String name) {
        return "Hello " + name;
    }
}

Start the project, enter localhost:8080/aop/CSDN in the browser, and observe the console output:

==== entering doBefore method ====  
The method to be executed is: testAop, belonging to the com.itcodai.course09.controller.AopController package  
The requested url is: http://localhost:8080/aop/CSDN, the ip address is: 0:0:0:0:0:0:0:1  
==== entering doAfter method ====  
Method testAop has finished executing

From the printed log, you can see the logic and order of program execution, and you can intuitively grasp the practical functions of @ Before and @ After annotations.

2.2.4 @AfterReturning annotation

The @AfterReturning annotation is similar to @After. The difference is that @AfterReturning can capture the return value after the target method executes and enhance it with business logic. For example:

@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Executed after the pointcut method returns; the returned object can be captured or enhanced
     * @param joinPoint joinPoint
     * @param result result
     */
    @AfterReturning(pointcut = "pointCut()", returning = "result")
    public void doAfterReturning(JoinPoint joinPoint, Object result) {

        Signature signature = joinPoint.getSignature();
        String classMethod = signature.getName();
        logger.info("Method {} has finished executing; the return value is: {}", classMethod, result);
        // In actual projects, enhance the return value according to the business
        logger.info("Business enhancement of the return value: {}", result + " enhanced");
    }
}

Note: in the @AfterReturning annotation, the value of the returning property must match the parameter name, otherwise the return value cannot be obtained. The second parameter of this method is the return value of the target method. In doAfterReturning, the return value can be enhanced and wrapped as the business requires. Restart the service and test again (redundant logs omitted):

Method testAop has finished executing; the return value is: Hello CSDN  
Business enhancement of the return value: Hello CSDN enhanced

2.2.5 @AfterThrowing annotation

As the name implies, the @AfterThrowing annotation means: when the target method throws an exception during execution, the @AfterThrowing method runs, and exception handling logic can go there. Note that the value of the throwing property must match the parameter name, otherwise an error is reported. The second parameter of this method is the thrown exception.

/**
 * Using AOP to process log
 * @author shengwu ni
 * @date 2018/05/04 20:24
 */
@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Executed when a method matched by the pointcut defined above throws an exception
     * @param joinPoint joinPoint
     * @param ex ex
     */
    @AfterThrowing(pointcut = "pointCut()", throwing = "ex")
    public void afterThrowing(JoinPoint joinPoint, Throwable ex) {
        Signature signature = joinPoint.getSignature();
        String method = signature.getName();
        // Exception handling logic
        logger.error("An exception occurred while executing method {}: {}", method, ex);
    }
}

I will not test this method here; you can try it yourself.

3. Summary

This lesson gave a detailed explanation of AOP in Spring Boot: introducing the dependency, the common annotations, their parameters, and the common APIs. AOP is very useful in real projects. It can preprocess or enhance an aspect method before and after execution according to the specific business, and it can also be used for exception capture and handling. Apply it as your business scenario requires.

Course source code download address: click here to download

Lesson 10: Spring Boot integration with MyBatis

1. Introduction to MyBatis

As you may know, MyBatis is a persistence layer framework and a top-level Apache project. MyBatis lets developers focus on sql: through the mapping mechanisms it provides, you can freely and flexibly generate the sql statements you need. It uses simple XML or annotations for configuration and mapping, mapping interfaces and Java POJOs to database records. There are two main ways for Spring Boot to integrate MyBatis; this lesson focuses on the annotation based approach, because in real projects annotations are somewhat simpler and save a lot of XML configuration (this is not absolute; some project teams still use XML).

2. Configuration of MyBatis

2.1 dependency import

Spring Boot integration with MyBatis requires importing the mybatis-spring-boot-starter and mysql dependencies. Here we use version 1.3.2, as follows:

<dependency>
	<groupId>org.mybatis.spring.boot</groupId>
	<artifactId>mybatis-spring-boot-starter</artifactId>
	<version>1.3.2</version>
</dependency>
<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<scope>runtime</scope>
</dependency>

If we click into the mybatis-spring-boot-starter dependency, we can see the familiar dependencies we used with Spring before. As introduced at the beginning of the course, Spring Boot is committed to simplifying coding and bundles related dependencies into starter series, so developers do not have to deal with tedious configuration, which is very convenient.

<!-- other dependencies omitted -->
<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
</dependency>
<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis-spring</artifactId>
</dependency>

2.2 application.yml configuration

Let's take a look at the basic configuration needed in the application.yml configuration file when integrating MyBatis:

# Service port number
server:
  port: 8080

# Database address
datasource:
  url: localhost:3306/blog_test

spring:
  datasource: # Database configuration
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://${datasource.url}?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true&autoReconnect=true&failOverReadOnly=false&maxReconnects=10
    username: root
    password: 123456
    hikari:
      maximum-pool-size: 10 # Maximum number of connection pools
      max-lifetime: 1770000

mybatis:
  # Specify the package of alias settings as all entities
  type-aliases-package: com.itcodai.course10.entity
  configuration:
    map-underscore-to-camel-case: true # map snake_case columns to camelCase properties
  mapper-locations: # mapper map file location
    - classpath:mapper/*.xml

A brief word on the configuration above: I will not explain the database settings (user name, password, connection url, etc.) in detail, since you are surely familiar with them. The connection pool used here is HikariCP, the default pool shipped with Spring Boot 2.x; interested readers can look it up.

A note on map-underscore-to-camel-case: true, which turns on the camel case mapping convention. It is convenient: if a column in the database is named user_name, the property in the entity class can be defined as userName (even writing it as username also maps), and it will be matched to the camel case property automatically. Without this configuration, differently named columns and properties will not be mapped.
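The rule itself is easy to illustrate. The snippet below is not MyBatis internals, just a plain-Java sketch of the snake_case to camelCase conversion that map-underscore-to-camel-case applies to column names (the class and method names are assumptions for the demo):

```java
public class CamelCaseDemo {

    /** Convert a snake_case column name to a camelCase property name */
    public static String toCamelCase(String column) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : column.toCharArray()) {
            if (c == '_') {
                upperNext = true;          // skip the underscore, capitalize the next letter
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCamelCase("user_name"));   // userName
        System.out.println(toCamelCase("create_time")); // createTime
    }
}
```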

3. Integration based on xml

Using the original XML method, we need to create a new UserMapper.xml file. In the application.yml configuration file above, we have defined the path of the XML file: classpath:mapper/*.xml, so we create a new mapper folder under the resources directory, and then create a UserMapper.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.itcodai.course10.dao.UserMapper">
  <resultMap id="BaseResultMap" type="com.itcodai.course10.entity.User">

    <id column="id" jdbcType="BIGINT" property="id" />
    <result column="user_name" jdbcType="VARCHAR" property="username" />
    <result column="password" jdbcType="VARCHAR" property="password" />
  </resultMap>
  
   <select id="getUserByName" resultType="User" parameterType="String">
       select * from user where user_name = #{username}
  </select>
</mapper>

This is the same as with plain Spring integration: the corresponding Mapper interface is specified in namespace, and the corresponding entity class, User, in <resultMap>. The table columns and entity properties are then mapped. Here we write a sql to query a User by user name.

The entity class has id, username and password properties. I will not post the code here; you can download the source code to check. Write an interface in the UserMapper.java file:

User getUserByName(String username);

Omitting the service code in the middle, write a Controller to test:

@RestController
public class TestController {

    @Resource
    private UserService userService;
    
    @RequestMapping("/getUserByName/{name}")
    public User getUserByName(@PathVariable String name) {
        return userService.getUserByName(name);
    }
}
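For reference, the omitted service layer might look roughly like this. This is a hypothetical sketch: the UserService interface and UserServiceImpl names are assumed to follow the project's conventions:

```java
public interface UserService {
    User getUserByName(String username);
}

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    public User getUserByName(String username) {
        // Delegate to the mapper defined in UserMapper.xml
        return userMapper.getUserByName(username);
    }
}
```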

Start the project and enter http://localhost:8080/getUserByName/CSDN in the browser to query the user named CSDN from the database table (insert a couple of rows in advance):

{"id":2,"username":"CSDN","password":"123456"}

Note: how does Spring Boot find the mappers? One way is to add the @Mapper annotation on each mapper interface, but this has a disadvantage: when there are many mappers, every one of them needs the annotation. The easier way is to add the @MapperScan annotation on the Spring Boot startup class to scan all mappers under a package, as follows:

@SpringBootApplication
@MapperScan("com.itcodai.course10.dao")
public class Course10Application {

	public static void main(String[] args) {
		SpringApplication.run(Course10Application.class, args);
	}
}

In this way, all mappers under the com.itcodai.course10.dao package will be scanned.

4. Annotation based integration

Annotation based integration needs no xml configuration file. MyBatis mainly provides four annotations: @Select, @Insert, @Update and @Delete. They are very common and simple: just put the corresponding sql statement in the annotation. An example:

@Select("select * from user where id = #{id}")
User getUser(Long id);

This is the same as writing the sql statement in an xml file, so the xml file is no longer needed. But someone may ask: what if there are two parameters? Then we use the @Param annotation to specify the mapping of each parameter, as follows:

@Select("select * from user where id = #{id} and user_name=#{name}")
User getUserByIdAndName(@Param("id") Long id, @Param("name") String username);

As you can see, @Param must specify the same parameter name as the #{} placeholder in the sql, otherwise the value cannot be obtained. You can test it yourself in the controller; the interfaces are all in the source code, so I will not paste the test code and results here.

One more issue deserves attention. Usually, after designing the table fields, we generate the entity classes with a code generation tool, so the entity properties basically correspond to the table columns, at least up to camel case. Since camel case mapping is enabled in the configuration above, all the fields can be matched. But what if something does not correspond? There is a solution for that too: the @Results annotation.

@Select("select * from user where id = #{id}")
@Results({
        @Result(property = "username", column = "user_name"),
        @Result(property = "password", column = "password")
})
User getUser(Long id);

The @Result annotation inside @Results specifies the mapping between each property and column, which solves the problem above.

Of course, xml and annotations can also be used together; our current projects do exactly that, because sometimes xml is more convenient and sometimes annotations are. For example, since we defined UserMapper.xml above, we can replace the @Results annotation with @ResultMap, as follows:

@Select("select * from user where id = #{id}")
@ResultMap("BaseResultMap")
User getUser(Long id);

Where does the value in the @ResultMap annotation come from? It corresponds to the id of the <resultMap> defined in the UserMapper.xml file:

<resultMap id="BaseResultMap" type="com.itcodai.course10.entity.User">

Mixing xml and annotations this way is common and reduces a lot of code, because the xml files can be produced by generation tools rather than typed by hand.

5. Summary

This lesson systematically explained the process of integrating MyBatis with Spring Boot, in both xml based and annotation based forms, walked through the actual configuration hands-on, and covered the common pitfalls of the annotation approach, which has strong practical value. In real projects, choose the approach that fits the situation; usually xml and annotations are mixed.

Course source code download address: click here to download

Lesson 11: Spring Boot transaction configuration management

1. About transactions

Scenario: when developing enterprise applications, because data operations execute in sequence, all kinds of unpredictable problems may occur in production; any step may throw an exception, which prevents the subsequent operations from completing. At that point, because the business logic did not finish correctly, the earlier database operations are no longer reliable, and the data needs to be rolled back.

The purpose of a transaction is to ensure that every user operation is reliable: every step inside the transaction must succeed, and if any exception occurs, everything rolls back to the state before the transaction started. Transfers and ticket purchases are easy examples: the event only counts as successful when the whole process completes. You cannot transfer half the money, have the system crash, and end up with the sender's money gone while the receiver's money has not arrived.

Transaction management is one of the most commonly used features of the Spring Boot framework. In real application development we basically add a transaction whenever the service layer handles business logic. Of course, some scenarios do not need one: for example, when inserting rows into a table where the rows do not affect each other, we should not roll back all the previously inserted rows just because a later insert failed.

2. Spring Boot transaction configuration

2.1 dependency import

To use transactions in Spring Boot, no extra dependency is needed beyond the mybatis-spring-boot-starter imported in the previous lesson (it brings in spring-boot-starter-jdbc):

<dependency>
	<groupId>org.mybatis.spring.boot</groupId>
	<artifactId>mybatis-spring-boot-starter</artifactId>
	<version>1.3.2</version>
</dependency>

With this dependency in place, Spring Boot automatically configures a DataSourceTransactionManager, and we can use the @Transactional annotation for transactions without any other configuration. The MyBatis configuration was explained in the previous lesson and is reused here.

2.2 transaction testing

First, we insert a piece of data into the database table:

id | user_name   | password
1  | Ni Sheng Wu | 123456

Then we write an inserted mapper:

public interface UserMapper {

    @Insert("insert into user (user_name, password) values (#{username}, #{password})")
    Integer insertUser(User user);
}

OK, let's test transaction handling in Spring Boot. In the service layer we manually throw an exception to simulate a real one, then observe whether the transaction rolls back. If there is no new record in the database, the rollback succeeded.

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional
    public void isertUser(User user) {
        // Insert user information
        userMapper.insertUser(user);
        // Throw an exception manually
        throw new RuntimeException();
    }
}

Let's test it:

@RestController
public class TestController {

    @Resource
    private UserService userService;

    @PostMapping("/adduser")
    public String addUser(@RequestBody User user) throws Exception {
        if (null != user) {
            userService.isertUser(user);
            return "success";
        } else {
            return "false";
        }
    }
}

We call the interface with postman. Because an exception is thrown in the program, the transaction rolls back; refreshing the database shows no new record, which means the transaction took effect. Transactions are simple to use and rarely cause problems in the straightforward case, but that is not the whole story.

3. Summary of common problems

As seen above, using transactions in Spring Boot is very simple: the @Transactional annotation seems to solve everything. But in real projects there are many small pitfalls waiting for us that we do not notice while writing the code, and that are hard to spot under normal circumstances. Once the project grows and a problem suddenly appears one day, it can be very difficult to track down and will cost a lot of effort.

In this section, I will summarize the details related to the transaction that often appear in the actual project. I hope the readers can implement it in their own project after reading it and benefit from it.

3.1 The exception is not "caught", so the transaction does not roll back

The first case: the exception is not "caught" by the transaction machinery, so no rollback happens. In the business layer code we may have considered exceptions, or the IDE prompted us to declare them, but note one thing: it is not true that throwing any exception causes the transaction to roll back. An example:

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;
    
    @Override
    @Transactional
    public void isertUser2(User user) throws Exception {
        // Insert user information
        userMapper.insertUser(user);
        // Throw an exception manually
        throw new SQLException("Database exception");
    }
}

The code above looks fine: we manually throw a SQLException to simulate an exception in an actual database operation. Since the method throws an exception, the transaction should roll back, but in reality it does not. If you test the controller interface in my source code with postman, you will find that a user record is still inserted.

So where is the problem? Spring's default transaction rule only rolls back on runtime exceptions (RuntimeException) and program Errors. The RuntimeException thrown in the earlier example rolls back fine, but the SQLException thrown here does not, because it is a checked exception. To roll back on non-runtime exceptions, use the rollbackFor attribute of the @Transactional annotation to specify the exception type, e.g. @Transactional(rollbackFor = Exception.class). With that there is no problem, so in real projects always specify rollbackFor.

3.2 The exception is "eaten"

The title sounds funny: how can an exception be "eaten"? Back to real projects. When handling exceptions we have two choices: throw them up for a higher layer to catch, or try-catch and handle them where they occur. It is exactly this try-catch that causes the exception to be "eaten", so the transaction cannot roll back. Take the example above with a small modification:

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional(rollbackFor = Exception.class)
    public void isertUser3(User user) {
        try {
            // Insert user information
            userMapper.insertUser(user);
            // Throw an exception manually
            throw new SQLException("Database exception");
        } catch (Exception e) {
            // Exception handling logic
        }
    }
}

Test again with the controller interface in my source code via Postman, and you will find that a user record is still inserted: the transaction was not rolled back even though an exception was thrown. This detail is harder to spot than the previous pitfall, because our habits make it very easy to write this kind of try-catch code, and once the problem appears it is hard to track down. So when writing code, think twice, pay attention to such details, and try not to dig holes for yourself.

How to solve this problem? Simply throw the exception up and let a higher layer handle it; don't "eat" the exception yourself inside the transaction.
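A common pattern is to do the local handling (logging, wrapping) and then rethrow the checked exception as an unchecked one, so the default rollback rule still fires. A minimal sketch, with invented names, not taken from the course source:

```java
import java.sql.SQLException;

public class RethrowDemo {

    // Handle locally, but rethrow as unchecked so the transaction
    // proxy still sees an exception and can roll back.
    public static void insertUser() {
        try {
            throw new SQLException("Database exception"); // simulate a DB failure
        } catch (SQLException e) {
            // local handling (e.g. logging) goes here ...
            throw new RuntimeException("insert failed", e); // rethrow, do not swallow
        }
    }

    public static void main(String[] args) {
        try {
            insertUser();
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getCause().getMessage());
        }
    }
}
```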

3.3 The scope of the transaction

This pitfall is deeper than the two above! I write about it because I ran into it in a real project. I won't simulate the full scenario in this course; I'll just write a demo for you. Keep this pitfall in mind, and when you write code that involves concurrency you will remember it — then this lesson has been worth it.

Let me write a demo:

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional(rollbackFor = Exception.class)
    public synchronized void isertUser4(User user) {
        // Specific business in practice
        userMapper.insertUser(user);
    }
}

As you can see, to handle concurrency I added the synchronized keyword to the business-layer method. For example, suppose the database should hold only one record per user. Each insert action first checks whether the same user already exists in the database; if it does, the record is updated instead of inserted. In theory, then, the database should always contain exactly one record for a given user, and the same user should never be inserted twice.

But during stress testing the problem did appear: there were indeed two records for the same user in the database. The cause lies in the scope of the transaction versus the scope of the lock.

As the method above shows, the transaction is applied to the method: it starts when the method begins executing and is committed after the method returns. But synchronized does not help here, because the transaction scope is larger than the lock scope. The lock is released as soon as the synchronized method finishes, yet the transaction has not been committed at that point. A second thread can then enter, and since the first thread's transaction is still uncommitted, the second thread sees the same database state as the first one did. With MySQL's InnoDB engine the default isolation level is REPEATABLE READ (within one transaction, SELECT returns the state as of the start of the transaction), so when the second thread's transaction starts, the first thread's transaction has not been committed yet, the data it reads is stale, and it also performs the insert, producing duplicate data.

There are ways to avoid this problem: first, remove the transaction (not recommended); second, move the lock to where the service method is called, so that the scope of the lock is larger than the scope of the transaction.
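The race can be replayed by hand, without Spring or real threads, by executing the problematic interleaving step by step. This is a deliberately simplified model under the assumption stated above — that the transaction commits only after the synchronized method has returned, i.e. after the lock is released; all names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Models the pitfall: lock scope (the synchronized method) is smaller
// than transaction scope (commit happens after the method returns).
public class LockScopeDemo {

    static class Service {
        // Existence check + staged write, protected by the lock.
        // Returns the "uncommitted" rows; the commit happens outside, later.
        synchronized List<String> insertIfAbsent(String user, List<String> committedDb) {
            List<String> staged = new ArrayList<>();
            if (!committedDb.contains(user)) {
                staged.add(user);
            }
            return staged;
        }
    }

    public static void main(String[] args) {
        Service service = new Service();

        // Broken interleaving: thread A runs the locked method and releases
        // the lock, but has NOT committed when thread B runs its check.
        List<String> db = new ArrayList<>();
        List<String> aWrites = service.insertIfAbsent("alice", db); // A stages an insert
        List<String> bWrites = service.insertIfAbsent("alice", db); // B still sees no "alice"
        db.addAll(aWrites); // A commits
        db.addAll(bWrites); // B commits -> duplicate row
        System.out.println("lock inside tx:  " + db); // [alice, alice]

        // Fix: lock at the caller, covering both the check AND the commit.
        List<String> db2 = new ArrayList<>();
        Object callerLock = new Object();
        synchronized (callerLock) { db2.addAll(service.insertIfAbsent("alice", db2)); }
        synchronized (callerLock) { db2.addAll(service.insertIfAbsent("alice", db2)); }
        System.out.println("lock outside tx: " + db2); // [alice]
    }
}
```

The second half is exactly the recommended fix: the caller holds the lock across the whole transactional call, so the next caller always sees committed state.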

4. summary

This chapter mainly summarized how to use transactions in Spring Boot: with the @Transactional annotation it is simple and convenient. It then focused on three pitfalls that can appear in real projects. Transactions that work quietly are easy; transactions that fail are hard to troubleshoot, so I hope the three points summarized here will help you in development.

Course source code download address: click here to download

Lesson 12: using listeners in Spring Boot

1. Listener introduction

What is a web listener? A web listener is a special kind of class in the Servlet specification that helps developers monitor specific events in a web application, such as the creation and destruction of ServletContext, HttpSession and ServletRequest objects, and the creation, destruction and modification of their attributes. Handlers can run before and after these events to implement monitoring.

2. Use of listeners in spring boot

There are many usage scenarios for web listeners, such as listening to the servlet context to initialize data, listening to the HTTP session to count the number of users online, and listening to the servlet request object sent by the client to collect user access information. In this section we learn how to use listeners in Spring Boot through these three practical scenarios.

2.1 listening to Servlet context object

Listening to the servlet context object can be used to initialize cached data. What does that mean? Take a very common scenario: when users open the homepage of a site, it generally shows some homepage information that rarely, if ever, changes — yet that information comes from the database. If every click required a database query, a small number of users would be acceptable, but with a very large number of users the load on the database becomes huge.

For this kind of homepage data, if most of it is not updated frequently, we can cache it. On every user click we read directly from the cache, which both speeds up homepage access and reduces server load. To be more flexible, add a timer that refreshes the homepage cache periodically, similar to how the ranking on a CSDN personal blog homepage updates.

Next, we will write a demo for this function. In practice, readers can fully apply this code to implement the relevant logic in their own projects. First, write a Service to simulate querying data from the database:

@Service
public class UserService {

    /**
     * Get user information
     * @return
     */
    public User getUser() {
        // In practice, the corresponding information will be queried from the database according to the specific business scenario
        return new User(1L, "Ni Sheng Wu", "123456");
    }
}

Then write a listener that implements the ApplicationListener<ContextRefreshedEvent> interface and overrides the onApplicationEvent method, which receives a ContextRefreshedEvent object. If we want to load preloaded resources when the application context is loaded or refreshed, we can do so by listening for ContextRefreshedEvent. As follows:

/**
 * Use ApplicationListener to initialize some data to the listener in the application domain
 * @author shengni ni
 * @date 2018/07/05
 */
@Component
public class MyServletContextListener implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {
        // Get the application context first
        ApplicationContext applicationContext = contextRefreshedEvent.getApplicationContext();
        // Get the corresponding service
        UserService userService = applicationContext.getBean(UserService.class);
        User user = userService.getUser();
        // Get the application domain object, and put the found information into the application domain
        ServletContext application = applicationContext.getBean(ServletContext.class);
        application.setAttribute("user", user);
    }
}

As described in the comments, we first get the application context from contextRefreshedEvent, then get the UserService bean from it. In your project you can fetch other beans according to the actual business scenario, call your own business code to obtain the data, and finally store it in the application scope. When a client requests the data, we can read it directly from the application scope instead of hitting the database. Next, write a Controller that reads the user information directly from the application scope to test it.

@RestController
@RequestMapping("/listener")
public class TestController {

    @GetMapping("/user")
    public User getUser(HttpServletRequest request) {
        ServletContext application = request.getServletContext();
        return (User) application.getAttribute("user");
    }
}

Start the project and visit http://localhost:8080/listener/user in the browser. If the user information is returned normally, the data has been cached successfully. Note, however, that the application scope lives in memory, which has a cost; I will cover Redis in a later lesson and then introduce Redis caching.

2.2 listening to HTTP Session object

Listeners are also commonly used to monitor the session object to obtain the number of online users. Many developers run their own websites, and counting current users by monitoring sessions is a very common scenario. Here's how to use it.

/**
 * A listener that uses HttpSessionListener to count the number of online users
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyHttpSessionListener implements HttpSessionListener {

    private static final Logger logger = LoggerFactory.getLogger(MyHttpSessionListener.class);

    /**
     * Record the number of users online
     */
    public Integer count = 0;

    @Override
    public synchronized void sessionCreated(HttpSessionEvent httpSessionEvent) {
        logger.info("New users are online");
        count++;
        httpSessionEvent.getSession().getServletContext().setAttribute("count", count);
    }

    @Override
    public synchronized void sessionDestroyed(HttpSessionEvent httpSessionEvent) {
        logger.info("The user is offline");
        count--;
        httpSessionEvent.getSession().getServletContext().setAttribute("count", count);
    }
}

As you can see, the listener implements the HttpSessionListener interface and overrides the sessionCreated and sessionDestroyed methods. The sessionCreated method receives an HttpSessionEvent object, and in it we add 1 to the number of users; sessionDestroyed is just the opposite and won't be repeated. Then we write a Controller to test it.

@RestController
@RequestMapping("/listener")
public class TestController {

    /**
     * Get the current number of people online. There is a bug in this method
     * @param request
     * @return
     */
    @GetMapping("/total")
    public String getTotalUser(HttpServletRequest request) {
        Integer count = (Integer) request.getSession().getServletContext().getAttribute("count");
        return "Current online population:" + count;
    }
}

In the Controller we directly read the number of users in the current session. Start the server and visit localhost:8080/listener/total in a browser: the result is 1. Open a second browser and request the same address: the count is 2, which is correct. But if you close one browser and open it again, the count should in theory still be 2, yet the actual test shows 3. The reason is that the session destruction method was never triggered (observe the log output in the console): when you reopen the browser, the server cannot find the user's original session, so it creates a new one. How can we solve this? We can modify the Controller method as follows:

@GetMapping("/total2")
public String getTotalUser(HttpServletRequest request, HttpServletResponse response) {
    Cookie cookie;
    try {
        // Record the sessionId in the browser
        cookie = new Cookie("JSESSIONID", URLEncoder.encode(request.getSession().getId(), "utf-8"));
        cookie.setPath("/");
        // Set the cookie validity period to 2 days; set it a little longer
        cookie.setMaxAge(48 * 60 * 60);
        response.addCookie(cookie);
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
    Integer count = (Integer) request.getSession().getServletContext().getAttribute("count");
    return "Current online population:" + count;
}

As you can see, the logic makes the server remember the original session: the original sessionId is recorded in a browser cookie and sent along the next time the browser opens, so the server does not create a new session. Restart the server and test again in the browser; the problem above no longer occurs.
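As an aside on the counter itself: instead of a plain Integer field guarded by synchronized methods, a java.util.concurrent.atomic.AtomicInteger makes the count thread-safe on its own. A minimal sketch — the class and method names are invented, and in the real listener you would call these from sessionCreated/sessionDestroyed:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Thread-safe online-user counter without synchronized methods.
public class OnlineCounter {

    private final AtomicInteger count = new AtomicInteger(0);

    // would be called from sessionCreated
    public int userOnline() {
        return count.incrementAndGet();
    }

    // would be called from sessionDestroyed
    public int userOffline() {
        return count.decrementAndGet();
    }

    public int current() {
        return count.get();
    }

    public static void main(String[] args) {
        OnlineCounter counter = new OnlineCounter();
        counter.userOnline();
        counter.userOnline();
        counter.userOffline();
        System.out.println(counter.current()); // 1
    }
}
```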

2.3 listening to the client's ServletRequest object

Using a listener to obtain the user's access information is relatively simple: implement the ServletRequestListener interface, then read information from the request object. As follows:

/**
 * Using ServletRequestListener to get access information
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyServletRequestListener implements ServletRequestListener {

    private static final Logger logger = LoggerFactory.getLogger(MyServletRequestListener.class);

    @Override
    public void requestInitialized(ServletRequestEvent servletRequestEvent) {
        HttpServletRequest request = (HttpServletRequest) servletRequestEvent.getServletRequest();
        logger.info("session id: {}", request.getRequestedSessionId());
        logger.info("request url: {}", request.getRequestURL());

        request.setAttribute("name", "Ni Sheng Wu");
    }

    @Override
    public void requestDestroyed(ServletRequestEvent servletRequestEvent) {

        logger.info("request end");
        HttpServletRequest request = (HttpServletRequest) servletRequestEvent.getServletRequest();
        logger.info("The value of name saved in the request scope: {}", request.getAttribute("name"));

    }

}

This is relatively simple, and I won't repeat it. Next, write a Controller test.

@GetMapping("/request")
public String getRequestInfo(HttpServletRequest request) {
    System.out.println("name attribute initialized in requestListener: " + request.getAttribute("name"));
    return "success";
}

3. Custom event listening in spring boot

In real projects we often need custom events and listeners to meet business needs. For example, in a microservice architecture, after microservice A finishes some logic it needs to notify microservice B to run another piece of logic, or to synchronize data to microservice B. This scenario is very common: we can define a custom event and a listener for it, and whenever the event occurs in microservice A, notify microservice B to handle the corresponding logic.

3.1 custom events

A custom event needs to extend the ApplicationEvent class. We define a User field in the event to simulate the data and initialize it through the constructor. As follows:

/**
 * Custom events
 * @author shengwu ni
 * @date 2018/07/05
 */
public class MyEvent extends ApplicationEvent {

    private User user;

    public MyEvent(Object source, User user) {
        super(source);
        this.user = user;
    }

    // Omit get and set methods
}

3.2 custom listener

Next, define a custom listener for the MyEvent defined above. A custom listener needs to implement the ApplicationListener interface. As follows:

/**
 * Custom listener, listening for MyEvent events
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyEventListener implements ApplicationListener<MyEvent> {
    @Override
    public void onApplicationEvent(MyEvent myEvent) {
        // Get the information carried by the event
        User user = myEvent.getUser();
        // Handling events, notifying other microservices or handling other logic in the actual project, etc
        System.out.println("User name:" + user.getUsername());
        System.out.println("Password:" + user.getPassword());

    }
}

Then override the onApplicationEvent method, which receives the custom MyEvent. Because the event carries the User object (in practice this would be the data to be processed; we simulate it below), we can use that object's information here.

OK, with the event and listener defined, we need to publish the event manually so the listener can react. When to publish depends on the actual business scenario; for this article's example, I write the trigger logic as follows:

/**
 * UserService
 * @author shengwu ni
 */
@Service
public class UserService {

    @Resource
    private ApplicationContext applicationContext;

    /**
     * Publish the event
     * @return
     */
    public User getUser2() {
        User user = new User(1L, "Ni Sheng Wu", "123456");
        // Publish the event
        MyEvent event = new MyEvent(this, user);
        applicationContext.publishEvent(event);
        return user;
    }
}

Inject the ApplicationContext into the service; after the business code finishes, publish the MyEvent manually through the ApplicationContext object so that our custom listener picks it up and runs the business logic written in the listener.

Finally, write an interface in the Controller to test:

@GetMapping("/publish")
public User publishEvent() {
    // userService is the UserService defined above, injected into the Controller;
    // getUser2() publishes the MyEvent
    return userService.getUser2();
}

Visit http://localhost:8080/listener/publish in the browser, then observe the user name and password printed on the console: the custom listener has taken effect.
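The publish/listen flow can be modeled without Spring in a few lines, to make the mechanics concrete. This is a deliberately simplified sketch; MiniEventBus is an invented name, not a Spring class:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy model of publishEvent -> onApplicationEvent dispatch.
public class MiniEventBus {

    private final List<Consumer<Object>> listeners = new ArrayList<>();

    // Register a listener for one event type (like ApplicationListener<E>)
    public <E> void addListener(Class<E> eventType, Consumer<E> listener) {
        listeners.add(event -> {
            if (eventType.isInstance(event)) {
                listener.accept(eventType.cast(event));
            }
        });
    }

    // Deliver the event to every listener interested in its type
    public void publishEvent(Object event) {
        for (Consumer<Object> listener : listeners) {
            listener.accept(event);
        }
    }

    public static void main(String[] args) {
        MiniEventBus bus = new MiniEventBus();
        bus.addListener(String.class, e -> System.out.println("got event: " + e));
        bus.publishEvent("user created"); // delivered to the String listener
        bus.publishEvent(42);             // ignored: no Integer listener registered
    }
}
```

Spring's real event infrastructure adds type resolution, ordering and async delivery on top, but the core idea — publish an object, dispatch it to listeners registered for that type — is the same.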

4. summary

This lesson systematically introduced how listeners work and how to use them in Spring Boot, illustrating three common listener scenarios of real practical value. Finally it explained how to define custom events and listeners in a project, with concrete code modeled on common microservice scenarios that can be applied to real projects. I hope readers digest it carefully.

Course source code download address: click here to download

Lesson 13: using interceptors in Spring Boot

The principle of interceptors is simple: they are an application of AOP. They intercept requests for dynamic resources, that is, requests handled by the controller layer. A common use is checking whether a user has permission to access the backend; a more advanced use is combining interceptors with WebSocket to intercept WebSocket requests and process them accordingly. Interceptors do not intercept static resources. Spring Boot's default static directory is resources/static; static pages, js, css, images and so on in that directory are not intercepted (it depends on how you configure things — in some cases they are intercepted, which I will point out below).

1. Fast use of interceptors

Using an interceptor takes two steps: define the interceptor and configure (register) it. For the configuration step, versions after Spring Boot 2.0 differ from earlier versions; I will focus on the possible pitfalls here.

1.1 defining interceptors

To define an interceptor, you only need to implement the HandlerInterceptor interface, the ancestor of all interceptors, whether custom or provided by Spring. So let's first understand this interface. It has three methods: preHandle(), postHandle() and afterCompletion().

preHandle() method: executed after a url has been matched to a method in the corresponding Controller, but before that method runs. Its return value decides whether the request is let through: return true to continue, false to block it.
postHandle() method: executed after the Controller method has finished, but before the DispatcherServlet renders the view. That is why this method receives a ModelAndView parameter, which you can modify here.
afterCompletion() method: as the name implies, executed after the whole request has been processed (including view rendering); resources can be cleaned up here. This method only runs if preHandle() executed successfully and returned true.

Now that you understand the interface, customize an interceptor.

/**
 * custom interceptor 
 * @author shengwu ni
 * @date 2018/08/03
 */
public class MyInterceptor implements HandlerInterceptor {

    private static final Logger logger = LoggerFactory.getLogger(MyInterceptor.class);

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

        HandlerMethod handlerMethod = (HandlerMethod) handler;
        Method method = handlerMethod.getMethod();
        String methodName = method.getName();
        logger.info("==== Intercepted method: {}, executing before the method runs ====", methodName);
        // Return true to continue execution. Return false to cancel the current request
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        logger.info("Executed after the Controller method is called, but before the view is rendered");
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        logger.info("The whole request has been processed and the DispatcherServlet has rendered the view; now some cleanup work can be done");
    }
}

OK, so far, the interceptor has been defined. The next step is to configure the interceptor.

1.2 configure interceptors

Before Spring Boot 2.0 we extended the WebMvcConfigurerAdapter class and overrode the addInterceptors method to register interceptors. After Spring Boot 2.0 that class is deprecated (it can still be used); one replacement is extending WebMvcConfigurationSupport, as follows:

@Configuration
public class MyInterceptorConfig extends WebMvcConfigurationSupport {

    @Override
    protected void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new MyInterceptor()).addPathPatterns("/**");
        super.addInterceptors(registry);
    }
}

In this configuration we override the addInterceptors method and add our custom interceptor. The addPathPatterns method specifies which requests to intercept; here we intercept all of them. With that, the interceptor is configured. Next, write a Controller to test:

@Controller
@RequestMapping("/interceptor")
public class InterceptorController {

    @RequestMapping("/test")
    public String test() {
        return "hello";
    }
}

It forwards to the hello.html page, which simply outputs "hello interceptor". Start the project, visit localhost:8080/interceptor/test in a browser and check the console log:

==== Intercepted method: test, executing before the method runs ====  
Executed after the Controller method is called, but before the view is rendered  
The whole request has been processed and the DispatcherServlet has rendered the view; now some cleanup work can be done

You can see that the interceptor is in effect, and you can see the order in which it is executed.

1.3 solve the problem of static resources being intercepted

The definition and configuration of the interceptor are done, but is that all? In fact, with the configuration above you will find that static resources are blocked. Place an image or html file in the resources/static/ directory, start the project and try to access it directly: you will see it cannot be accessed.

In other words, although Spring Boot 2.0 deprecates WebMvcConfigurerAdapter, extending WebMvcConfigurationSupport causes the default static-resource handling to be lost, so we have to map the static resources manually.

How do we expose them? In the MyInterceptorConfig configuration class, besides overriding addInterceptors, we need to override another method, addResourceHandlers, to expose the static resources:

/**
 * It is used to specify that static resources are not blocked. Otherwise, inheriting WebMvcConfigurationSupport will cause static resources to be inaccessible directly
 * @param registry
 */
@Override
protected void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/**").addResourceLocations("classpath:/static/");
    super.addResourceHandlers(registry);
}

After this configuration, restart the project and the static resources can be accessed normally. If you like to dig deeper you won't stop here: the method above solves the problem, but there is a more convenient way to configure things.

Instead of extending the WebMvcConfigurationSupport class, we implement the WebMvcConfigurer interface directly and override the addInterceptors method to add the custom interceptor, as follows:

@Configuration
public class MyInterceptorConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Implementing WebMvcConfigurer will not cause static resources to be blocked
        registry.addInterceptor(new MyInterceptor()).addPathPatterns("/**");
    }
}

This is very convenient: when you implement the WebMvcConfigurer interface, Spring Boot's default static resources are not intercepted.

Both approaches work; the detailed differences between them are worth further study for interested readers. Because of those differences, extending the WebMvcConfigurationSupport class suits projects with separated front and back ends, where the backend does not serve static resources (so none need to be exposed); implementing the WebMvcConfigurer interface suits projects without front/back separation, because they need to serve images, css, js files and so on.

2. Use example of interceptor

2.1 judge whether the user is logged in

For a typical login feature we can do one of two things: store the user in the session, or generate a token for each logged-in user. The second approach is better: after a successful login the user carries the token with every request, while without logging in there is no token, so the server can check the token parameter to determine whether the user is logged in and intercept accordingly. Let's change the preHandle method as follows:

@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

    HandlerMethod handlerMethod = (HandlerMethod) handler;
    Method method = handlerMethod.getMethod();
    String methodName = method.getName();
    logger.info("==== Intercepted method: {}, executing before the method runs ====", methodName);

    // Judge whether the user has logged in. Generally, after logging in, the user has a corresponding token
    String token = request.getParameter("token");
    if (null == token || "".equals(token)) {
        logger.info("The user is not logged in and has no permission to proceed. Please log in");
        return false;
    }

    // Return true to continue execution. Return false to cancel the current request
    return true;
}

Restart the project, visit localhost:8080/interceptor/test in the browser and check the console log: the request is blocked. Visit localhost:8080/interceptor/test?token=123 instead and it proceeds normally.

2.2 cancel interception

Following the above, if I want to intercept all url requests starting with /admin, I add that prefix in the interceptor configuration. But in a real project there may be a scenario where a request starts with /admin yet must not be intercepted, such as /admin/login; this needs extra configuration. So can we build something like a switch — flexible and pluggable — that turns interception off exactly where it is not needed?

Yes. We can define an annotation specifically for cancelling interception: if a method in a Controller should not be intercepted, we add our custom annotation to that method. First, define the annotation:

/**
 * This annotation is used to specify that a method does not need to be intercepted
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface UnInterception {
}

Then add the annotation to a method in the Controller, and add the skip logic to the interceptor's handler method, as follows:

@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

    HandlerMethod handlerMethod = (HandlerMethod) handler;
    Method method = handlerMethod.getMethod();
    String methodName = method.getName();
    logger.info("====Intercepted method:{},Execute before the method executes====", methodName);

    // Through the method, you can get the custom annotation on the method, and then judge whether the method is to be blocked by the annotation
    // @UnInterception is our custom annotation
    UnInterception unInterception = method.getAnnotation(UnInterception.class);
    if (null != unInterception) {
        return true;
    }
    // Methods without the annotation fall through to the normal interception logic
    // (such as the token check); return true to continue, false to cancel the request
    return true;
}

For the full method code in the Controller, please refer to the source code. Restart the project and visit http://localhost:8080/interceptor/test2?token=123 in the browser: methods carrying this annotation are not intercepted.
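The runtime lookup the interceptor relies on — @Retention(RetentionPolicy.RUNTIME) plus Method.getAnnotation — can be verified with plain reflection. A self-contained sketch; DemoController and its methods are invented for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class UnInterceptionDemo {

    // Same shape as the annotation defined above
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface UnInterception {
    }

    // Stand-in for a Controller: one open method, one intercepted method
    public static class DemoController {
        @UnInterception
        public void open() { }

        public void secured() { }
    }

    // The same check the interceptor's preHandle performs
    public static boolean skipsInterception(Method method) {
        return method.getAnnotation(UnInterception.class) != null;
    }

    public static void main(String[] args) throws NoSuchMethodException {
        Method open = DemoController.class.getMethod("open");
        Method secured = DemoController.class.getMethod("secured");
        System.out.println(skipsInterception(open));    // true
        System.out.println(skipsInterception(secured)); // false
    }
}
```

Note that RetentionPolicy.RUNTIME is essential: with the default CLASS retention, getAnnotation would return null even for annotated methods and the switch would silently never fire.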

3. summary

This section introduced the use of interceptors in Spring Boot, from creating and configuring interceptors to their impact on static resources. After Spring Boot 2.0, interceptors can be configured in two ways, chosen according to the situation. Finally, two common practical scenarios were given; I hope readers digest them and master the use of interceptors.

Course source code download address: click here to download

Lesson 14: integrating Redis in Spring Boot

1. Introduction to redis

Redis is a non-relational database (NoSQL). NoSQL stores data as key-value pairs and, unlike traditional relational databases, does not necessarily follow their basic requirements such as the SQL standard, ACID properties or table structure. Databases of this kind share these characteristics: non-relational, distributed, open source, horizontally scalable.
NoSQL usage scenarios include highly concurrent reads and writes, efficient storage and access of massive data, and high scalability and availability of data.
Redis keys are strings; values can be of several types, including string, hash, list, set and sorted set (zset). These data types support push/pop, add/remove, intersection and union, and many richer operations. Redis also supports various kinds of sorting. For efficiency, data is cached in memory, and Redis can periodically write updated data to disk or append modification operations to a log file. What are the benefits of Redis? For a simple example, see the figure below:

The Redis cluster is kept in sync with MySQL. Data is fetched from Redis first; if Redis is unavailable, it is fetched from MySQL instead, so the website does not go down. For more about Redis and its usage scenarios, a quick web search will turn up plenty of material; I won't go into detail here.
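The read path just described, where the cache is tried first and the database is the fallback that then backfills the cache, is the classic cache-aside pattern. A minimal plain-Java sketch, with in-memory maps standing in for Redis and MySQL (all names here are illustrative, not from the course code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CacheAsideDemo {

    // Stand-ins for Redis (cache) and MySQL (source of truth)
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> database = new HashMap<>();

    public Optional<String> get(String key) {
        // 1. Try the cache first
        String cached = cache.get(key);
        if (cached != null) {
            return Optional.of(cached);
        }
        // 2. Cache miss (or cache down): fall back to the database
        String fromDb = database.get(key);
        if (fromDb != null) {
            // 3. Backfill the cache so the next read is served from memory
            cache.put(key, fromDb);
        }
        return Optional.ofNullable(fromDb);
    }

    public static void main(String[] args) {
        CacheAsideDemo store = new CacheAsideDemo();
        store.database.put("name", "CSDN");

        System.out.println(store.get("name").orElse("miss")); // loaded from DB, then cached
        System.out.println(store.cache.containsKey("name"));  // now present in the cache
    }
}
```

The same pattern is what the summary at the end of this lesson describes: updates and deletes must also be propagated to the cache, otherwise reads serve stale data.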

2. Redis installation

In this course, Redis is installed on CentOS 7 inside a VMware virtual machine. If you have your own Aliyun server, you can install Redis there instead. As long as you can ping the IP of the virtual machine or server, and open the corresponding port on it (or turn off the firewall), you can access Redis. Here is how to install it:

  • Install the gcc compiler

Since Redis needs to be compiled later, install the gcc compiler first. Aliyun hosts ship with gcc installed by default; on a self-installed virtual machine you need to install it yourself:

yum install gcc-c++
  • Download redis

There are two ways to get the installation package. One is to download it from the official website (https://redis.io) and then transfer the package to CentOS; the other is to download it directly with wget:

wget http://download.redis.io/releases/redis-3.2.8.tar.gz

If wget has not been installed, you can install it through the following command:

yum install wget
  • Decompression installation

Unzip the installation package:

tar -zxvf redis-3.2.8.tar.gz

Then move the redis-3.2.8 folder to /usr/local/; software is usually installed under /usr/local. Next, enter the folder /usr/local/redis-3.2.8/ and run the make command to complete the installation.
[Note] If make fails, try the following commands:

make MALLOC=libc
make install
  • Modify configuration file

After the installation succeeds, the configuration file needs to be modified: the IPs allowed to access Redis, background execution, the password, and so on.
Open the Redis configuration file: vi redis.conf
In command mode, type /bind to find the bind setting (press n to jump to the next match). Set bind to 0.0.0.0 to allow any server to access Redis, that is:

bind 0.0.0.0

Using the same method, change daemonize to yes (no by default) to allow Redis to run in the background.
Uncomment the requirepass line and set the password to 123456 (choose your own password).

  • Start redis

In the redis-3.2.8 directory, specify the newly modified configuration file redis.conf to start redis:

redis-server ./redis.conf

Then start the Redis client:

redis-cli

Since we set a password, after starting the client enter auth 123456 to log in.
Now let's test it by inserting a record into Redis:

set name CSDN

Then fetch it back:

get name

If CSDN is returned, everything works.

3. Spring Boot integrated Redis

3.1 dependency import

Integrating Redis in Spring Boot is very convenient: you only need to import a Redis starter dependency, as follows:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!--Alibaba fastjson -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.35</version>
</dependency>

Here we also import Alibaba's fastjson, to make it easy to convert an entity into a JSON string when we store one later.

3.2 Redis configuration

After importing the dependency, we configure redis in the application.yml file:

server:
  port: 8080
spring:
  #redis related configuration
  redis:
    database: 5
    # To configure the host address of redis, you need to change it to your own
    host: 192.168.48.190
    port: 6379
    password: 123456
    timeout: 5000
    jedis:
      pool:
        # Maximum number of idle connections in the pool; the default is 8
        max-idle: 500
        # Minimum number of idle connections in the pool; the default is 0
        min-idle: 50
        # Maximum number of connections; -1 means no limit. Once max-active jedis instances have been allocated, the pool is exhausted
        max-active: 1000
        # Maximum time to wait for an available connection, in milliseconds; the default is -1, meaning wait forever. If the wait is exceeded, a JedisConnectionException is thrown
        max-wait: 2000

3.3 introduction to common api

Spring Boot's support for Redis is already very complete, and its rich API is more than enough for daily development. Here I introduce some of the most commonly used APIs; the rest you can explore on your own and simply look up when you need them.

There are two Redis templates: RedisTemplate and StringRedisTemplate. We do not use RedisTemplate here. RedisTemplate is meant for operating on objects, and when storing objects it uses Redis's default JDK serializer, so what gets stored looks like garbled bytes. We could define our own serializer, but that is more trouble, so we use StringRedisTemplate instead. StringRedisTemplate operates on strings: we convert an entity class to a JSON string before storing it, and convert it back to the corresponding object after reading it out. That is why I imported Alibaba's fastjson.
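The entity-to-JSON round trip that fastjson performs for us (via JSON.toJSONString and JSON.parseObject) can be illustrated with a tiny hand-rolled serializer for flat, string-valued objects. This is only a sketch of the idea of storing entities as strings, not fastjson's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonRoundTripDemo {

    // Serializes a flat map of string fields into a JSON object string,
    // the same shape fastjson would produce for a simple entity
    static String toJson(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 1) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, String> user = new LinkedHashMap<>();
        user.put("username", "CSDN");
        user.put("password", "123456");
        // This string is exactly what would be written to Redis with setString(key, json)
        System.out.println(toJson(user));
    }
}
```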

3.3.1 redis:string type

Create a RedisService, inject a StringRedisTemplate, and use stringRedisTemplate.opsForValue() to obtain a ValueOperations<String, String> object, through which you can read from and write to the Redis database. As follows:

@Service
public class RedisService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /**
     * set redis: string type
     * @param key key
     * @param value value
     */
    public void setString(String key, String value){
        ValueOperations<String, String> valueOperations = stringRedisTemplate.opsForValue();
        valueOperations.set(key, value);
    }

    /**
     * get redis: string type
     * @param key key
     * @return
     */
    public String getString(String key){
        return stringRedisTemplate.opsForValue().get(key);
    }
}

This object operates on strings, but we can also save entity classes: we only need to convert the entity class to a JSON string first. Here's a test:

@RunWith(SpringRunner.class)
@SpringBootTest
public class Course14ApplicationTests {

    private static final Logger logger = LoggerFactory.getLogger(Course14ApplicationTests.class);

	@Resource
	private RedisService redisService;

	@Test
	public void contextLoads() {
        //Test the string type of redis
        redisService.setString("weichat","Programmer's private dishes");
        logger.info("My official account for WeChat is:{}", redisService.getString("weichat"));

        // If it's an entity, we can use the json tool to convert it to a json string,
        User user = new User("CSDN", "123456");
        redisService.setString("userInfo", JSON.toJSONString(user));
        logger.info("User information:{}", redisService.getString("userInfo"));
    }
}

First start redis, and then run the test case. Observe the log printed by the console as follows:

My official account for WeChat is: Programmer's private dishes
User information: {"password":"123456","username":"CSDN"}

3.3.2 redis:hash type

The hash type works on the same principle as string, except that there are two keys. Use stringRedisTemplate.opsForHash() to obtain a HashOperations<String, Object, Object> object. For example, suppose we need to store order information: all orders are placed under the key order, and the order entities of different users are distinguished by the user's id, which acts as the second key.

@Service
public class RedisService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /**
     * set redis: hash type
     * @param key key
     * @param fieldKey field key
     * @param value value
     */
    public void setHash(String key, String fieldKey, String value){
        HashOperations<String, Object, Object> hashOperations = stringRedisTemplate.opsForHash();
        hashOperations.put(key, fieldKey, value);
    }

    /**
     * get redis: hash type
     * @param key key
     * @param fieldKey field key
     * @return the stored value
     */
    public String getHash(String key, String fieldKey){
        return (String) stringRedisTemplate.opsForHash().get(key, fieldKey);
    }
}

As you can see, hash is no different from string apart from a few extra parameters. Operating Redis in Spring Boot is simple and convenient. To test:

@RunWith(SpringRunner.class)
@SpringBootTest
public class Course14ApplicationTests {

    private static final Logger logger = LoggerFactory.getLogger(Course14ApplicationTests.class);

	@Resource
	private RedisService redisService;

	@Test
	public void contextLoads() {
        //Test the hash type of redis
        User user = new User("CSDN", "123456");
        redisService.setHash("user", "name", JSON.toJSONString(user));
        logger.info("User name:{}", redisService.getHash("user","name"));
    }
}
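The "two keys" structure used above, an outer key plus a field key, can be pictured as a map of maps. A minimal plain-Java sketch of the semantics (class and method names here are illustrative, not part of the course code):

```java
import java.util.HashMap;
import java.util.Map;

public class HashTypeDemo {

    // Outer key -> (field key -> value), mirroring Redis HSET key field value
    private final Map<String, Map<String, String>> store = new HashMap<>();

    // Equivalent of hashOperations.put(key, fieldKey, value)
    public void setHash(String key, String fieldKey, String value) {
        store.computeIfAbsent(key, k -> new HashMap<>()).put(fieldKey, value);
    }

    // Equivalent of opsForHash().get(key, fieldKey)
    public String getHash(String key, String fieldKey) {
        Map<String, String> fields = store.get(key);
        return fields == null ? null : fields.get(fieldKey);
    }

    public static void main(String[] args) {
        HashTypeDemo demo = new HashTypeDemo();
        // All orders live under "order"; individual users are the field keys
        demo.setHash("order", "user1", "{\"orderId\":1}");
        demo.setHash("order", "user2", "{\"orderId\":2}");
        System.out.println(demo.getHash("order", "user1"));
        System.out.println(demo.getHash("order", "user2"));
    }
}
```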

3.3.3 redis:list type

Use stringRedisTemplate.opsForList() to obtain a ListOperations<String, String> object for operating Redis lists. A Redis list is a simple list of strings that can be pushed from the left or the right; a list can contain at most 2^32 - 1 elements.

@Service
public class RedisService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /**
     * set redis:list type
     * @param key key
     * @param value value
     * @return
     */
    public long setList(String key, String value){
        ListOperations<String, String> listOperations = stringRedisTemplate.opsForList();
        return listOperations.leftPush(key, value);
    }

    /**
     * get redis:list type
     * @param key key
     * @param start start
     * @param end end
     * @return
     */
    public List<String> getList(String key, long start, long end){
        return stringRedisTemplate.opsForList().range(key, start, end);
    }
}

As you can see, these APIs all follow the same form, which makes them easy to remember and use. I won't expand on the details of each API; you can read the API documentation yourself, and in fact you can usually tell what an API does from its parameters and return value. To test:

@RunWith(SpringRunner.class)
@SpringBootTest
public class Course14ApplicationTests {

    private static final Logger logger = LoggerFactory.getLogger(Course14ApplicationTests.class);

	@Resource
	private RedisService redisService;

	@Test
	public void contextLoads() {
        //Test the list type of redis
        redisService.setList("list", "football");
        redisService.setList("list", "basketball");
        List<String> valList = redisService.getList("list",0,-1);
        for(String value :valList){
            logger.info("The list contains: {}", value);
        }
    }
}
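The list semantics relied on above can be sketched in plain Java: leftPush inserts at the head, and range(0, -1) returns the whole list because negative indices count from the end, as in Redis's LRANGE command (names here are illustrative):

```java
import java.util.LinkedList;
import java.util.List;

public class ListTypeDemo {

    private final LinkedList<String> list = new LinkedList<>();

    // Equivalent of listOperations.leftPush(key, value): insert at the head
    public long leftPush(String value) {
        list.addFirst(value);
        return list.size();
    }

    // Equivalent of opsForList().range(key, start, end); negative indices
    // count from the end, so range(0, -1) returns the whole list
    public List<String> range(long start, long end) {
        int size = list.size();
        int from = (int) (start < 0 ? size + start : start);
        int to = (int) (end < 0 ? size + end : end);
        return list.subList(Math.max(from, 0), Math.min(to, size - 1) + 1);
    }

    public static void main(String[] args) {
        ListTypeDemo demo = new ListTypeDemo();
        demo.leftPush("football");
        demo.leftPush("basketball");
        // basketball was pushed last, so it sits at the head of the list
        System.out.println(demo.range(0, -1));
    }
}
```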

4. Summary

This section introduced Redis usage scenarios and the installation process, as well as the detailed steps for integrating Redis in Spring Boot. In real projects, Redis is usually used as a cache: when querying, you first look in Redis; if the data is there, you take it from Redis, and if not, you query the database and synchronize the result into Redis, so that the next query hits Redis. Updates and deletes work the same way and must also be synchronized to Redis. Redis is widely used in high-concurrency scenarios.

Course source code download address: Poke me downloading

Lesson 15: integrating ActiveMQ in Spring Boot

1. Introduction to JMS and ActiveMQ

1.1 what is JMS

Baidu Encyclopedia's explanation:

JMS is the Java Message Service application program interface. It is an API for message oriented middleware (MOM) in the Java platform. It is used to send messages between two applications or in the distributed system for asynchronous communication. Java Message Service is a platform independent API, and most mom providers support JMS.

JMS is only a set of interfaces; different vendors and open source organizations provide different implementations, and ActiveMQ, launched by Apache, is one of them. JMS defines several object models:

Connection factory: ConnectionFactory
JMS Connection: Connection
JMS Session: Session
JMS destination: Destination
JMS Producer: Producer
JMS consumer: Consumer
There are two types of JMS messages: point-to-point and publish / subscribe.

As you can see, JMS is similar to JDBC: JDBC is an API for accessing many different relational databases, while JMS provides the same kind of vendor-independent access to messaging services. This article mainly uses ActiveMQ.

1.2 ActiveMQ

ActiveMQ is a powerful open source message bus of Apache. ActiveMQ fully supports JMS 1.1 and J2EE 1.4 specifications. Although it has been a long time since the JMS specification was introduced, JMS still plays a special role in today's Java EE applications. ActiveMQ is used in the processing of asynchronous messages. The so-called asynchronous message means that the message sender does not need to wait for the processing and return of the message receiver, or even care whether the message is sent successfully.

There are two main types of asynchronous messages: queue and topic. Queue is used for point-to-point message communication and topic is used for publish / subscribe message communication. This chapter focuses on learning how to use these two forms of messages in Spring Boot.

2. ActiveMQ installation

To use ActiveMQ, you need to go to the official website to download. The official website address is: http://activemq.apache.org/
This course uses apache-activemq-5.15.3. After downloading and unpacking you get a folder named apache-activemq-5.15.3, and that's it: installation is that simple, and ActiveMQ works out of the box. Inside the folder you will see activemq-all-5.15.3.jar. You could add this jar to a project directly, but with Maven we don't need it.

Before using ActiveMQ you need to start it. The unpacked directory contains a bin directory with two subdirectories, win32 and win64. Pick the one that matches your machine and run activemq.bat inside it to start ActiveMQ.
After startup, enter http://127.0.0.1:8161/admin/ in the browser to access the ActiveMQ console. The username and password are admin/admin. As follows:

We can see two tabs, Queues and Topics, which are the views for point-to-point messages and publish/subscribe messages respectively. So what are point-to-point and publish/subscribe messages?

Point-to-point messages: a producer publishes a message to a queue, and a consumer takes it out of the queue and consumes it. Note: once a message has been consumed, it is no longer stored in the queue, so a consumer cannot consume a message that has already been consumed. A queue supports multiple consumers, but each message is consumed by only one of them.

Publish/subscribe messages: a producer publishes a message to a topic, and multiple subscribed consumers consume it at the same time. Unlike point-to-point, a message published to a topic is consumed by all of its subscribers. The concrete implementation is analyzed below.
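The difference between the two models can be sketched with plain Java collections: a queue hands each message to exactly one consumer, while a topic delivers a copy to every subscriber. This only illustrates the semantics, not how ActiveMQ is implemented:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class QueueVsTopicDemo {

    public static void main(String[] args) {
        // Point-to-point: a message is removed from the queue when consumed,
        // so only one consumer ever sees it
        Queue<String> queue = new ArrayDeque<>();
        queue.offer("Queue: hello activemq!");
        String consumer1Got = queue.poll();  // the first consumer takes it
        String consumer2Got = queue.poll();  // nothing left for the second
        System.out.println(consumer1Got);
        System.out.println(consumer2Got);

        // Publish/subscribe: the topic delivers a copy to every subscriber
        List<List<String>> subscriberInboxes = new ArrayList<>();
        subscriberInboxes.add(new ArrayList<>());
        subscriberInboxes.add(new ArrayList<>());
        String msg = "Topic: hello activemq!";
        for (List<String> inbox : subscriberInboxes) {
            inbox.add(msg);  // broadcast to all subscribers
        }
        for (List<String> inbox : subscriberInboxes) {
            System.out.println(inbox.get(0));
        }
    }
}
```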

3. ActiveMQ integration

3.1 dependency import and configuration

To integrate ActiveMQ in Spring Boot, you need to import the following starter dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-activemq</artifactId>
</dependency>

Then in the application.yml configuration file, configure activemq as follows:

spring:
  activemq:
    # activemq broker url
    broker-url: tcp://localhost:61616
    in-memory: true
    pool:
      # If it is set to true here, you need to add a dependency package of ActiveMQ pool. Otherwise, automatic configuration will fail and JmsMessagingTemplate cannot be injected
      enabled: false

3.2 creation of queue and Topic

First, we need to create two kinds of messages: Queue and Topic. To create these two kinds of messages, we put them into ActiveMqConfig, as follows:

/**
 * activemq Configuration
 * @author  shengwu ni
 */
@Configuration
public class ActiveMqConfig {
    /**
     * Publish / subscribe mode queue name
     */
    public static final String TOPIC_NAME = "activemq.topic";
    /**
     * Point to point mode queue name
     */
    public static final String QUEUE_NAME = "activemq.queue";

    @Bean
    public Destination topic() {
        return new ActiveMQTopic(TOPIC_NAME);
    }

    @Bean
    public Destination queue() {
        return new ActiveMQQueue(QUEUE_NAME);
    }
}

As you can see, the two kinds of destinations, Queue and Topic, are created with new ActiveMQQueue and new ActiveMQTopic respectively, passing in the corresponding destination name. This way, both destinations can be injected directly as components elsewhere.

3.3 message sending interface

In Spring Boot, we just need to inject JmsMessagingTemplate template to send messages quickly, as follows:

/**
 * message sender 
 * @author shengwu ni
 */
@Service
public class MsgProducer {

    @Resource
    private JmsMessagingTemplate jmsMessagingTemplate;

    public void sendMessage(Destination destination, String msg) {
        jmsMessagingTemplate.convertAndSend(destination, msg);
    }
}

The first parameter in the convertAndSend method is the destination of the message, and the second parameter is the specific message content.

3.4 point to point message production and consumption

3.4.1 production of point-to-point messages

We put message production into a Controller. Since the Queue destination bean was created above, we can inject it directly into the Controller and call the sendMessage method defined above to produce a message.

/**
 * ActiveMQ controller
 * @author shengwu ni
 */
@RestController
@RequestMapping("/activemq")
public class ActiveMqController {

    private static final Logger logger = LoggerFactory.getLogger(ActiveMqController.class);

    @Resource
    private MsgProducer producer;
    @Resource
    private Destination queue;

    @GetMapping("/send/queue")
    public String sendQueueMessage() {

        logger.info("===Start sending point-to-point messages===");
        producer.sendMessage(queue, "Queue: hello activemq!");
        return "success";
    }
}

3.4.2 point to point message consumption

Consuming point-to-point messages is very simple: we just specify the destination, and the JMS listener continuously listens for incoming messages on it, consuming each one as it arrives.

/**
 * Message consumer
 * @author shengwu ni
 */
@Service
public class QueueConsumer {

    /**
     * Receive point-to-point messages
     * @param msg
     */
    @JmsListener(destination = ActiveMqConfig.QUEUE_NAME)
    public void receiveQueueMsg(String msg) {
        System.out.println("The message received is:" + msg);
    }
}

As you can see, the @JmsListener annotation specifies the destination to listen on. Inside the receiving method, we can do whatever processing the business logic requires.

3.4.3 test

Start the project, enter http://localhost:8081/activemq/send/queue in the browser, and observe the console log output. The following line indicates that the message was sent and consumed successfully:

The message received is: Queue: hello activemq!

3.5 production and consumption of publish / subscribe messages

3.5.1 production of publish / subscribe messages

Just like with point-to-point messages, we inject the topic and call the producer's sendMessage method to send a subscription message, as follows:

@RestController
@RequestMapping("/activemq")
public class ActiveMqController {

    private static final Logger logger = LoggerFactory.getLogger(ActiveMqController.class);

    @Resource
    private MsgProducer producer;
    @Resource
    private Destination topic;

    @GetMapping("/send/topic")
    public String sendTopicMessage() {

        logger.info("===Start sending subscription message===");
        producer.sendMessage(topic, "Topic: hello activemq!");
        return "success";
    }
}

3.5.2 consumption of publish / subscribe messages

Consuming publish/subscribe messages differs from point-to-point: publish/subscribe supports multiple consumers consuming the same message. Moreover, Spring Boot defaults to point-to-point messaging, so the listener will not work for a topic out of the box; we would need to add a setting to application.yml:

spring:
  jms:
    pub-sub-domain: true

If this is set to false (Spring Boot's default), messaging is point-to-point. Setting it to true solves the problem, but then the point-to-point messages described above can no longer be consumed. We can't have both, so this is not a good solution.

A better solution is to define a separate container factory. The @JmsListener annotation only receives queue messages by default; to receive topic messages we need to set its containerFactory. Add the following to the ActiveMqConfig configuration class above:

/**
 * activemq Configuration
 *
 * @author shengwu ni
 */
@Configuration
public class ActiveMqConfig {
    // Omit other content

    /**
     * JmsListener Annotation only receives queue messages by default. If you want to receive topic messages, you need to set containerFactory
     */
    @Bean
    public JmsListenerContainerFactory topicListenerContainer(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Equivalent to configuration in application.yml: spring. JMS. Pub sub domain = true
        factory.setPubSubDomain(true);
        return factory;
    }
}

With this configuration in place, we can specify this container factory in the @JmsListener annotation when consuming topic messages. As follows:

/**
 * Topic Message consumer
 * @author shengwu ni
 */
@Service
public class TopicConsumer1 {

    /**
     * Receive subscription message
     * @param msg
     */
    @JmsListener(destination = ActiveMqConfig.TOPIC_NAME, containerFactory = "topicListenerContainer")
    public void receiveTopicMsg(String msg) {
        System.out.println("The message received is:" + msg);
    }

}

Just set the containerFactory attribute to our configured topicListenerContainer. Since topic messages can be consumed by multiple consumers, you can duplicate the consumer class and test them together; I won't paste that code here, please refer to the course source.

3.5.3 test

Start the project, enter http://localhost:8081/activemq/send/topic in the browser, and observe the console log output. The following lines, one per subscriber, indicate that the message was sent and consumed successfully:

The message received is: Topic: hello activemq!
The message received is: Topic: hello activemq!

4. Summary

This chapter introduced the concepts of JMS and ActiveMQ and the installation and startup of ActiveMQ, then analyzed in detail the configuration, production, and consumption of point-to-point and publish/subscribe messages in Spring Boot. ActiveMQ is a powerful open source message bus and very useful for asynchronous message processing; I hope you can digest it.

Course source code download address: Poke me downloading

Lesson 16: integrating Shiro in Spring Boot

Shiro is a powerful yet simple and easy-to-use Java security framework, mainly used for authentication, authorization, encryption, session management, and so on, and it can secure any application. This course mainly introduces Shiro's authentication and authorization features.

1. Three core components of Shiro

Shiro has three core components: Subject, SecurityManager, and Realm. Let's first look at the relationship between them.

  1. Subject: the authentication subject. It contains two pieces of information, Principals and Credentials. Let's look at what these two are.

Principals: identity. This can be a username, email address, mobile number, etc., used to identify the principal logging in;
Credentials: proof of identity, such as a password or digital certificate.

To put it plainly, the subject is the thing being authenticated. The most common case is a username and password: when a user logs in, Shiro needs to verify their identity, i.e. authenticate the Subject.

  2. SecurityManager: the security manager. This is the core of Shiro's architecture and acts as an umbrella over all of Shiro's internal components. We usually configure a SecurityManager in the project, and most of a developer's attention goes to the Subject. When we interact with the Subject, it is actually the SecurityManager doing the security work behind the scenes.

  3. Realm: a realm is the bridge between Shiro and the application's security data. Whenever Shiro needs to interact with security data, such as user accounts or access control information, it looks it up from one or more realms. We usually define our own Realm, which is explained in detail below.

1. Shiro identity and authority authentication

1.2 Shiro identity authentication

Let's analyze the process of Shiro identity authentication and take a look at an official authentication chart:

Step 1: after calling the Subject.login(token) method, the application code passes in the AuthenticationToken instance token representing the identity and credentials of the end user.

Step 2: the Subject instance delegates to the application's SecurityManager (Shiro's security management) to start the actual authentication work. This is where the real authentication happens.

Step 3, 4, 5: then the security manager will perform security authentication according to the specific realm. As you can see from the figure, the realm can be customized (Custom Realm).

1.3 Shiro authority authentication

Authority authentication, i.e. access control, controls who can access which resources in the application. Its three core elements are: permissions, roles, and users.

Permission: the right to operate on a resource, such as visiting a page, or adding, modifying, deleting, or viewing the data of a module;
Role: the user's role; one role can hold multiple permissions;
User: in Shiro, the user accessing the system, i.e. the Subject mentioned above.

The relationship between them can be shown as follows:

A user can have multiple roles, and different roles can hold different or identical permissions. For example, suppose there are three roles: roles 1 and 2 are ordinary roles and role 3 is an administrator. Role 1 can only view information, role 2 can only add information, while the administrator can do both and can also delete information.

2. Spring Boot integrated Shiro process

2.1 dependency import

Spring Boot 2.0.3 integration Shiro needs to import the following starter dependency:

<dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-spring</artifactId>
    <version>1.4.0</version>
</dependency>

2.2 database table data initialization

Three tables are involved here: a user table, a role table, and a permission table. In a demo we could simply mock the data without creating tables, but to stay closer to a real project we add MyBatis to operate the database. Below is the script for the tables.

CREATE TABLE `t_role` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `rolename` varchar(20) DEFAULT NULL COMMENT 'Role name',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8

CREATE TABLE `t_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'User primary key',
  `username` varchar(20) NOT NULL COMMENT 'User name',
  `password` varchar(20) NOT NULL COMMENT 'Password',
  `role_id` int(11) DEFAULT NULL COMMENT 'Foreign key Association role surface',
  PRIMARY KEY (`id`),
  KEY `role_id` (`role_id`),
  CONSTRAINT `t_user_ibfk_1` FOREIGN KEY (`role_id`) REFERENCES `t_role` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8

CREATE TABLE `t_permission` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `permissionname` varchar(50) NOT NULL COMMENT 'Permission name',
  `role_id` int(11) DEFAULT NULL COMMENT 'Foreign key Association role',
  PRIMARY KEY (`id`),
  KEY `role_id` (`role_id`),
  CONSTRAINT `t_permission_ibfk_1` FOREIGN KEY (`role_id`) REFERENCES `t_role` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8

The t_user, t_role, and t_permission tables store user, role, and permission information respectively. After creating the tables, we insert some test data.
t_user table:

id username password role_id
1 csdn1 123456 1
2 csdn2 123456 2
3 csdn3 123456 3

t_role table:

id rolename
1 admin
2 teacher
3 student

t_permission table:

id permissionname role_id
1 user:* 1
2 student:* 2

A note on the permissions here: user:* means the permission can be user:create or anything else under user; the * is a wildcard, and we can define the parts ourselves. For details, see the Shiro configuration below.
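The wildcard check performed by a permission like user:* can be sketched as follows. This is a simplified version of Shiro's WildcardPermission matching (real Shiro also treats a shorter held permission such as user as implying user:create, and supports comma-separated sub-parts); the method names are illustrative:

```java
public class WildcardPermissionDemo {

    // Returns true if a held permission (possibly containing "*")
    // implies the requested one, comparing part by part, split on ":"
    static boolean implies(String held, String requested) {
        String[] heldParts = held.split(":");
        String[] reqParts = requested.split(":");
        for (int i = 0; i < reqParts.length; i++) {
            if (i >= heldParts.length) {
                return false;  // held permission is narrower than the request
            }
            if (!heldParts[i].equals("*") && !heldParts[i].equals(reqParts[i])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Role 1 holds user:*, so any user:... permission is granted
        System.out.println(implies("user:*", "user:create"));
        System.out.println(implies("user:*", "user:delete"));
        // Role 2 holds student:*, which does not cover user permissions
        System.out.println(implies("student:*", "user:create"));
    }
}
```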

2.3 custom Realm

With the database tables and data in place, we can customize the realm. To do this, we extend the AuthorizingRealm class, which encapsulates many methods and itself ultimately derives from Realm. After extending AuthorizingRealm, we need to override two methods:

doGetAuthenticationInfo(): verifies the user currently logging in and obtains the authentication information
doGetAuthorizationInfo(): grants roles and permissions to the user who has logged in successfully

The specific implementation is as follows. I put related explanations in the code comments, which is more convenient and intuitive:

/**
 * Custom realm
 * @author shengwu ni
 */
public class MyRealm extends AuthorizingRealm {

    @Resource
    private UserService userService;

    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principalCollection) {
        // Get user name
        String username = (String) principalCollection.getPrimaryPrincipal();
        SimpleAuthorizationInfo authorizationInfo = new SimpleAuthorizationInfo();
        // Set the role for the user. The role information is retrieved in the "t" role table
        authorizationInfo.setRoles(userService.getRoles(username));
        // Set permissions for the user. The permission information is retrieved from the t_permission table
        authorizationInfo.setStringPermissions(userService.getPermissions(username));
        return authorizationInfo;
    }

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken authenticationToken) throws AuthenticationException {
        // Get the user name according to the token. If you don't know how to get the token, you can ignore it first. It will be explained later
        String username = (String) authenticationToken.getPrincipal();
        // Query the user from the database according to the user name
        User user = userService.getByUsername(username);
        if(user != null) {
            // Save the current user to the session
            SecurityUtils.getSubject().getSession().setAttribute("user", user);
            // Pass in the user name and password for authentication and return the authentication information
            AuthenticationInfo authcInfo = new SimpleAuthenticationInfo(user.getUsername(), user.getPassword(), "myRealm");
            return authcInfo;
        } else {
            return null;
        }
    }
}

These two methods show how authentication works. During authentication, the user record is first looked up from the database by the username the user entered; the password is not involved at this point, so even if the password the user typed is wrong, the user record is still found. The user's real credentials are then wrapped into authcInfo and returned to Shiro. Shiro then compares the username and password submitted from the front end against this real information; this is where the password is actually verified. If verification passes, the user is logged in; otherwise Shiro jumps to the configured page. Authorization works the same way: the roles and permissions associated with the username are fetched from the database, wrapped into authorizationInfo, and returned to Shiro.

2.3 Shiro configuration

The custom realm is written, and Shiro needs to be configured next. We mainly configure three things: Custom realm, security manager and Shiro filter. As follows:

To configure a custom realm:

@Configuration
public class ShiroConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShiroConfig.class);

    /**
     * Inject custom realm
     * @return MyRealm
     */
    @Bean
    public MyRealm myAuthRealm() {
        MyRealm myRealm = new MyRealm();
        logger.info("====myRealm Registration completed=====");
        return myRealm;
    }
}

To configure the security manager SecurityManager:

@Configuration
public class ShiroConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShiroConfig.class);

    /**
     * Injection Security Manager
     * @return SecurityManager
     */
    @Bean
    public SecurityManager securityManager() {
        // Add custom realm
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(myAuthRealm());
        logger.info("====securityManager Registration completed====");
        return securityManager;
    }
}

When configuring the security manager, we pass in the custom realm defined above, so that Shiro will delegate authentication and authorization to it.

To configure Shiro filters:

@Configuration
public class ShiroConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShiroConfig.class);
    
    /**
     * Injection Shiro filter
     * @param securityManager Security Manager
     * @return ShiroFilterFactoryBean
     */
    @Bean
    public ShiroFilterFactoryBean shiroFilter(SecurityManager securityManager) {
        // Define shiroFactoryBean
        ShiroFilterFactoryBean shiroFilterFactoryBean=new ShiroFilterFactoryBean();

        // Setting up a custom securityManager
        shiroFilterFactoryBean.setSecurityManager(securityManager);

        // Set the default login url, which will be accessed if authentication fails
        shiroFilterFactoryBean.setLoginUrl("/login");
        // Set the link to jump after success
        shiroFilterFactoryBean.setSuccessUrl("/success");
        // Set the unauthorized interface. If the permission authentication fails, the url will be accessed
        shiroFilterFactoryBean.setUnauthorizedUrl("/unauthorized");

        // LinkedHashMap is in order, and the order interceptor is configured
        Map<String,String> filterChainMap = new LinkedHashMap<>();

        // Configure the addresses that can be accessed anonymously. You can add and release some static resources according to the actual situation. anon means release
        filterChainMap.put("/css/**", "anon");
        filterChainMap.put("/imgs/**", "anon");
        filterChainMap.put("/js/**", "anon");
        filterChainMap.put("/swagger-*/**", "anon");
        filterChainMap.put("/swagger-ui.html/**", "anon");
        // Login url release
        filterChainMap.put("/login", "anon");

        // "/user/admin" Authentication is required at the beginning. authc means authentication
        filterChainMap.put("/user/admin*", "authc");
        // "/user/student" Role authentication is required at the beginning. Only "admin" is allowed
        filterChainMap.put("/user/student*/**", "roles[admin]");
        // Permission authentication is required at the beginning of "/ user/teacher". Only "user:create" is allowed
        filterChainMap.put("/user/teacher*/**", "perms[\"user:create\"]");

        // Configure logout filter
        filterChainMap.put("/logout", "logout");

        // Set the filterChainDefinitionMap of shiroFilterFactoryBean
        shiroFilterFactoryBean.setFilterChainDefinitionMap(filterChainMap);
        logger.info("====shiroFilterFactoryBean Registration completed====");
        return shiroFilterFactoryBean;
    }
}

When the Shiro filter is configured, a security manager is passed in. As you can see, this forms a chain: realm → SecurityManager → filter. In the filter we define a ShiroFilterFactoryBean and set the SecurityManager on it. Combining with the code above, the main things to configure are:

Default login url: the url will be accessed if authentication fails
url to jump after successful authentication
If the authority authentication fails, the url will be accessed
URLs to be blocked or released: all in one map

From the code above we can see that different URLs in the map have different permission requirements. Here is a summary of several commonly used filters.

Filter Explanation
anon Open access; anonymous users or visitors can access directly without logging in
authc Authentication is required
logout Log out; after execution, jumps directly to the url set by shiroFilterFactoryBean.setLoginUrl(), i.e. the login page
roles[admin] Can take multiple parameters, e.g. roles["admin,user"]; the request passes only if the user holds every listed role
perms[user] Can take multiple parameters, e.g. perms["user,admin"]; the request passes only if the user holds every listed permission
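One detail worth verifying: the filter chain map above is a LinkedHashMap because Shiro matches URL patterns in the order they were added. A quick plain-Java check of that insertion-order guarantee (the class and helper method here are my own, for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates why the filter chain uses a LinkedHashMap: iteration follows
// insertion order, so earlier, more specific patterns are matched first.
public class FilterOrderDemo {

    // Joins the map's keys in iteration order, separated by ";"
    static String iterationOrder(Map<String, String> chain) {
        StringBuilder sb = new StringBuilder();
        for (String pattern : chain.keySet()) {
            if (sb.length() > 0) sb.append(";");
            sb.append(pattern);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> chain = new LinkedHashMap<>();
        chain.put("/login", "anon");
        chain.put("/user/admin*", "authc");
        chain.put("/user/student*/**", "roles[admin]");
        // LinkedHashMap preserves put() order exactly
        System.out.println(iterationOrder(chain)); // /login;/user/admin*;/user/student*/**
    }
}
```

With a plain HashMap the iteration order would be unspecified, and a broad pattern could shadow a more specific one.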

2.4 use Shiro for certification

Here, we have finished the preparation of Shiro, and then we start to use Shiro for authentication. Let's design several interfaces first:

Interface 1: use http://localhost:8080/user/admin to verify authentication
Interface 2: use http://localhost:8080/user/student to verify role authentication
Interface 3: use http://localhost:8080/user/teacher to verify authority authentication
Interface 4: use http://localhost:8080/user/login to log in users

Then come to the certification process:

Process 1: direct access interface 1 (not logged in at this time), authentication fails, jump to login.html page to let users log in, login will request interface 4 to realize user login function, at this time Shiro has saved user information.
Process 2: access interface 1 again (at this time, the user has logged in), the authentication is successful, jump to the success.html page, and display the user information.
Process 3: access interface 2: test whether role authentication is successful.
Process 4: access interface 3: test whether authority authentication is successful.

2.4.1 identity, role and authority authentication interface

@Controller
@RequestMapping("/user")
public class UserController {

    /**
     * Authentication test interface
     * @param request
     * @return
     */
    @RequestMapping("/admin")
    public String admin(HttpServletRequest request) {
        Object user = request.getSession().getAttribute("user");
        return "success";
    }

    /**
     * Role authentication test interface
     * @param request
     * @return
     */
    @RequestMapping("/student")
    public String student(HttpServletRequest request) {
        return "success";
    }

    /**
     * Authority authentication test interface
     * @param request
     * @return
     */
    @RequestMapping("/teacher")
    public String teacher(HttpServletRequest request) {
        return "success";
    }
}

These three interfaces are very simple; each directly returns the specified page. As long as authentication succeeds, you jump to the normal page; if it fails, you jump to the page configured in ShiroConfig above.

2.4.2 user login interface

@Controller
@RequestMapping("/user")
public class UserController {

    /**
     * User login interface
     * @param user user
     * @param request request
     * @return string
     */
    @PostMapping("/login")
    public String login(User user, HttpServletRequest request) {

        // Create token based on user name and password
        UsernamePasswordToken token = new UsernamePasswordToken(user.getUsername(), user.getPassword());
        // Get subject authentication subject
        Subject subject = SecurityUtils.getSubject();
        try{
            // Start authentication, this step will jump to our custom realm
            subject.login(token);
            request.getSession().setAttribute("user", user);
            return "success";
        }catch(Exception e){
            e.printStackTrace();
            request.getSession().setAttribute("user", user);
            request.setAttribute("error", "Wrong user name or password!");
            return "login";
        }
    }
}

Let's analyze this login interface in detail. First, a token is created from the username and password passed from the front end; then SecurityUtils is used to obtain the authentication subject. Next, subject.login(token) is called to start authentication, passing in the newly created token. As noted in the comments, this step jumps to our custom realm and enters the doGetAuthenticationInfo method, so now you can see where the token parameter of that method comes from. From there, identity authentication proceeds as analyzed above.

2.4.3 test

Finally, start the project and test:
When the browser requests http://localhost:8080/user/admin, identity authentication is performed. Since we are not logged in yet, Shiro jumps to the /login interface in IndexController, which in turn jumps to the login.html page for us to log in. After logging in as csdn1/123456, we request http://localhost:8080/user/student in the browser; role authentication is performed, and since the role of csdn1 in the database is admin, which matches the configuration, authentication passes. We then request http://localhost:8080/user/teacher; permission authentication is performed, and since the permission of csdn1 in the database is user:*, which satisfies the configured user:create, authentication passes as well.

Next, we click exit; the system logs out and asks us to log in again. We log in as csdn2 and repeat the operations above. Both the role authentication and the permission authentication steps fail, because the role and permission of csdn2 in the database do not match the configuration.

3. summary

This section mainly introduced the integration of the Shiro security framework with Spring Boot. First, it introduced Shiro's three core components and their functions; then it introduced Shiro's identity authentication, role authentication and permission authentication; finally, it walked through the code for integrating Shiro into Spring Boot, designed a set of test flows, and analyzed Shiro's working process and principles step by step, so that readers can experience the whole workflow more intuitively. Shiro is widely used; we hope readers can master it and apply it to real projects.

Course source code download address: Poke me downloading

Lesson 17: integrating Lucence in Spring Boot

1. Lucence and full text search

What is Lucene? Take a look at Baidu Encyclopedia:

Lucene is an open-source library for full-text indexing and search, supported and provided by the Apache Software Foundation. It offers a simple but powerful API for full-text indexing and searching. Lucene is a mature, free, open-source tool for the Java development environment and is currently, and has been in recent years, the most popular free Java information-retrieval library. ——Baidu Encyclopedia

1.1 full text search

The concept of full-text retrieval is mentioned here. Let's first analyze what full-text retrieval is. After understanding full-text retrieval, it's very simple to understand the principle of Lucene.

What is full-text retrieval? Suppose you want to find a string in a file. The most direct idea is to scan from the beginning until you find it. That is practical for small files, but laborious for large ones. The same goes for finding which files contain a given string: scanning a disk of dozens of GB this way is very inefficient.

The data in files is unstructured data, that is, it has no structure to speak of. To solve the efficiency problem above, we first extract part of the information from the unstructured data and reorganize it so that it has some structure, and then search this structured data, which makes searching relatively fast. This is full-text retrieval: the process of building an index first, then searching the index.

1.2 Lucene's indexing method

So how is the index built in Lucene? Suppose there are two articles, as follows:

The content of Article 1 is: Tom lives in Guangzhou, I live in Guangzhou too
The content of Article 2 is: He once lived in Shanghai

The first step is to pass the document to the Tokenizer, which will divide the document into words and remove punctuation and stop words. The so-called stop words refer to words without special meaning, such as a, the, too and so on in English. After segmentation, we get the Token. As follows:

Article 1 results after word segmentation: [Tom] [lives] [Guangzhou] [I] [live] [Guangzhou]
The result of Article 2 after word segmentation: [He] [lives] [Shanghai]

The tokens are then passed to the language processing component. For English, this component generally lowercases the letters, reduces words to their root form (e.g. "lives" to "live"), and restores words to their original form (e.g. "drove" to "drive"). The result is terms (Term). As follows:

Article 1 processed result: [tom] [live] [guangzhou] [i] [live] [guangzhou]
Article 2 processed results: [he] [live] [shanghai]
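As a rough illustration of this normalization step, here is a naive plain-Java sketch (lowercase plus a crude strip-the-trailing-s rule). This is only for intuition; real analyzers use proper stemmers such as the Porter stemmer:

```java
// Naive normalization sketch: lowercase, then strip a trailing "s".
// Real language processors (e.g. the Porter stemmer) are far more
// sophisticated and also restore forms like "drove" -> "drive".
public class NormalizeSketch {

    // Turns a raw token into a (crudely) normalized term
    static String normalize(String token) {
        String t = token.toLowerCase();
        if (t.endsWith("s") && t.length() > 3) {
            t = t.substring(0, t.length() - 1); // "lives" -> "live"
        }
        return t;
    }

    public static void main(String[] args) {
        System.out.println(normalize("Lives"));     // live
        System.out.println(normalize("Guangzhou")); // guangzhou
    }
}
```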

Finally, the obtained words are passed to the index component (Indexer), which is processed to get the following index structure:

Keyword Article number[frequency] Positions
guangzhou 1[2] 3,6
he 2[1] 1
i 1[1] 4
live 1[2],2[1] 2,5,2
shanghai 2[1] 3
tom 1[1] 1

This is the core of Lucene's index structure. The keywords are sorted in character order, so Lucene can quickly locate a keyword with a binary search algorithm. In the implementation, Lucene stores the three columns above as the term dictionary, frequencies and positions files. The dictionary file stores every keyword along with pointers into the frequency and position files, through which the frequency and position information of that keyword can be found.
Searching is the process of binary-searching the dictionary for the term, reading out all article numbers through the pointer into the frequency file, returning the results, and then locating the term within specific articles by position. So building the index the first time may be slow, but afterwards it does not need to be rebuilt for every search, and searching is fast.
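To connect the table above to code, here is a toy plain-Java inverted index over the two example articles: term → list of article numbers, with terms kept sorted so lookups are logarithmic, analogous to Lucene's sorted term dictionary. It skips stemming and positions and is not Lucene's real data structure, just the core idea:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy inverted index: term -> article numbers containing it.
// Terms are lowercased; TreeMap keeps them sorted, mirroring the sorted
// term dictionary idea. Stemming and positions are omitted for brevity.
public class InvertedIndexSketch {

    static Map<String, List<Integer>> index = new TreeMap<>();

    static void addArticle(int articleNo, String text) {
        for (String token : text.toLowerCase().split("[^a-z]+")) {
            if (token.isEmpty()) continue;
            List<Integer> postings = index.computeIfAbsent(token, k -> new ArrayList<>());
            if (!postings.contains(articleNo)) {
                postings.add(articleNo); // record which articles contain the term
            }
        }
    }

    static List<Integer> search(String term) {
        return index.getOrDefault(term.toLowerCase(), new ArrayList<>());
    }

    public static void main(String[] args) {
        addArticle(1, "Tom lives in Guangzhou, I live in Guangzhou too");
        addArticle(2, "He once lived in Shanghai");
        System.out.println(search("guangzhou")); // [1]
        System.out.println(search("in"));        // [1, 2]
    }
}
```

Because this toy skips stemming, "live", "lives" and "lived" are indexed as different terms; the normalization step described earlier is what merges them in a real engine.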

Now that we understand Lucene's indexing principle, let's integrate Lucene in Spring Boot and implement the indexing and search functions.

2. Lucence integration in spring boot

2.1 dependency import

First, you need to import the dependency of Lucene. There are several dependencies, as follows:

<!-- Lucence Core package -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-core</artifactId>
	<version>5.3.1</version>
</dependency>

<!-- Lucene Query resolution package -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-queryparser</artifactId>
	<version>5.3.1</version>
</dependency>

<!-- Regular participle -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-analyzers-common</artifactId>
	<version>5.3.1</version>
</dependency>

<!--Support segmentation highlighting  -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-highlighter</artifactId>
	<version>5.3.1</version>
</dependency>

<!--Support Chinese word segmentation  -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-analyzers-smartcn</artifactId>
	<version>5.3.1</version>
</dependency>

The last dependency supports Chinese word segmentation (English is supported by default). The highlighter dependency is there because at the end I want to do a search and highlight the matched content, simulating what is common on the web today, so it can be applied to real projects.

2.2 quick start

According to the above analysis, there are two steps in full-text retrieval, first index, and then retrieval. So in order to test this process, I create two new java classes, one for indexing and the other for retrieval.

2.2.1 indexing

Let's create a few files ourselves, put them in the D:\lucene\data directory, and create a new Indexer class to implement the indexing function. First, the standard analyzer and the index writer instance are initialized in the constructor.

public class Indexer {

    /**
     * Write index instance
     */
    private IndexWriter writer;

    /**
     * Constructor, instantiating IndexWriter
     * @param indexDir
     * @throws Exception
     */
    public Indexer(String indexDir) throws Exception {
        Directory dir = FSDirectory.open(Paths.get(indexDir));
        //The standard analyzer automatically removes spaces and stop words such as is, a, the
        Analyzer analyzer = new StandardAnalyzer();
        //Match standard word breaker to write index configuration
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        //Instantiate write index object
        writer = new IndexWriter(dir, config);
    }
}

In the constructor, we pass in the path of the folder where the index is stored, build a standard analyzer (for English), and use it to instantiate the index writer. Next we start building the index; the explanations are in the code comments for easier follow-up.

/**
 * Index all files in the specified directory
 * @param dataDir
 * @return
 * @throws Exception
 */
public int indexAll(String dataDir) throws Exception {
    // Get all files under this path
    File[] files = new File(dataDir).listFiles();
    if (null != files) {
        for (File file : files) {
            //Call the indexFile method below to index each file
            indexFile(file);
        }
    }
    //Number of files returned to index
    return writer.numDocs();
}

/**
 * Index the specified file
 * @param file
 * @throws Exception
 */
private void indexFile(File file) throws Exception {
    System.out.println("Path to index file:" + file.getCanonicalPath());
    //Call the following getDocument method to get the document of the file
    Document doc = getDocument(file);
    //Add doc to index
    writer.addDocument(doc);
}

/**
 * Get the document, and then set each field in the document, which is similar to a row of records in the database
 * @param file
 * @return
 * @throws Exception
 */
private Document getDocument(File file) throws Exception {
    Document doc = new Document();
    //Start adding fields
    //Add content
    doc.add(new TextField("contents", new FileReader(file)));
    //Add the file name and save this field in the index file
    doc.add(new TextField("fileName", file.getName(), Field.Store.YES));
    //Add file path
    doc.add(new TextField("fullPath", file.getCanonicalPath(), Field.Store.YES));
    return doc;
}

In this way, the index is established. In this class, we write a main method to test:

public static void main(String[] args) {
        //Path to which index is saved
        String indexDir = "D:\\lucene";
        //Directory of file data to be indexed
        String dataDir = "D:\\lucene\\data";
        Indexer indexer = null;
        int indexedNum = 0;
        //Record index start time
        long startTime = System.currentTimeMillis();
        try {
            // Start building index
            indexer = new Indexer(indexDir);
            indexedNum = indexer.indexAll(dataDir);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (null != indexer) {
                    indexer.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        //Record index end time
        long endTime = System.currentTimeMillis();
        System.out.println("Index time consuming" + (endTime - startTime) + "Millisecond");
        System.out.println("A total index." + indexedNum + "File");
    }

I have two files related to tomcat under D:\lucene\data. After execution, I see the console output:

Path to index file: D:\lucene\data\catalina.properties
Path to index file: D:\lucene\data\logging.properties
Indexing took 882 milliseconds
Indexed 2 file(s) in total

Then we can see some index files in the D:\lucene directory. These files must not be deleted; if they are, the index has to be rebuilt, because without the index nothing can be retrieved.

2.2.2 search content

Now that the indexes of these two files are established, we can write a retrieval program to find specific words in these two files.

public class Searcher {

    public static void search(String indexDir, String q) throws Exception {

        //Get the path to query, that is, the location of the index
        Directory dir = FSDirectory.open(Paths.get(indexDir));
        IndexReader reader = DirectoryReader.open(dir);
        //Building IndexSearcher
        IndexSearcher searcher = new IndexSearcher(reader);
        //The standard analyzer automatically removes spaces and stop words such as is, a, the
        Analyzer analyzer = new StandardAnalyzer();
        //query parser 
        QueryParser parser = new QueryParser("contents", analyzer);
        //Get the query object by parsing the String to be queried. q is the String to be queried passed in
        Query query = parser.parse(q);

        //Record index start time
        long startTime = System.currentTimeMillis();
        //Start query, query the first 10 data, and save the records in docs
        TopDocs docs = searcher.search(query, 10);
        //Record index end time
        long endTime = System.currentTimeMillis();
        System.out.println("matching" + q + "Total time consuming" + (endTime-startTime) + "Millisecond");
        System.out.println("Query to" + docs.totalHits + "Bar record");

        //Take out each query result
        for(ScoreDoc scoreDoc : docs.scoreDocs) {
            //scoreDoc.doc is equivalent to docID, which is used to obtain documents
            Document doc = searcher.doc(scoreDoc.doc);
            //fullPath is a field that we defined when we just set up the index to represent the path. Other content can also be taken, as long as we have a definition when building the index.
            System.out.println(doc.get("fullPath"));
        }
        reader.close();
    }
}

OK, the search code is done; each step is explained in the comments in the code. Let's write a main method to test it:

public static void main(String[] args) {
    String indexDir = "D:\\lucene";
    //Query this string
    String q = "security";
    try {
        search(indexDir, q);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Check the security string, and execute the following to see the result printed by the console:

Matching 'security' took 23 milliseconds
Found 1 record(s)
D:\lucene\data\catalina.properties

As you can see, it took 23 milliseconds to find the string security across the two files, and the matching file path is printed. The code above is commented in detail; it is fairly complete and can serve as a basis for production use.

2.3 practice of Chinese segmentation retrieval

The above has written the code of index and retrieval, but in the actual project, we often combine the page to display some query results, such as I want to find a keyword, after finding it, display the relevant information points, and highlight the keyword of the query, etc. This kind of demand is very common in the actual project, and most websites will have this effect. So in this section, we use Lucene to achieve this effect.

2.3.1 Chinese word segmentation

We create a new ChineseIndexer class to build a Chinese index. The process is the same as for the English index; the difference is that a Chinese analyzer is used. In addition, instead of reading files, we build the index from strings, because in real projects we mostly get some text strings and then query related content by keywords. The code is as follows:

public class ChineseIndexer {

    /**
     * Where to store index
     */
    private Directory dir;

    //Prepare the data for testing
    //Used to identify documents
    private Integer ids[] = {1, 2, 3};
    private String citys[] = {"Shanghai", "Nanjing", "Qingdao"};
    private String descs[] = {
            "Shanghai is a prosperous city.",
            "Nanjing is a cultural city, Nanjing for short, is the capital of Jiangsu Province, located in the eastern part of China, the lower reaches of the Yangtze River, near the river and offshore. The city has 11 districts, with a total area of 6597 square kilometers. In 2013, the built-up area is 752.83 Square kilometers, permanent population 818.78 10000, including 659 urban population.1 Ten thousand people.[1-4] "Nanjing has a history of more than 6000 years of civilization, nearly 2600 years of city construction and nearly 500 years of capital construction. It is one of the four ancient capitals of China, known as the "ancient capital of Six Dynasties" and "capital of ten dynasties". It is an important birthplace of Chinese civilization. It has blessed zhengshuo of China for several times in history. It has long been the political, economic and cultural center of South China, with a thick history Heavy cultural heritage and rich historical remains.[5-7] Nanjing is an important science and education center of the country. Since ancient times, it has been a city that worships culture and education. It has the reputation of "the world's cultural hub" and "the first school in the Southeast". As of 2013, there are 75 colleges and universities in Nanjing, 8 of which are 211 colleges and universities, second only to Beijing and Shanghai; 25 national key laboratories, 169 national key disciplines, 83 academicians of the two academies, ranking the third in China.[8-10] . ",
            "Qingdao is a beautiful city."
    };

    /**
     * Generate index
     * @param indexDir
     * @throws Exception
     */
    public void index(String indexDir) throws Exception {
        dir = FSDirectory.open(Paths.get(indexDir));
        // Call getWriter to get IndexWriter object first
        IndexWriter writer = getWriter();
        for(int i = 0; i < ids.length; i++) {
            Document doc = new Document();
            // Index the above data, and identify them with id, city and desc respectively
            doc.add(new IntField("id", ids[i], Field.Store.YES));
            doc.add(new StringField("city", citys[i], Field.Store.YES));
            doc.add(new TextField("desc", descs[i], Field.Store.YES));
            //Add document
            writer.addDocument(doc);
        }
        //The documents are only actually written to the index when the writer is closed
        writer.close();
    }

    /**
     * Get IndexWriter instance
     * @return
     * @throws Exception
     */
    private IndexWriter getWriter() throws Exception {
        //Use Chinese word breaker
        SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer();
        //Match Chinese word breaker to write index configuration
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        //Instantiate write index object
        IndexWriter writer = new IndexWriter(dir, config);
        return writer;
    }

    public static void main(String[] args) throws Exception {
        new ChineseIndexer().index("D:\\lucene2");
    }
}

Here id, city and desc represent the id, city name and city description respectively, and serve as the fields of the index. Later, when retrieving, we mainly fetch the city description. The description of Nanjing is deliberately long, because in the retrieval below different parts of it will be returned depending on the keyword; there is a notion of scoring (weight) involved.
Then execute the main method to save the index to D:\lucene2.

2.3.2 Chinese word segmentation query

The Chinese query code is similar in logic to the default query, with some differences: we need to mark the matched keywords red and bold, and we need to compute a best-scoring fragment. What does that mean? For example, searching for "Nanjing culture" versus "Nanjing civilization" should return different fragments depending on where the keywords appear; we will test this later. Let's look at the code and comments:

public class ChineseSearch {

    private static final Logger logger = LoggerFactory.getLogger(ChineseSearch.class);

    public static List<String> search(String indexDir, String q) throws Exception {

        //Get the path to query, that is, the location of the index
        Directory dir = FSDirectory.open(Paths.get(indexDir));
        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        //Use Chinese word breaker
        SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer();
        //Query resolver initiated by Chinese word breaker
        QueryParser parser = new QueryParser("desc", analyzer);
        //Get the query object by parsing the String to be queried
        Query query = parser.parse(q);

        //Record index start time
        long startTime = System.currentTimeMillis();
        //Start query, query the first 10 data, and save the records in docs
        TopDocs docs = searcher.search(query, 10);
        //Record index end time
        long endTime = System.currentTimeMillis();
        logger.info("Matching '{}' took {} ms in total", q, (endTime - startTime));
        logger.info("Found {} matching records", docs.totalHits);

        //If no arguments are given, the default format simply bolds the keyword with <B></B>; here we make it red and bold
        SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<b><font color=red>","</font></b>");
        //Calculate the score according to the query object, and the highest score of a query result will be initialized
        QueryScorer scorer = new QueryScorer(query);
        //Calculate a segment based on this score
        Fragmenter fragmenter = new SimpleSpanFragmenter(scorer);
        //Highlight the keywords in this clip in the highlighted format initialized above
        Highlighter highlighter = new Highlighter(simpleHTMLFormatter, scorer);
        //Set the clip to display
        highlighter.setTextFragmenter(fragmenter);

        //Take out each query result
        List<String> list = new ArrayList<>();
        for(ScoreDoc scoreDoc : docs.scoreDocs) {
            //scoreDoc.doc is equivalent to docID, which is used to obtain documents
            Document doc = searcher.doc(scoreDoc.doc);
            logger.info("city:{}", doc.get("city"));
            logger.info("desc:{}", doc.get("desc"));
            String desc = doc.get("desc");

            //Display highlights
            if(desc != null) {
                TokenStream tokenStream = analyzer.tokenStream("desc", new StringReader(desc));
                String summary = highlighter.getBestFragment(tokenStream, desc);
                logger.info("Highlighting desc:{}", summary);
                list.add(summary);
            }
        }
        reader.close();
        return list;
    }
}

Each step is commented in detail, so I won't elaborate further here. Next, let's test the effect.

2.3.3 test

Here we use Thymeleaf to write a simple page that displays the retrieved data with highlighting. In the controller, we specify the index directory and the query string, as follows:

@Controller
@RequestMapping("/lucene")
public class IndexController {

    @GetMapping("/test")
    public String test(Model model) {
        // Directory of index
        String indexDir = "D:\\lucene2";
        // Characters to query
//        String q = "Nanjing civilization";
        String q = "Nanjing culture";
        try {
            List<String> list = ChineseSearch.search(indexDir, q);
            model.addAttribute("list", list);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "result";
    }
}

The controller returns the result.html page directly, which mainly displays the data in the model.

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
<div th:each="desc : ${list}">
    <div th:utext="${desc}"></div>
</div>
</body>
</html>

Note that th:text cannot be used here, otherwise the HTML tags in the string will be escaped and not rendered on the page; th:utext outputs them unescaped. Next, start the service and open http://localhost:8080/lucene/test in the browser to test the effect. First we search for "Nanjing culture".

Then change the search keyword in the controller to "Nanjing civilization" and see which parts are hit.

As you can see, different keywords yield different scored fragments; in other words, different keywords hit content at different positions, and the keywords are then highlighted in the format we configured. The results also show that Lucene can intelligently segment the query and match keywords, which is very useful in real projects.

3. summary

This lesson first analyzed the theory of full-text retrieval in detail, and then systematically described the steps for integrating Lucene into Spring Boot. It first gave you an intuitive feel for how Lucene builds an index and performs retrieval, and then demonstrated Lucene's broad application in full-text retrieval through a concrete Chinese-retrieval example. Lucene is not difficult; there are just many steps. The code does not need to be memorized; you can adapt it to your own project as needed.

Course source code download address: click here to download

Lesson 18: Spring Boot builds the architecture in the actual project development

In the previous lessons, I mainly explained some commonly used techniques in Spring Boot. They may not all be used in a real project, because different projects use different technologies, but I hope you have mastered how to use them and can extend them according to the needs of an actual project.

I don't know whether you are familiar with microcontrollers. A microcontroller has a "minimum system": once the minimum system is built, it can be extended on top of that base. What we will do in this lesson is build a "Spring Boot minimum system architecture"; with this architecture in place, we can extend it according to actual needs.

To build an environment from scratch, we need to consider several points: a unified encapsulated data structure, an interface that can be tested online, JSON handling, use of a template engine (this article does not cover it, because most projects now separate the front end from the back end; but considering that some projects do not, Thymeleaf is also included in the source code), integration of the persistence layer, interceptors (also optional), and global exception handling. Generally speaking, that covers most of a Spring Boot project environment; it can then be extended according to the specific situation.

Combining the previous lessons with the points above, this lesson will lead you to build a Spring Boot architecture usable in real project development. The overall project structure is shown in the figure below. While studying, you can follow along with my source code for better results.

1. Unified data encapsulation

Because the type of the encapsulated JSON data is uncertain, we need generics when defining the unified JSON structure. The unified JSON structure contains the data, a status code, and a prompt message. Constructors can be added according to actual business needs; generally speaking, there should be a default return structure and a caller-specified one. As follows:

/**
 * Unified return object
 * @author shengwu ni
 * @param <T>
 */
public class JsonResult<T> {

    private T data;
    private String code;
    private String msg;

    /**
     * If no data is returned, the default status code is 0, and the prompt message is: operation succeeded!
     */
    public JsonResult() {
        this.code = "0";
        this.msg = "Operation succeeded!";
    }

    /**
     * If no data is returned, the status code and prompt information can be specified manually
     * @param code
     * @param msg
     */
    public JsonResult(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    /**
     * When data is returned, the status code is 0, and the default prompt is: operation succeeded!
     * @param data
     */
    public JsonResult(T data) {
        this.data = data;
        this.code = "0";
        this.msg = "Operation succeeded!";
    }

    /**
     * There is data return, status code is 0, and prompt information is specified manually
     * @param data
     * @param msg
     */
    public JsonResult(T data, String msg) {
        this.data = data;
        this.code = "0";
        this.msg = msg;
    }
    
    /**
     * Use custom exception as parameter to pass status code and prompt information
     * @param msgEnum
     */
    public JsonResult(BusinessMsgEnum msgEnum) {
        this.code = msgEnum.code();
        this.msg = msgEnum.msg();
    }

    // Omit get and set methods
}

You can adjust the fields in this unified structure according to what your project actually needs.
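To make the usage concrete, here is a minimal standalone sketch of how the wrapper is typically constructed. The class is trimmed to its fields so the example runs on its own; in the real project, JsonResult lives in its own file with full getters and setters, and a controller method would return these objects directly.

```java
// Minimal standalone sketch of the JsonResult wrapper and its typical usage.
// The class here is a trimmed copy for illustration only.
public class JsonResultDemo {

    static class JsonResult<T> {
        private T data;
        private String code;
        private String msg;

        // Default: success, no data
        JsonResult() { this.code = "0"; this.msg = "Operation succeeded!"; }
        // Success with data
        JsonResult(T data) { this(); this.data = data; }
        // Manually specified status code and message
        JsonResult(String code, String msg) { this.code = code; this.msg = msg; }

        T getData() { return data; }
        String getCode() { return code; }
        String getMsg() { return msg; }
    }

    public static void main(String[] args) {
        // A controller would typically build and return these objects
        JsonResult<String> ok = new JsonResult<>("some payload");
        JsonResult<Void> error = new JsonResult<>("500", "System exception");

        System.out.println(ok.getCode() + " " + ok.getData());      // 0 some payload
        System.out.println(error.getCode() + " " + error.getMsg()); // 500 System exception
    }
}
```

Once serialized by Jackson, the fields become the `data`, `code`, and `msg` keys of the JSON response.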

2. json processing

There are many JSON processing tools, such as Alibaba's fastjson. However, fastjson cannot convert null values of unknown types into empty strings, which may be a defect of fastjson itself, and its extensibility is not great, although it is convenient to use and widely adopted. In this lesson we mainly integrate Jackson, which ships with Spring Boot. The main task is to configure Jackson's null handling, after which it can be used throughout the project.

/**
 * jacksonConfig
 * @author shengwu ni
 */
@Configuration
public class JacksonConfig {
    @Bean
    @Primary
    @ConditionalOnMissingBean(ObjectMapper.class)
    public ObjectMapper jacksonObjectMapper(Jackson2ObjectMapperBuilder builder) {
        ObjectMapper objectMapper = builder.createXmlMapper(false).build();
        objectMapper.getSerializerProvider().setNullValueSerializer(new JsonSerializer<Object>() {
            @Override
            public void serialize(Object o, JsonGenerator jsonGenerator, SerializerProvider serializerProvider) throws IOException {
                jsonGenerator.writeString("");
            }
        });
        return objectMapper;
    }
}

We will not test it here; we will test everything together once Swagger2 is configured.

3. swagger2 online adjustable interface

With Swagger, developers no longer need to hand interface documents to others; just give them the Swagger address and the online API documentation is available. Callers of an interface can also test its data online, and likewise, developers can use the online documentation to test interface data during development, which is very convenient. To use Swagger, you need to configure it:

/**
 * swagger To configure
 * @author shengwu ni
 */
@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket createRestApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                // Specify how to build the details of the api document: apiInfo()
                .apiInfo(apiInfo())
                .select()
                // Specify the package path to generate the api interface. Here, take the controller as the package path to generate all interfaces in the controller
                .apis(RequestHandlerSelectors.basePackage("com.itcodai.course18.controller"))
                .paths(PathSelectors.any())
                .build();
    }

    /**
     * Build api documentation details
     * @return
     */
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                // Set page title
                .title("Spring Boot Build the framework of development in the actual project")
                // Set interface description
                .description("Learn with brother Wu Spring Boot The eighteenth lesson")
                // Set contact
                .contact("Ni Shengwu," + "WeChat official account: programmers' private dishes")
                // Set version
                .version("1.0")
                // Build
                .build();
    }
}

At this point you can run a first test: write a Controller with a static interface to verify everything integrated above.

@RestController
@Api(value = "User information interface")
public class UserController {

    @Resource
    private UserService userService;

    @GetMapping("/getUser/{id}")
    @ApiOperation(value = "Get user information according to user unique ID")
    public JsonResult<User> getUserInfo(@PathVariable @ApiParam(value = "User unique identification") Long id) {
        User user = new User(id, "Ni Sheng Wu", "123456");
        return new JsonResult<>(user);
    }
}

Then start the project, enter localhost:8080/swagger-ui.html in the browser to see the document page of swagger interface, and call the above interface to see the returned json data.

4. Persistence layer integration

Every project needs a persistence layer to interact with the database. Here we mainly integrate MyBatis. To integrate MyBatis, first configure it in application.yml.

# Service port number
server:
  port: 8080

# Database address
datasource:
  url: localhost:3306/blog_test

spring:
  datasource: # Database configuration
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://${datasource.url}?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true&autoReconnect=true&failOverReadOnly=false&maxReconnects=10
    username: root
    password: 123456
    hikari:
      maximum-pool-size: 10 # Maximum number of connection pools
      max-lifetime: 1770000

mybatis:
  # Specify the package of alias settings as all entities
  type-aliases-package: com.itcodai.course18.entity
  configuration:
    map-underscore-to-camel-case: true # Hump nomenclature
  mapper-locations: # mapper map file location
    - classpath:mapper/*.xml

With the configuration done, let's write the DAO layer. Here we mostly use annotations, because they are convenient; of course, XML can also be used, or even both. This lesson integrates via annotations; for the XML approach, see the earlier lessons. In practice, it depends on the project.

public interface UserMapper {

    @Select("select * from user where id = #{id}")
    @Results({
            @Result(property = "username", column = "user_name"),
            @Result(property = "password", column = "password")
    })
    User getUser(Long id);

    @Select("select * from user where id = #{id} and user_name=#{name}")
    User getUserByIdAndName(@Param("id") Long id, @Param("name") String username);

    @Select("select * from user")
    List<User> getAll();
}

I will not include the service-layer code in the article; you can study it in my source code. This section mainly leads you to build an empty Spring Boot architecture. Finally, don't forget to add the mapper-scan annotation @MapperScan("com.itcodai.course18.dao") to the startup class.
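As a hedged illustration of what the omitted service layer might look like, here is a self-contained sketch: a hypothetical UserService that simply delegates to the UserMapper. The MyBatis mapper is replaced by an in-memory stub so the code runs on its own; in the real project the mapper is the annotated interface above, and the service would carry @Service and inject the mapper with @Resource.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service-layer sketch; the real classes live in the course
// source code and carry Spring annotations (@Service, @Resource).
public class UserServiceSketch {

    static class User {
        final Long id; final String username; final String password;
        User(Long id, String username, String password) {
            this.id = id; this.username = username; this.password = password;
        }
    }

    // Stand-in for the MyBatis mapper interface shown above
    interface UserMapper { User getUser(Long id); }

    // Service layer: business logic sits here, persistence is delegated
    static class UserService {
        private final UserMapper userMapper;
        UserService(UserMapper userMapper) { this.userMapper = userMapper; }
        User getUser(Long id) { return userMapper.getUser(id); }
    }

    public static void main(String[] args) {
        // In-memory stand-in for the user table
        Map<Long, User> table = new HashMap<>();
        table.put(1L, new User(1L, "Ni Shengwu", "123456"));

        UserService service = new UserService(table::get);
        System.out.println(service.getUser(1L).username); // Ni Shengwu
    }
}
```

The point of the extra layer is that the controller depends only on the service, so the persistence technology can change without touching the web layer.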

5. interceptor

Interceptors are used very often in projects (though not in every one), for example to intercept certain URLs and perform checks or processing. At the same time, common static pages and the Swagger pages must be excluded; these static resources should not be intercepted. First, define a custom interceptor.

public class MyInterceptor implements HandlerInterceptor {

    private static final Logger logger = LoggerFactory.getLogger(MyInterceptor.class);

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

        logger.info("Executed before the handler method (before the Controller method is called)");
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        logger.info("Executed after the handler method (after the Controller method is called), but before the view is rendered");
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        logger.info("The whole request has been processed and the DispatcherServlet has rendered the view; cleanup work can be done here");
    }
}

Then add the custom interceptor to the interceptor configuration.

@Configuration
public class MyInterceptorConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Implementing WebMvcConfigurer will not cause static resources to be blocked
        registry.addInterceptor(new MyInterceptor())
                // Block all URLs
                .addPathPatterns("/**")
                // Release swagger
                .excludePathPatterns("/swagger-resources/**");
    }
}

In Spring Boot, we usually store some static resources in the following directory:

classpath:/static
classpath:/public
classpath:/resources
classpath:/META-INF/resources

The /** configured above intercepts all URLs, but because we implement the WebMvcConfigurer interface, Spring Boot will not intercept the static resources in the directories above. However, the Swagger pages we usually visit would be intercepted, so we need to exclude them. The Swagger page lives under the /swagger-resources path, so we exclude all files under that directory.

Then open the Swagger page in the browser; if it displays normally, the exclusion works. At the same time, the execution order of the interceptor callbacks can be confirmed from the logs printed in the background.

6. Global exception handling

Global exception handling is needed in every project. Specific exceptions may receive specific handling, but for those that are not handled individually, there is generally a unified global handler. Before writing the exception handling, it is best to maintain an enum of exception messages, dedicated to storing exception prompt information. As follows:

public enum BusinessMsgEnum {
    /** Parameter exception */
    PARMETER_EXCEPTION("102", "Parameter exception!"),
    /** Waiting for timeout */
    SERVICE_TIME_OUT("103", "Service call timeout!"),
    /** Too big parameter */
    PARMETER_BIG_EXCEPTION("102", "The number of pictures entered cannot exceed 50!"),
    /** 500 : exception occurred */
    UNEXPECTED_EXCEPTION("500", "System exception, please contact administrator!");

    /**
     * Message code
     */
    private String code;
    /**
     * Message content
     */
    private String msg;

    private BusinessMsgEnum(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    public String code() {
        return code;
    }

    public String msg() {
        return msg;
    }

}

In the unified global exception handler, we usually handle custom business exceptions first, then some common system exceptions, and finally add a catch-all handler for Exception.

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    /**
     * Intercept business exceptions and return business exception information
     * @param ex
     * @return
     */
    @ExceptionHandler(BusinessErrorException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleBusinessError(BusinessErrorException ex) {
        String code = ex.getCode();
        String message = ex.getMessage();
        return new JsonResult(code, message);
    }

    /**
     * Null pointer exception
     * @param ex NullPointerException
     * @return
     */
    @ExceptionHandler(NullPointerException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleTypeMismatchException(NullPointerException ex) {
        logger.error("Null pointer exception: {}", ex.getMessage());
        return new JsonResult("500", "Null pointer exception");
    }

    /**
     * Unexpected system exception
     * @param ex
     * @return
     */
    @ExceptionHandler(Exception.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleUnexpectedServer(Exception ex) {
        logger.error("System exception:", ex);
        return new JsonResult(BusinessMsgEnum.UNEXPECTED_EXCEPTION);
    }

}

BusinessErrorException is a custom business exception that can extend RuntimeException; see my source code for details, as the code is not pasted in the article.
There is a testException method in UserController for testing the global exception handling. Open the Swagger page and call that interface; you will see the message returned to the user: "System exception, please contact administrator!". Of course, in practice, different messages should be shown depending on the business.
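For illustration, here is a minimal sketch of what BusinessErrorException could look like, inferred from how the global handler uses it (it calls getCode() and getMessage()); the actual class is in the course source code and may differ in detail.

```java
// Hypothetical sketch of the custom business exception, inferred from the
// global handler's usage; the real class is in the course source code.
public class BusinessErrorExceptionDemo {

    static class BusinessErrorException extends RuntimeException {
        private final String code;
        BusinessErrorException(String code, String msg) {
            super(msg);       // message is exposed via getMessage()
            this.code = code; // status code read by the global handler
        }
        String getCode() { return code; }
    }

    public static void main(String[] args) {
        try {
            // A service method would throw this when a business rule fails
            throw new BusinessErrorException("102", "Parameter exception!");
        } catch (BusinessErrorException ex) {
            // The global handler would turn this into new JsonResult(code, message)
            System.out.println(ex.getCode() + ": " + ex.getMessage()); // 102: Parameter exception!
        }
    }
}
```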

7. summary

In this article we quickly built an empty Spring Boot architecture usable in a project, covering a unified encapsulated data structure, a testable interface, JSON handling, template-engine usage (reflected in the code), persistence-layer integration, interceptors, and global exception handling. Generally speaking, that covers most of a Spring Boot project environment; it can then be extended according to the specific situation.

Course source code download address: click here to download




Posted on Sun, 16 Feb 2020 22:07:19 -0800 by thom2002