MyBatis Source Analysis: Level 1 Cache and Level 2 Cache (the most detailed walkthrough on the web, bar none)

ORM frameworks like MyBatis and Hibernate encapsulate most JDBC operations, greatly simplifying our work with the database.

In a real project we noticed that when the same statement is queried twice within one transaction, the second query does not hit the database but returns the result directly. This is what we call caching.

MyBatis Cache Levels

Level 1 Cache

  • MyBatis's level 1 cache (also called the local cache) is a HashMap-based local cache implemented by org.apache.ibatis.cache.impl.PerpetualCache. Its scope is the SqlSession. The level 1 cache is enabled by default and cannot be turned off.
  • When the same SQL query is executed twice in the same SqlSession, the result of the first execution is written to the cache; the second execution reads the data directly from the cache instead of hitting the database. This reduces database access and improves query efficiency.
  • The PerpetualCache-backed HashMap has Session scope: the PerpetualCache object lives in the localCache field of the Executor inside the SqlSession. When the Session is flushed or closed, all caches in that Session are emptied.

Level 2 Cache

  • The level 2 cache works the same way as the level 1 cache. By default it also uses PerpetualCache with HashMap storage, but its scope is the Mapper (namespace). Each Mapper has one Cache object, which is stored in the Configuration and shared by all MappedStatements of that Mapper. The storage backend can be customized, for example with Ehcache.
  • It is a Mapper-level cache, defined in the <cache/> tag of the Mapper file, and it must be explicitly enabled.

The diagram below describes the relationship between the level 1 cache and the level 2 cache.

CacheKey

MyBatis introduces caching to improve query efficiency and reduce database pressure. Now that we know MyBatis has caching, have you thought about what the key and value in the cache actually are? The value is easy to answer: it is simply the query result of the SQL. But what is the key? Is it a string, or something else? If it is a string, the first thing that comes to mind is using the SQL statement itself as the key. But that is incorrect. For example:

SELECT * FROM user where id > ?

The results of id > 1 and id > 10 may differ, so we cannot simply use the SQL statement as the key. This shows that runtime parameters affect the query result, so the key must cover the runtime parameters. In addition, paging can also change the result, so the key should include the paging parameters as well. In short, a plain SQL string will not do as a key; we need a composite object that covers every factor that can affect the query result. In MyBatis, that composite object is the CacheKey. Let's look at its definition.

public class CacheKey implements Cloneable, Serializable {

    private static final int DEFAULT_MULTIPLYER = 37;
    private static final int DEFAULT_HASHCODE = 17;

    // Multiplier, default 37
    private final int multiplier;
    // hashCode of the CacheKey, combining all influence factors
    private int hashcode;
    // Checksum
    private long checksum;
    // Number of influence factors
    private int count;
    // List of influence factors
    private List<Object> updateList;
    
    public CacheKey() {
        this.hashcode = DEFAULT_HASHCODE;
        this.multiplier = DEFAULT_MULTIPLYER;
        this.count = 0;
        this.updateList = new ArrayList<Object>();
    }
    
    /** Every call to update() mixes a new influence factor into the calculation.
     *  hashcode and checksum grow more complex and more random as factors are added,
     *  which lowers the collision rate and spreads CacheKeys more evenly in the cache.
     */
    public void update(Object object) {
        int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);
        // Increment the factor count
        count++;
        // Accumulate the checksum
        checksum += baseHashCode;
        // Update baseHashCode
        baseHashCode *= count;

        // Recompute the hashCode
        hashcode = multiplier * hashcode + baseHashCode;

        // Save the influence factor
        updateList.add(object);
    }
    
    /**
     *  CacheKey is ultimately used as a key in a HashMap,
     *  so it must override equals and hashCode.
     */
    @Override
    public boolean equals(Object object) {
        // Same reference?
        if (this == object) {
            return true;
        }
        // Is object a CacheKey at all?
        if (!(object instanceof CacheKey)) {
            return false;
        }
        final CacheKey cacheKey = (CacheKey) object;

        // Same hashCode?
        if (hashcode != cacheKey.hashcode) {
            return false;
        }
        // Same checksum?
        if (checksum != cacheKey.checksum) {
            return false;
        }
        // Same count?
        if (count != cacheKey.count) {
            return false;
        }

        // If all the quick checks pass, compare the influence factors one by one
        for (int i = 0; i < updateList.size(); i++) {
            Object thisObject = updateList.get(i);
            Object thatObject = cacheKey.updateList.get(i);
            if (!ArrayUtil.equals(thisObject, thatObject)) {
                return false;
            }
        }
        return true;
    }

    @Override
    public int hashCode() {
        // Return the precomputed hashcode
        return hashcode;
    }
}

As new influence factors join the calculation, hashcode and checksum become increasingly complex and random, which reduces collisions and distributes CacheKeys more evenly in the cache. Because a CacheKey is ultimately stored as a HashMap key, it must override equals and hashCode.
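To make this concrete, here is a minimal sketch of my own (not from the original source walkthrough) showing that two CacheKeys built from the same influence factors are equal, while a different factor produces a different key:

import org.apache.ibatis.cache.CacheKey;

public class CacheKeyDemo {
    public static void main(String[] args) {
        CacheKey a = new CacheKey();
        a.update("mybatis.BlogMapper.queryById"); // statement id
        a.update(1);                              // runtime parameter

        CacheKey b = new CacheKey();
        b.update("mybatis.BlogMapper.queryById");
        b.update(1);

        CacheKey c = new CacheKey();
        c.update("mybatis.BlogMapper.queryById");
        c.update(10);                             // different parameter

        System.out.println(a.equals(b)); // true  -> same cache entry
        System.out.println(a.equals(c)); // false -> different cache entry
    }
}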

Level 1 Cache Source Analysis

Test of Level 1 Cache

Same session query

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog)session.selectOne("queryById",1);
        Blog blog2 = (Blog)session.selectOne("queryById",1);
    } finally {
        session.close();
    }
}

Conclusion: only one DB query was made

Two sessions query separately

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    SqlSession session1 = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog)session.selectOne("queryById",17);
        Blog blog2 = (Blog)session1.selectOne("queryById",17);
    } finally {
        session.close();
    }
}

Conclusion: Two DB queries were made

Same session, query again after update

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog)session.selectOne("queryById",17);
        blog.setName("llll");
        session.update("updateBlog",blog);
        
        Blog blog2 = (Blog)session.selectOne("queryById",17);
    } finally {
        session.close();
    }
}

Conclusion: Two DB queries were made

Summary: with the level 1 cache, identical queries in the same SqlSession are cached, and the cache is cleared after any insert/update/delete operation.
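Incidentally, if session-scoped caching is undesirable, MyBatis lets you shrink the level 1 cache to statement scope via the localCacheScope setting; as we will see in BaseExecutor.query() below, the local cache is then cleared after every statement. A minimal mybatis-config.xml fragment:

<settings>
    <!-- SESSION (the default): the cache lives for the whole SqlSession.
         STATEMENT: the local cache is cleared after each statement,
         effectively disabling level 1 caching across calls. -->
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>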

Create Cache Object PerpetualCache

Let's review the process of creating a SqlSession

SqlSession session = sessionFactory.openSession();

public SqlSession openSession() {
    return openSessionFromDataSource(configuration.getDefaultExecutorType(), null, false);
}

private SqlSession openSessionFromDataSource(ExecutorType execType, TransactionIsolationLevel level, boolean autoCommit) {
    Transaction tx = null;
    try {
        Environment environment = configuration.getEnvironment();
        TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
        tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
        // Create the SQL executor
        Executor executor = configuration.newExecutor(tx, execType);
        return new DefaultSqlSession(configuration, executor, autoCommit);
    } catch (Exception e) {
        closeTransaction(tx);
        throw ExceptionFactory.wrapException("Error opening session.  Cause: " + e, e);
    } finally {
        ErrorContext.instance().reset();
    }
}

public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
    executorType = executorType == null ? defaultExecutorType : executorType;
    executorType = executorType == null ? ExecutorType.SIMPLE : executorType;
    Executor executor;
    if (ExecutorType.BATCH == executorType) {
        executor = new BatchExecutor(this, transaction);
    } else if (ExecutorType.REUSE == executorType) {
        executor = new ReuseExecutor(this, transaction);
    } else {
        // SimpleExecutor is created by default
        executor = new SimpleExecutor(this, transaction);
    }

    if (cacheEnabled) {
        // With the level 2 cache enabled, SimpleExecutor is decorated with CachingExecutor
        executor = new CachingExecutor(executor);
    }

    return (Executor) interceptorChain.pluginAll(executor);
}

public SimpleExecutor(Configuration configuration, Transaction transaction) {
    super(configuration, transaction);
}

protected BaseExecutor(Configuration configuration, Transaction transaction) {
    this.transaction = transaction;
    this.deferredLoads = new ConcurrentLinkedQueue();
    //Create the cache object. PerpetualCache is not thread safe,
    //but SqlSession and Executor objects are normally accessed by a single thread and destroyed right after use, i.e. session.close();
    this.localCache = new PerpetualCache("LocalCache");
    this.localOutputParameterCache = new PerpetualCache("LocalOutputParameterCache");
    this.closed = false;
    this.configuration = configuration;
    this.wrapper = this;
}

The code above is mostly a recap (see my earlier posts for details). The point is: a DefaultSqlSession holds a SimpleExecutor, the SimpleExecutor holds a PerpetualCache, the level 1 cache data lives in that PerpetualCache object, and the PerpetualCache is emptied when the SqlSession closes.

Level 1 cache implementation

Let's see how the query method in BaseExecutor implements the level 1 cache. With caching enabled, the executor is a CachingExecutor by default, which delegates to BaseExecutor.

CachingExecutor

public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameter);
    // Generate a key from the SQL and its runtime parameters;
    // the same SQL with different parameters yields different keys
    CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
    return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
}

@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
    throws SQLException {
    // This is the level 2 cache query; we'll come back to it later
    Cache cache = ms.getCache();
    if (cache != null) {
        flushCacheIfRequired(ms);
        if (ms.isUseCache() && resultHandler == null) {
            ensureNoOutParams(ms, parameterObject, boundSql);
            @SuppressWarnings("unchecked")
            List<E> list = (List<E>) tcm.getObject(cache, key);
            if (list == null) {
                list = delegate.<E> query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
                tcm.putObject(cache, key, list); // issue #578 and #116
            }
            return list;
        }
    }
    
    // Without a level 2 cache we come straight here;
    // the implementation is BaseExecutor.query()
    return delegate.<E> query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

As mentioned above, MyBatis calls createCacheKey to build a CacheKey before it touches the level 1 cache. Let's look at the logic of createCacheKey:

public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
    if (closed) {
        throw new ExecutorException("Executor was closed.");
    }
    // Create the CacheKey object
    CacheKey cacheKey = new CacheKey();
    // Feed in the id of the MappedStatement as an influence factor
    cacheKey.update(ms.getId());
    // RowBounds is used for paging; its two fields also join the calculation
    cacheKey.update(rowBounds.getOffset());
    cacheKey.update(rowBounds.getLimit());
    // Feed in the SQL statement itself
    cacheKey.update(boundSql.getSql());
    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
    TypeHandlerRegistry typeHandlerRegistry = ms.getConfiguration().getTypeHandlerRegistry();
    for (ParameterMapping parameterMapping : parameterMappings) {
        if (parameterMapping.getMode() != ParameterMode.OUT) {
            // Runtime parameter
            Object value;    
            // This chunk resolves the runtime parameter for each #{xxx}
            // placeholder in the SQL; analyzed in an earlier post, so skipped here
            String propertyName = parameterMapping.getProperty();
            if (boundSql.hasAdditionalParameter(propertyName)) {
                value = boundSql.getAdditionalParameter(propertyName);
            } else if (parameterObject == null) {
                value = null;
            } else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
                value = parameterObject;
            } else {
                MetaObject metaObject = configuration.newMetaObject(parameterObject);
                value = metaObject.getValue(propertyName);
            }
            
            // Feed the runtime parameter into the calculation
            cacheKey.update(value);
        }
    }
    if (configuration.getEnvironment() != null) {
        // Feed in the Environment id as well
        cacheKey.update(configuration.getEnvironment().getId());
    }
    return cacheKey;
}

As shown above, many influence factors join the CacheKey calculation: the MappedStatement's id, the SQL statement, the paging parameters, the runtime parameters, and the Environment's id. By mixing all of these into the key, different query requests can be distinguished reliably, so we can loosely think of a CacheKey as the id of a query request. With a CacheKey in hand, we can read and write the cache.
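As a quick illustration (my own sketch, assuming a sqlSessionFactory set up as in the earlier tests), the same statement queried with different RowBounds produces different CacheKeys, so each paging window is cached separately:

import org.apache.ibatis.session.RowBounds;
import org.apache.ibatis.session.SqlSession;

public class RowBoundsCacheDemo {
    public static void main(String[] args) {
        SqlSession session = sqlSessionFactory.openSession();
        try {
            // Same statement and parameter, different paging windows:
            // offset/limit join the CacheKey, so these are two distinct
            // cache entries and both hit the DB on first execution.
            session.selectList("queryById", 17, new RowBounds(0, 10));
            session.selectList("queryById", 17, new RowBounds(10, 10));
        } finally {
            session.close();
        }
    }
}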

SimpleExecutor(BaseExecutor)

@SuppressWarnings("unchecked")
@Override
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
    if (closed) {
        throw new ExecutorException("Executor was closed.");
    }
    if (queryStack == 0 && ms.isFlushCacheRequired()) {
        clearLocalCache();
    }
    List<E> list;
    try {
        queryStack++;
        // Here: first try to fetch the result for this CacheKey from localCache
        list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
        if (list != null) {
            handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
        } else {
            // Cache miss: query from the DB
            list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
        }
    } finally {
        queryStack--;
    }
    if (queryStack == 0) {
        for (DeferredLoad deferredLoad : deferredLoads) {
            deferredLoad.load();
        }
        deferredLoads.clear();
        if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
            clearLocalCache();
        }
    }
    return list;
}

BaseExecutor.queryFromDatabase()

Let's first look at the cache-miss case and see how the query result gets placed into the cache.

private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    List<E> list;
    localCache.putObject(key, EXECUTION_PLACEHOLDER);
    try {
        // 1. Execute the query and obtain the list
        list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
    } finally {
        localCache.removeObject(key);
    }
    // 2. Put the query result into localCache: the key is the CacheKey we just built,
    //    the value is the list queried from the DB
    localCache.putObject(key, list);
    if (ms.getStatementType() == StatementType.CALLABLE) {
        localOutputParameterCache.putObject(key, parameter);
    }
    return list;
}

Let's look at localCache.putObject(key, list);

PerpetualCache

PerpetualCache is the cache class used by the level 1 cache; internally it uses a HashMap. Its source is as follows:

public class PerpetualCache implements Cache {

    private final String id;

    private Map<Object, Object> cache = new HashMap<Object, Object>();

    public PerpetualCache(String id) {
        this.id = id;
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public int getSize() {
        return cache.size();
    }

    @Override
    public void putObject(Object key, Object value) {
        // Store key-value pairs to HashMap
        cache.put(key, value);
    }

    @Override
    public Object getObject(Object key) {
        // Find Cached Items
        return cache.get(key);
    }

    @Override
    public Object removeObject(Object key) {
        // Remove Cached Items
        return cache.remove(key);
    }

    @Override
    public void clear() {
        cache.clear();
    }
    
    // Omit some code
}

Summary: localCache is essentially a Map. The key is our CacheKey and the value is our result. The level 1 cache really is that simple, just a thin wrapper around a Map.
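To drive that home, here is a tiny standalone sketch of my own (not from the post) that uses PerpetualCache directly as the map it is:

import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.CacheKey;
import org.apache.ibatis.cache.impl.PerpetualCache;

import java.util.Arrays;

public class PerpetualCacheDemo {
    public static void main(String[] args) {
        Cache cache = new PerpetualCache("LocalCache");

        CacheKey key = new CacheKey();
        key.update("mybatis.BlogMapper.queryById"); // statement id
        key.update(1);                              // parameter

        cache.putObject(key, Arrays.asList("blog-1")); // what queryFromDatabase() does
        System.out.println(cache.getObject(key));      // [blog-1]

        cache.clear();                                 // what clearLocalCache() does
        System.out.println(cache.getObject(key));      // null
    }
}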

Clear Cache

SqlSession.update()

When we update, the following code executes

@Override
public int update(MappedStatement ms, Object parameter) throws SQLException {
    ErrorContext.instance().resource(ms.getResource()).activity("executing an update").object(ms.getId());
    if (closed) {
        throw new ExecutorException("Executor was closed.");
    }
    // Every update/insert/delete statement clears the level 1 cache first
    clearLocalCache();
    // Then perform the update
    return doUpdate(ms, parameter);
}
 
@Override
public void clearLocalCache() {
    if (!closed) {
        // Simply empty the underlying Maps
        localCache.clear();
        localOutputParameterCache.clear();
    }
}

session.close();

//DefaultSqlSession
public void close() {
    try {
        this.executor.close(this.isCommitOrRollbackRequired(false));
        this.closeCursors();
        this.dirty = false;
    } finally {
        ErrorContext.instance().reset();
    }
}

//BaseExecutor
public void close(boolean forceRollback) {
    try {
        try {
            this.rollback(forceRollback);
        } finally {
            if (this.transaction != null) {
                this.transaction.close();
            }
        }
    } catch (SQLException e) {
        log.warn("Unexpected exception on closing transaction.  Cause: " + e);
    } finally {
        this.transaction = null;
        this.deferredLoads = null;
        this.localCache = null;
        this.localOutputParameterCache = null;
        this.closed = true;
    }
}

public void rollback(boolean required) throws SQLException {
    if (!this.closed) {
        try {
            this.clearLocalCache();
            this.flushStatements(true);
        } finally {
            if (required) {
                this.transaction.rollback();
            }
        }
    }
}

public void clearLocalCache() {
    if (!this.closed) {
        // Simply empty the underlying Maps
        this.localCache.clear();
        this.localOutputParameterCache.clear();
    }
}

When the SqlSession is closed, its level 1 cache is cleared as well.

Summary

  1. The level 1 cache only shares data within the same SqlSession.
  2. The cache only applies when the same SQL is executed with the same parameters on the same SqlSession object.
  3. If an update/insert/delete statement is executed or session.close() is called, the executor inside the SqlSession empties the level 1 cache.

Level 2 Cache Source Analysis

The level 2 cache is built on top of the level 1 cache: when MyBatis receives a query request, it first consults the level 2 cache; on a miss, it falls back to the level 1 cache; on another miss, it queries the database. Unlike the level 1 cache, the level 2 cache is bound to a specific namespace: one Mapper has one Cache, shared by all MappedStatements in that Mapper, whereas the level 1 cache is bound to a SqlSession and therefore has no concurrency issues. A level 2 cache can even be shared across multiple namespaces, and there concurrency does become a problem: multiple SqlSessions executing the same SQL with the same parameters produce the same CacheKey, so multiple threads may access the value of one CacheKey concurrently. Let's look at the access logic of the level 2 cache.

Test of Level 2 Cache

The level 2 cache requires the <cache/> tag to be configured in Mapper.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" 
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
 
<mapper namespace="mybatis.BlogMapper">
    <select id="queryById" parameterType="int" resultType="jdbc.Blog">
        select * from blog where id = #{id}
    </select>
    <update id="updateBlog" parameterType="jdbc.Blog">
        update Blog set name = #{name},url = #{url} where id=#{id}
    </update>
    <!-- Enable the level 2 cache for BlogMapper -->
    <cache/>
</mapper>

Different sessions perform the same query

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    SqlSession session1 = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog)session.selectOne("queryById",17);
        Blog blog2 = (Blog)session1.selectOne("queryById",17);
    } finally {
        session.close();
    }
}

Conclusion: Two DB queries were made

After the first session's query completes, commit manually, then run the second session's query

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    SqlSession session1 = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog)session.selectOne("queryById",17);
        session.commit();
 
        Blog blog2 = (Blog)session1.selectOne("queryById",17);
    } finally {
        session.close();
    }
}

Conclusion: One DB query was made

After the first session's query completes, close it manually, then run the second session's query

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    SqlSession session1 = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog)session.selectOne("queryById",17);
        session.close();
 
        Blog blog2 = (Blog)session1.selectOne("queryById",17);
    } finally {
        session.close();
    }
}

Conclusion: One DB query was made

Summary: the level 2 cache only takes effect after the session is committed or closed.
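One more prerequisite worth noting: the global cacheEnabled setting must be true for the CachingExecutor to be created at all (we saw the check in newExecutor() earlier). It defaults to true, so the <cache/> tag alone is usually enough; shown here only for completeness:

<settings>
    <!-- Default is true. If set to false, newExecutor() never wraps the
         executor in a CachingExecutor and <cache/> has no effect. -->
    <setting name="cacheEnabled" value="true"/>
</settings>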

Parsing the <cache/> Tag

Following the earlier MyBatis analysis, parsing the configuration (and the blog.xml registered in it) starts from the XMLConfigBuilder.parse() method

// XMLConfigBuilder.parse()
public Configuration parse() {
    if (parsed) {
        throw new BuilderException("Each XMLConfigBuilder can only be used once.");
    }
    parsed = true;
    parseConfiguration(parser.evalNode("/configuration")); // here
    return configuration;
}
 
// parseConfiguration()
// Since our blog.xml is registered under <mappers>, we look directly at how the
// mappers tag is parsed
private void parseConfiguration(XNode root) {
    try {
        Properties settings = settingsAsPropertiess(root.evalNode("settings"));
        propertiesElement(root.evalNode("properties"));
        loadCustomVfs(settings);
        typeAliasesElement(root.evalNode("typeAliases"));
        pluginElement(root.evalNode("plugins"));
        objectFactoryElement(root.evalNode("objectFactory"));
        objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
        reflectionFactoryElement(root.evalNode("reflectionFactory"));
        settingsElement(settings);
        // read it after objectFactory and objectWrapperFactory issue #631
        environmentsElement(root.evalNode("environments"));
        databaseIdProviderElement(root.evalNode("databaseIdProvider"));
        typeHandlerElement(root.evalNode("typeHandlers"));
        // Here it is
        mapperElement(root.evalNode("mappers"));
    } catch (Exception e) {
        throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
    }
}


// mapperElement()
private void mapperElement(XNode parent) throws Exception {
    if (parent != null) {
        for (XNode child : parent.getChildren()) {
            if ("package".equals(child.getName())) {
                String mapperPackage = child.getStringAttribute("name");
                configuration.addMappers(mapperPackage);
            } else {
                String resource = child.getStringAttribute("resource");
                String url = child.getStringAttribute("url");
                String mapperClass = child.getStringAttribute("class");
                // With the configuration in our example we enter this first branch
                if (resource != null && url == null && mapperClass == null) {
                    ErrorContext.instance().resource(resource);
                    InputStream inputStream = Resources.getResourceAsStream(resource);
                    XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, resource, configuration.getSqlFragments());
                    // Build an XMLMapperBuilder and execute its parse method
                    mapperParser.parse();
                } else if (resource == null && url != null && mapperClass == null) {
                    ErrorContext.instance().resource(url);
                    InputStream inputStream = Resources.getUrlAsStream(url);
                    XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, url, configuration.getSqlFragments());
                    mapperParser.parse();
                } else if (resource == null && url == null && mapperClass != null) {
                    Class<?> mapperInterface = Resources.classForName(mapperClass);
                    configuration.addMapper(mapperInterface);
                } else {
                    throw new BuilderException("A mapper element may only specify a url, resource or class, but not more than one.");
                }
            }
        }
    }
}

Let's take a look at parsing Mapper.xml

// XMLMapperBuilder.parse()
public void parse() {
    if (!configuration.isResourceLoaded(resource)) {
        // Parse the mapper element
        configurationElement(parser.evalNode("/mapper"));
        configuration.addLoadedResource(resource);
        bindMapperForNamespace();
    }
 
    parsePendingResultMaps();
    parsePendingChacheRefs();
    parsePendingStatements();
}
 
// configurationElement()
private void configurationElement(XNode context) {
    try {
        String namespace = context.getStringAttribute("namespace");
        if (namespace == null || namespace.equals("")) {
            throw new BuilderException("Mapper's namespace cannot be empty");
        }
        builderAssistant.setCurrentNamespace(namespace);
        cacheRefElement(context.evalNode("cache-ref"));
        // Here, finally: the handling of the cache tag
        cacheElement(context.evalNode("cache"));
        parameterMapElement(context.evalNodes("/mapper/parameterMap"));
        resultMapElements(context.evalNodes("/mapper/resultMap"));
        sqlElement(context.evalNodes("/mapper/sql"));
        // Wraps the resulting Cache into each MappedStatement
        buildStatementFromContext(context.evalNodes("select|insert|update|delete"));
    } catch (Exception e) {
        throw new BuilderException("Error parsing Mapper XML. Cause: " + e, e);
    }
}
 
// cacheElement()
private void cacheElement(XNode context) throws Exception {
    if (context != null) {
        // Parse the type attribute of <cache/>. A custom Cache implementation
        // (e.g. a redis-backed cache) can be plugged in here; if none is given,
        // PERPETUAL is used, the same as the level 1 cache
        String type = context.getStringAttribute("type", "PERPETUAL");
        Class<? extends Cache> typeClass = typeAliasRegistry.resolveAlias(type);
        String eviction = context.getStringAttribute("eviction", "LRU");
        Class<? extends Cache> evictionClass = typeAliasRegistry.resolveAlias(eviction);
        Long flushInterval = context.getLongAttribute("flushInterval");
        Integer size = context.getIntAttribute("size");
        boolean readWrite = !context.getBooleanAttribute("readOnly", false);
        boolean blocking = context.getBooleanAttribute("blocking", false);
        Properties props = context.getChildrenAsProperties();
        // Build the Cache object
        builderAssistant.useNewCache(typeClass, evictionClass, flushInterval, size, readWrite, blocking, props);
    }
}

Let's first see how to build a Cache object

MapperBuilderAssistant.useNewCache()

public Cache useNewCache(Class<? extends Cache> typeClass,
                         Class<? extends Cache> evictionClass,
                         Long flushInterval,
                         Integer size,
                         boolean readWrite,
                         boolean blocking,
                         Properties props) {
    // 1. Build the Cache object
    Cache cache = new CacheBuilder(currentNamespace)
        // If <cache/> specified a type, use the custom Cache;
        // otherwise use PerpetualCache, the same as the level 1 cache
        .implementation(valueOrDefault(typeClass, PerpetualCache.class))
        .addDecorator(valueOrDefault(evictionClass, LruCache.class))
        .clearInterval(flushInterval)
        .size(size)
        .readWrite(readWrite)
        .blocking(blocking)
        .properties(props)
        .build();
    // 2. Add it to the Configuration
    configuration.addCache(cache);
    // 3. Assign the cache to MapperBuilderAssistant.currentCache
    currentCache = cache;
    return cache;
}

Note that each Mapper.xml parses the <cache/> tag only once: a single Cache object is created, registered in the Configuration, and assigned to MapperBuilderAssistant.currentCache.
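For reference, CacheBuilder.build() also wraps the base implementation in a chain of standard decorators. The following is a paraphrased sketch of the result (based on my reading of CacheBuilder, not the verbatim source); with the defaults above, the object returned looks roughly like this:

// Rough decorator chain produced for a default <cache/>
// (readWrite=true, eviction=LRU, no flushInterval, blocking=false):
//
//   SynchronizedCache           // thread safety
//     -> LoggingCache           // hit-ratio logging
//       -> SerializedCache      // copies values via serialization (readWrite)
//         -> LruCache           // eviction policy decorator
//           -> PerpetualCache   // the actual HashMap store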

Next, buildStatementFromContext(context.evalNodes("select|insert|update|delete")) wraps the Cache into each MappedStatement

// buildStatementFromContext()
private void buildStatementFromContext(List<XNode> list) {
    if (configuration.getDatabaseId() != null) {
        buildStatementFromContext(list, configuration.getDatabaseId());
    }
    buildStatementFromContext(list, null);
}
 
//buildStatementFromContext()
private void buildStatementFromContext(List<XNode> list, String requiredDatabaseId) {
    for (XNode context : list) {
        final XMLStatementBuilder statementParser = new XMLStatementBuilder(configuration, builderAssistant, context, requiredDatabaseId);
        try {
            // Each execution statement is converted into a MappedStatement
            statementParser.parseStatementNode();
        } catch (IncompleteElementException e) {
            configuration.addIncompleteStatement(statementParser);
        }
    }
}
 
// XMLStatementBuilder.parseStatementNode();
public void parseStatementNode() {
    String id = context.getStringAttribute("id");
    String databaseId = context.getStringAttribute("databaseId");
    ...
 
    Integer fetchSize = context.getIntAttribute("fetchSize");
    Integer timeout = context.getIntAttribute("timeout");
    String parameterMap = context.getStringAttribute("parameterMap");
    String parameterType = context.getStringAttribute("parameterType");
    Class<?> parameterTypeClass = resolveClass(parameterType);
    String resultMap = context.getStringAttribute("resultMap");
    String resultType = context.getStringAttribute("resultType");
    String lang = context.getStringAttribute("lang");
    LanguageDriver langDriver = getLanguageDriver(lang);
 
    ...
    // Build the MappedStatement object
    builderAssistant.addMappedStatement(id, sqlSource, statementType, sqlCommandType,
                                        fetchSize, timeout, parameterMap, parameterTypeClass, resultMap, resultTypeClass,
                                        resultSetTypeEnum, flushCache, useCache, resultOrdered, 
                                        keyGenerator, keyProperty, keyColumn, databaseId, langDriver, resultSets);
}
 
// builderAssistant.addMappedStatement()
public MappedStatement addMappedStatement(
    String id,
    ...) {
 
    if (unresolvedCacheRef) {
        throw new IncompleteElementException("Cache-ref not yet resolved");
    }
 
    id = applyCurrentNamespace(id, false);
    boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
    // Build the MappedStatement object
    MappedStatement.Builder statementBuilder = new MappedStatement.Builder(configuration, id, sqlSource, sqlCommandType)
        ...
        .flushCacheRequired(valueOrDefault(flushCache, !isSelect))
        .useCache(valueOrDefault(useCache, isSelect))
        .cache(currentCache); // the Cache created earlier is attached to the MappedStatement here
 
    ParameterMap statementParameterMap = getStatementParameterMap(parameterMap, parameterType, id);
    if (statementParameterMap != null) {
        statementBuilder.parameterMap(statementParameterMap);
    }
 
    MappedStatement statement = statementBuilder.build();
    configuration.addMappedStatement(statement);
    return statement;
}

So the Cache object created for a Mapper is attached to every MappedStatement of that Mapper; in other words, all MappedStatements in the same Mapper reference the same cache instance.

That's all for the <cache/> tag parsing.

Query Source Analysis

CachingExecutor

// CachingExecutor
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameterObject);
    // Build the CacheKey
    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
    throws SQLException {
    // Get the Cache from the MappedStatement. Note that it comes from the
    // MappedStatement: it is the one created when we parsed the <cache/> tag
    // above and stored in the Configuration. As we saw when parsing blog.xml,
    // every MappedStatement of the Mapper holds that same Cache object
    Cache cache = ms.getCache();
    // If <cache/> is not configured in the mapper file, cache is null
    if (cache != null) {
        // Flush the cache if required: flushCache="true"
        flushCacheIfRequired(ms);
        if (ms.isUseCache() && resultHandler == null) {
            ensureNoOutParams(ms, boundSql);
            // Consult the level 2 cache
            List<E> list = (List<E>) tcm.getObject(cache, key);
            // Cache miss
            if (list == null) {
                // On a miss, the query is executed. That actually starts with a
                // level 1 cache lookup; only if that misses too does it hit the DB
                list = delegate.<E>query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
                // Cache the query result
                tcm.putObject(cache, key, list);
            }
            return list;
        }
    }
    return delegate.<E>query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

If flushCache="true" is set, the cache will be refreshed for each query

<!-- Execute this statement to empty the cache -->
<select id="getAll" resultType="entity.TDemo" useCache="true" flushCache="true" >
    select * from t_demo
</select>

As above, the level 2 cache is obtained from the MappedStatement. Since the MappedStatement lives in the global Configuration and can be reached by multiple CachingExecutors, thread-safety issues can arise. Moreover, if left unchecked, multiple transactions sharing one cache instance could cause dirty reads. Handling dirty reads requires another class: the type of the tcm variable in the code above. Let's analyze it.

TransactionalCacheManager

/** Transactional cache manager */
public class TransactionalCacheManager {

    // Mapping table between Cache and TransactionalCache
    private final Map<Cache, TransactionalCache> transactionalCaches = new HashMap<Cache, TransactionalCache>();

    public void clear(Cache cache) {
        // Get the TransactionalCache object and call its clear method; same pattern below
        getTransactionalCache(cache).clear();
    }

    public Object getObject(Cache cache, CacheKey key) {
        // Read directly from the TransactionalCache
        return getTransactionalCache(cache).getObject(key);
    }

    public void putObject(Cache cache, CacheKey key, Object value) {
        // Store directly into the TransactionalCache
        getTransactionalCache(cache).putObject(key, value);
    }

    public void commit() {
        for (TransactionalCache txCache : transactionalCaches.values()) {
            txCache.commit();
        }
    }

    public void rollback() {
        for (TransactionalCache txCache : transactionalCaches.values()) {
            txCache.rollback();
        }
    }

    private TransactionalCache getTransactionalCache(Cache cache) {
        // Look the TransactionalCache up in the mapping table
        TransactionalCache txCache = transactionalCaches.get(cache);
        if (txCache == null) {
            // TransactionalCache is yet another decorator: it adds transactional
            // behavior to a Cache. Create a new one and store the real Cache inside it
            txCache = new TransactionalCache(cache);
            transactionalCaches.put(cache, txCache);
        }
        return txCache;
    }
}

Internally, TransactionalCacheManager maintains the mapping between Cache instances and TransactionalCache instances; that mapping is all this class is responsible for. TransactionalCache is a cache decorator that adds transactional behavior to a Cache instance, and it is the class that handles the dirty-read problem mentioned earlier. Its logic is analyzed below.

TransactionalCache

public class TransactionalCache implements Cache {
    // The real cache object: the same Cache as in the Map<Cache, TransactionalCache> above
    private final Cache delegate;
    private boolean clearOnCommit;
    // Before the transaction commits, results queried from the database are staged in this collection
    private final Map<Object, Object> entriesToAddOnCommit;
    // Before the transaction commits, the CacheKey of every cache miss is recorded in this collection
    private final Set<Object> entriesMissedInCache;


    @Override
    public Object getObject(Object key) {
        // Reads go straight to delegate, i.e. to the real cache object
        Object object = delegate.getObject(key);
        if (object == null) {
            // On a miss, record the key in entriesMissedInCache
            entriesMissedInCache.add(key);
        }

        if (clearOnCommit) {
            return null;
        } else {
            return object;
        }
    }

    @Override
    public void putObject(Object key, Object object) {
        // Writes go into the entriesToAddOnCommit map, NOT into the real cache object delegate
        entriesToAddOnCommit.put(key, object);
    }

    @Override
    public Object removeObject(Object key) {
        return null;
    }

    @Override
    public void clear() {
        clearOnCommit = true;
        // Empties entriesToAddOnCommit, but not the delegate cache
        entriesToAddOnCommit.clear();
    }

    public void commit() {
        // Clear delegate depending on the value of clearOnCommit
        if (clearOnCommit) {
            delegate.clear();
        }
        
        // Flush the staged results into the delegate cache
        flushPendingEntries();
        // Reset entriesToAddOnCommit and entriesMissedInCache
        reset();
    }

    public void rollback() {
        unlockMissedEntries();
        reset();
    }

    private void reset() {
        clearOnCommit = false;
        // Empty the collections
        entriesToAddOnCommit.clear();
        entriesMissedInCache.clear();
    }

    private void flushPendingEntries() {
        for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
            // Move the contents of entriesToAddOnCommit into delegate
            delegate.putObject(entry.getKey(), entry.getValue());
        }
        for (Object entry : entriesMissedInCache) {
            if (!entriesToAddOnCommit.containsKey(entry)) {
                // Store a null value for the misses
                delegate.putObject(entry, null);
            }
        }
    }

    private void unlockMissedEntries() {
        for (Object entry : entriesMissedInCache) {
            try {
                // Call removeObject to release any lock (relevant for BlockingCache)
                delegate.removeObject(entry);
            } catch (Exception e) {
                log.warn("...");
            }
        }
    }

}

So values written to the level 2 cache are staged in the TransactionalCache.entriesToAddOnCommit map, while every read goes directly to TransactionalCache.delegate. This is why a value put into the level 2 cache after a database query does not take effect immediately: storing it straight into delegate would cause dirty-data problems.

Why does the level 2 cache only take effect after the SqlSession is committed or closed?

Let's see what the SqlSession.commit() method does.

SqlSession

@Override
public void commit(boolean force) {
    try {
        // Mainly this sentence
        executor.commit(isCommitOrRollbackRequired(force));
        dirty = false;
    } catch (Exception e) {
        throw ExceptionFactory.wrapException("Error committing transaction.  Cause: " + e, e);
    } finally {
        ErrorContext.instance().reset();
    }
}
 
// CachingExecutor.commit()
@Override
public void commit(boolean required) throws SQLException {
    delegate.commit(required);
    tcm.commit(); // here
}
 
// TransactionalCacheManager.commit()
public void commit() {
    for (TransactionalCache txCache : transactionalCaches.values()) {
        txCache.commit(); // here
    }
}
 
// TransactionalCache.commit()
public void commit() {
    if (clearOnCommit) {
        delegate.clear();
    }
    flushPendingEntries(); // this line
    reset();
}
 
// TransactionalCache.flushPendingEntries()
private void flushPendingEntries() {
    for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
        // The entries staged in entriesToAddOnCommit are finally put into delegate
        // one by one; only now does the level 2 cache really take effect
        delegate.putObject(entry.getKey(), entry.getValue());
    }
    for (Object entry : entriesMissedInCache) {
        if (!entriesToAddOnCommit.containsKey(entry)) {
            delegate.putObject(entry, null);
        }
    }
}

Dirty data problems would occur if data queried from the database were stored directly in delegate. The following scenario demonstrates how such a dirty read could arise. Assume two threads open two different transactions and execute as follows:

Transaction A updates record A at time 2. At time 3, transaction A queries record A from the database and writes it to the (shared) cache. At time 4, transaction B queries record A; since record A is in the cache, B takes it straight from there. At this point the dirty read has happened: transaction B has read a record modified by transaction A before A committed. To solve this, each transaction gets its own private cache. Reads still go to the delegate cache (called the shared cache below); on a miss, the database is queried. But query results are not stored directly in the shared cache: they go into the transaction-private cache first, namely the entriesToAddOnCommit collection. Only when the transaction commits are those entries transferred into the shared cache. This way, transaction B can only see transaction A's changes after A commits, which eliminates the dirty read.
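A test in the same style as the earlier ones (my own sketch, assuming the usual sqlSessionFactory setup) makes the point: session1 cannot see session's uncommitted write through the level 2 cache:

public static void main(String[] args) {
    SqlSession session = sqlSessionFactory.openSession();
    SqlSession session1 = sqlSessionFactory.openSession();
    try {
        Blog blog = (Blog) session.selectOne("queryById", 17);
        blog.setName("dirty?");
        session.update("updateBlog", blog); // no commit yet

        // Nothing has been flushed to the shared cache (that only happens on
        // commit/close), so session1 misses the level 2 cache, queries the DB,
        // and sees the old committed value (subject to the DB isolation level).
        Blog blog2 = (Blog) session1.selectOne("queryById", 17);
    } finally {
        session.close();
        session1.close();
    }
}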

Refresh of the Level 2 Cache

Let's take a look at the update operation for SqlSession

// DefaultSqlSession
public int update(String statement, Object parameter) {
    try {
        this.dirty = true;
        MappedStatement ms = this.configuration.getMappedStatement(statement);
        return this.executor.update(ms, this.wrapCollection(parameter));
    } catch (Exception e) {
        throw ExceptionFactory.wrapException("Error updating database.  Cause: " + e, e);
    } finally {
        ErrorContext.instance().reset();
    }
}

// CachingExecutor
public int update(MappedStatement ms, Object parameterObject) throws SQLException {
    this.flushCacheIfRequired(ms);
    return this.delegate.update(ms, parameterObject);
}

private void flushCacheIfRequired(MappedStatement ms) {
    // Get the Cache of the MappedStatement and empty it
    Cache cache = ms.getCache();
    // The statement only clears the cache if flushCache="true"
    // (the default for insert/update/delete, as we saw in addMappedStatement)
    if (cache != null && ms.isFlushCacheRequired()) {
        this.tcm.clear(cache);
    }
}

The MyBatis level 2 cache is only suitable for data that is rarely inserted, deleted, or updated, such as administrative-region data (provinces, cities, districts, streets). As soon as the data changes, MyBatis empties the cache, so the level 2 cache is a poor fit for frequently updated data.

Storing the Level 2 Cache in Redis

From the analysis above we know that both the level 2 cache and the level 1 cache use PerpetualCache by default to store results. The level 1 cache is emptied as soon as the SqlSession closes, and both are backed by an in-memory HashMap, so the default level 2 cache cannot be distributed and does not survive a server restart. This is where third-party caching middleware such as redis or ehcache comes in, storing the cached values externally.

Modify the configuration in mapper.xml.

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.tyb.saas.common.dal.dao.AreaDefaultMapper">
 
    <!--
    flushInterval: in milliseconds; may be any positive integer.
        By default there is no refresh interval and the cache is refreshed only when a statement is invoked.
    size: may be any positive integer; keep in mind the number of objects to cache and the memory available in your runtime environment. Defaults to 1024.
    readOnly: true or false. A read-only cache returns the same instance of a cached object to all callers,
        so these objects must not be modified; this offers a significant performance advantage.
        A read-write cache returns a copy of the cached object (via serialization). This is slower but safe, so the default is false.
    eviction: defaults to LRU:
        1. LRU - least recently used: removes the objects unused for the longest time.
        2. FIFO - first in, first out: removes objects in the order they entered the cache.
        3. SOFT - soft references: removes objects based on garbage collector state and the soft reference rules.
        4. WEAK - weak references: removes objects more aggressively, based on garbage collector state and the weak reference rules.
    blocking: defaults to false; when true the cache is wrapped in a BlockingCache.
        BlockingCache locks the key on a cache lookup; the lock is released on a hit, otherwise only after the database has been queried.
        This prevents multiple threads from querying the same data concurrently; see the BlockingCache source for details.
    type: the cache class to use. MyBatis caches with a HashMap by default; here third-party middleware is plugged in instead
    -->
    <cache type="org.mybatis.caches.redis.RedisCache" blocking="false"
           flushInterval="0" readOnly="true" size="1024" eviction="FIFO"/>
 
    <!--
        useCache: Cache is used by default true
    -->
    <select id="find" parameterType="map" resultType="com.chenhao.model.User" useCache="true">
        SELECT * FROM user
    </select>
 
</mapper>

Still simple: RedisCache uses Java serialization and deserialization to store and retrieve the cached data, so you must make sure every cached object implements the Serializable interface.
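For example, a model meant for this cache would look like the following minimal sketch (the User class and its fields are placeholders of mine):

import java.io.Serializable;

public class User implements Serializable {
    // Keep a stable serialVersionUID so cached entries survive class changes
    private static final long serialVersionUID = 1L;

    private Integer id;
    private String name;

    // getters and setters omitted
}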

You can also implement the cache yourself

Implement your own cache

package com.chenhao.mybatis.cache;

import org.apache.ibatis.cache.Cache;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ValueOperations;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * @author chenhao
 * @date 2019/10/31.
 */
public class RedisCache implements Cache {

    private final String id;

    private static ValueOperations<String, Object> valueOs;

    private static RedisTemplate<String, String> template;


    public static void setValueOs(ValueOperations<String, Object> valueOs) {
        RedisCache.valueOs = valueOs;
    }

    public static void setTemplate(RedisTemplate<String, String> template) {
        RedisCache.template = template;
    }

    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();


    public RedisCache(String id) {
        if (id == null) {
            throw new IllegalArgumentException("Cache instances require an ID");
        }
        this.id = id;
    }

    @Override
    public String getId() {
        return this.id;
    }

    @Override
    public void putObject(Object key, Object value) {
        // Cache the value under the stringified CacheKey with a 10-minute TTL
        valueOs.set(key.toString(), value, 10, TimeUnit.MINUTES);
    }

    @Override
    public Object getObject(Object key) {
        return valueOs.get(key.toString());
    }

    @Override
    public Object removeObject(Object key) {
        // Remove the entry from redis (SETEX with a zero TTL would be rejected)
        template.delete(key.toString());
        return key;
    }

    @Override
    public void clear() {
        // Empty the whole redis database; coarse, but matches Cache.clear() semantics
        template.getConnectionFactory().getConnection().flushDb();
    }

    @Override
    public int getSize() {
        return template.getConnectionFactory().getConnection().dbSize().intValue();
    }

    @Override
    public ReadWriteLock getReadWriteLock() {
        return this.readWriteLock;
    }
}

Configure your own Cache implementation in Mapper

<cache type="com.chenhao.mybatis.cache.RedisCache"/> 
