How Caching Works: Cache Fundamentals

Cache avalanche

Cache avalanche happens when the existing cache expires before the new cache has been populated (for example, when many keys share the same expiration time). All requests then go straight to the database, which puts huge pressure on the database's CPU and memory and, in serious cases, brings the database down. This sets off a chain reaction that can bring down the whole system.

Solution:

  1. When concurrency is not particularly high, the most commonly used solution is to lock and queue requests.

  2. Add a corresponding cache tag to each piece of cached data to record whether it has expired; if the tag has expired, refresh the data cache in the background.

    • Cache tag: records whether the cached data is due for refresh; when the tag expires, it triggers a notification to another thread to update the actual key's cache in the background.
    • Cached data: its expiration time is set to twice that of the cache tag. For example, if the tag is cached for 30 minutes, the data is cached for 60 minutes. That way, when the tag key expires, the actual cache can still return the old data to callers, and the fresh data is returned only after the background thread finishes updating.

Lock-and-queue scheme pseudocode:

//Pseudo-code
public Object getProductListNew() {
    int cacheTime = 30;
    String cacheKey = "product_list";
    // A single-JVM lock; across multiple processes a distributed lock is needed
    String lockKey = cacheKey;

    String cacheValue = CacheHelper.get(cacheKey);
    if (cacheValue != null) {
        return cacheValue;
    }
    synchronized (lockKey.intern()) {
        // Double-check: another thread may have rebuilt the cache while we waited
        cacheValue = CacheHelper.get(cacheKey);
        if (cacheValue == null) {
            // Usually a SQL query against the database
            cacheValue = getProductListFromDB();
            CacheHelper.add(cacheKey, cacheValue, cacheTime);
        }
    }
    return cacheValue;
}


Cache tag scheme pseudocode:

//Pseudo-code
public Object getProductListNew() {
    int cacheTime = 30;
    String cacheKey = "product_list";
    // Cache tag key
    String cacheSign = cacheKey + "_sign";

    String sign = CacheHelper.get(cacheSign);
    // Get the cached value (possibly stale, but still present)
    String cacheValue = CacheHelper.get(cacheKey);
    if (sign != null) {
        return cacheValue; // Tag not expired, return directly
    } else {
        // Reset the tag first so concurrent requests don't all trigger a refresh
        CacheHelper.add(cacheSign, "1", cacheTime);
        CompletableFuture.runAsync(() -> {
            // Usually a SQL query against the database
            String freshValue = getProductListFromDB();
            // Data TTL is double the tag TTL, allowing stale (dirty) reads meanwhile
            CacheHelper.add(cacheKey, freshValue, cacheTime * 2);
        });
        // Return the old value while the refresh runs in the background
        return cacheValue;
    }
}


Cache penetration

Cache penetration occurs when a user queries data that does not exist in the database, and therefore does not exist in the cache either. Every such query misses the cache and goes on to the database, which returns an empty result (two wasted lookups). Requests thus bypass the cache and hit the database directly, which is also a common cache hit rate problem.

Solution:

  1. Use a Bloom filter: hash all existing data into a sufficiently large bitmap so that a query for data that definitely does not exist is intercepted by the filter, avoiding query pressure on the underlying storage system (a sketch follows this list).
  2. If the data returned by a query is empty (whether the data does not exist or a system failure occurred), cache the empty result anyway, but with a very short expiration time, no more than five minutes. The simplest, bluntest version is to store a default value in the cache directly, so that the second lookup is answered from the cache instead of hitting the database again.
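
A minimal sketch of the Bloom filter approach using Guava's BloomFilter; the ProductIdFilter wrapper, the sizing numbers, and the preload step are illustrative assumptions, not part of the original scheme:

//Illustrative sketch, not from the original article
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class ProductIdFilter {
    // Sized for ~1,000,000 keys at a 1% false-positive rate (tune to your data)
    private final BloomFilter<String> filter = BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    // Load every existing key once, e.g. at startup or during cache preheating
    public void preload(Iterable<String> allProductIds) {
        allProductIds.forEach(filter::put);
    }

    // false: the key definitely does not exist, reject before cache and DB;
    // true: the key probably exists, proceed with the normal lookup
    public boolean mightExist(String productId) {
        return filter.mightContain(productId);
    }
}

A Bloom filter can return false positives but never false negatives, so a negative answer is always safe to reject outright.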

Solution 2 pseudocode:

//Pseudo-code
public Object getProductListNew() {
    int cacheTime = 30; // in a real system, use a much shorter TTL for empty values
    String cacheKey = "product_list";

    String cacheValue = CacheHelper.get(cacheKey);
    if (cacheValue != null) {
        return cacheValue;
    }
    // Cache miss: the data may simply not exist in the database
    cacheValue = getProductListFromDB();
    if (cacheValue == null) {
        // Nothing in the database either: cache a default (empty) value
        // so the next lookup is answered from the cache, not the database
        cacheValue = "";
    }
    CacheHelper.add(cacheKey, cacheValue, cacheTime);
    return cacheValue;
}


Cache preheating

Cache preheating is a fairly common idea: after the system goes online, load the relevant data into the cache system up front. This avoids the situation where a user's first request has to query the database and only then cache the data; instead, users query pre-warmed cache data directly.

Solutions:

  1. Write a cache-refresh page and trigger it manually when the system goes online.
  2. If the data volume is small, load it automatically when the application starts (see the sketch after this list).
  3. Refresh the cache on a regular schedule.
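
A minimal warm-up sketch for option 2, reusing the hypothetical CacheHelper from the earlier pseudocode; loadAllProductsFromDB() is an illustrative stand-in for a bulk database query:

//Illustrative sketch, not from the original article
import java.util.Map;

public class CacheWarmer {
    private static final int CACHE_TIME = 30;

    // Call once from the application's entry point, before serving traffic
    public static void preheat() {
        Map<String, String> products = loadAllProductsFromDB();
        products.forEach((id, value) ->
                CacheHelper.add("product_" + id, value, CACHE_TIME));
    }

    // Hypothetical bulk load, e.g. SELECT id, data FROM products
    private static Map<String, String> loadAllProductsFromDB() {
        return Map.of(); // placeholder
    }
}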

Cache updates

In addition to the cache server's built-in invalidation strategies, we can also customize cache eviction according to business needs. There are two common strategies (a combined sketch follows this list):

  1. Clean up expired cache entries at regular intervals.
  2. When a user request arrives, check whether the cache entry it needs has expired; if it has, fetch fresh data from the underlying system and update the cache.
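
A minimal in-process sketch combining both strategies; the ExpiringCache class and its one-minute sweep interval are illustrative assumptions:

//Illustrative sketch, not from the original article
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpiringCache {
    private record Entry(String value, long expiresAtMillis) {}

    private final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();

    public ExpiringCache() {
        // Strategy 1: sweep expired entries once a minute
        cleaner.scheduleAtFixedRate(this::evictExpired, 1, 1, TimeUnit.MINUTES);
    }

    private void evictExpired() {
        long now = System.currentTimeMillis();
        store.entrySet().removeIf(e -> e.getValue().expiresAtMillis() <= now);
    }

    public void put(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Strategy 2: treat an expired entry as a miss on read; the caller then
    // fetches fresh data from the underlying system and calls put() again
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null || e.expiresAtMillis() <= System.currentTimeMillis()) {
            return null;
        }
        return e.value();
    }
}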

Cache degradation

When traffic surges, when a service runs into problems (such as slow response times or no response at all), or when non-core services start to affect the performance of the core flow, the service must still be kept available, even in a degraded form. The system can degrade automatically based on key metrics, or expose switches for manual degradation.

The ultimate goal of degradation is to keep core services available, even at reduced quality. Note that some services cannot be degraded at all.

Before degrading, you need to sort through the system and decide what can be sacrificed to protect what matters: which services must be protected at all costs, and which can be degraded. For example, you can classify them by analogy with log levels:

(1) General: services that occasionally time out due to network jitter, or while a new version is coming online, can be degraded automatically;

(2) Warning: services whose success rate fluctuates over a period of time (e.g. between 95% and 100%) can be degraded automatically or manually, with an alert raised;

(3) Error: for example, availability drops below 90%, the database connection pool is exhausted, or traffic suddenly spikes to the maximum threshold the system can withstand; degrade automatically or manually depending on the situation;

(4) Critical error: for example, data is corrupted for some special reason; an urgent manual degrade is required.
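
A minimal sketch of a degradation switch; the RecommendationService, its static fallback list, and the setDegraded() hook are illustrative assumptions:

//Illustrative sketch, not from the original article
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class RecommendationService {
    // Flipped automatically by monitoring, or manually via an admin switch
    private static final AtomicBoolean DEGRADED = new AtomicBoolean(false);

    public static void setDegraded(boolean on) {
        DEGRADED.set(on);
    }

    public List<String> recommend(String userId) {
        if (DEGRADED.get()) {
            // Degraded path: cheap static fallback keeps the core flow alive
            return List.of("default_product_1", "default_product_2");
        }
        // Normal path: the expensive personalized computation (hypothetical)
        return computePersonalizedRecommendations(userId);
    }

    private List<String> computePersonalizedRecommendations(String userId) {
        return List.of(); // placeholder for real downstream calls
    }
}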
