cgb2005 jt-13: AOP review, implementing a Redis cache with a custom annotation via AOP, RDB/AOF persistence strategies, the LRU/LFU algorithms, cache penetration/breakdown/avalanche, Redis sharding, and the consistent hash algorithm

Cool breeze AAA · 2022-02-13 07:43:27 · Views: 792

cgb2005 cgb jt-13 jt aop

Today's topics

1. Implementing the Redis cache service with AOP
   AOP review, introductory case, implementing the Redis cache (item category query), a custom annotation, caching the leaf categories
2. Redis configuration notes
   Persistence strategies: RDB, AOF
   Redis memory optimization: LRU, LFU, Random, TTL
3. Redis cache interview questions: cache penetration, breakdown, avalanche
4. Redis sharding
   Motivation, introductory case, consistent hash algorithm

1. Implementing the Redis Cache Service with AOP

1.1 Analysis of existing code

Explanation:
1). Although the caching works at the service layer, the code is not reusable: every other business that needs caching has to write it all over again.
2). Because the caching code lives in the service layer, coupling is high and future changes are inconvenient. (For example, if some other software replaces Redis as the cache later, every service method that caches must be edited; if many service methods cache, changing them one by one is far too troublesome.)
Requirements:
1. Can the code be made reusable?
2. Can the coupling be reduced?

1.2 AOP

1.2.1 The role of AOP

Name: aspect-oriented programming.
One-sentence summary: reduce code coupling and extend functionality without changing the original code.
Formula: AOP = pointcut expression + advice method.

Terminology:
1. Join point: a method in the business flow that satisfies the pointcut expression and can be woven into; there can be many.
2. Advice: the concrete logic executed in the aspect (the extension to the original business), written as a method.
3. Pointcut: the condition that decides whether execution enters the aspect (a class); an if-style judgment, one per binding.
4. Target method: the real business logic to be executed.
Example: the pointcut expression is only the judgment condition that lets execution cross into the aspect (the class); the advice methods are the concrete things the aspect then does. The pointcut is just the condition; what actually gets woven is the join point.

1.2.2 Advice types

1. Before advice @Before: runs before the target method executes.
2. After-returning advice @AfterReturning: runs after the target method returns normally.
3. After-throwing advice @AfterThrowing: runs when the target method throws an exception.
4. After (finally) advice @After: always runs once the method finishes, whatever happens.
Note: none of the four advice types above can control whether the target method executes (they have no interception capability; whether they run before or after, the target method still completes). They are generally used to record the program's execution state.

5. Around advice @Around: runs both before and after the target method, and can control whether the target method executes at all. Around advice is the most powerful type.

In around advice, calling joinPoint.proceed() executes the target method (or the next advice in the chain); if proceed() is never called, the advice short-circuits and returns without the target method ever running.
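A minimal sketch of the short-circuit case (a hypothetical advice method, not part of the lesson code):

@Around("pointCut()")
public Object shortCircuit(ProceedingJoinPoint joinPoint) {
    System.out.println("around advice runs, but proceed() is never called");
    // without joinPoint.proceed(), the target method never executes;
    // the caller receives this value instead of the real result
    return null;
}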

1.2.3 Pointcut expressions

Purpose: at runtime, when the pointcut expression is satisfied, the advice method executes, extending the business.
Intuition: a pointcut expression is the if-style judgment that decides whether the program enters the advice.
Kinds (how to write them):

  1. bean(bean name, i.e. the bean's ID / object name): matches only one specific bean object; coarse-grained, one object.
    e.g. bean("itemServiceImpl")
  2. within(package.ClassName): can match multiple objects; coarse-grained, matches by class.
    e.g. within("com.jt.service.*")
  3. execution(return-type package.ClassName.methodName(parameter-list)): fine-grained matching; the most powerful and most commonly used.
    e.g. execution(* com.jt.service..*.*(..))
    ..* matches everything under the package and all sub-packages; .* matches only the first level under the current package.
    The example means: any return type, any class and any method under the com.jt.service package (and its sub-packages) will be intercepted.
  4. @annotation(package.AnnotationName): fine-grained matching, matches by annotation.

1.2.4 About join points

Definition: a method that satisfies the pointcut expression is called a join point.
In other words: when a normally executing method is about to call the target method and, along the way, satisfies the pointcut and enters the aspect, that normally executing method is the join point.

1.2.5 AOP demo review (in the common module)

package com.jt.aop;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

import java.util.Arrays;

@Aspect     // marks this class as an AOP aspect
@Component  // hands the class to the Spring container for management
public class CacheAOP {

    // Formula: aspect = pointcut expression + advice method
    /**
     * Pointcut expression options:
     * coarse-grained:
     *   1. bean(bean id)          -> one class
     *   2. within(package.Class)  -> multiple classes
     * fine-grained:
     *   execution(...), @annotation(...)
     */
    @Pointcut("bean(itemCatServiceImpl)")
    //@Pointcut("within(com.jt.service..*)")            // match multi-level packages
    //@Pointcut("within(com.jt.service.*)")             // match only the first level
    //@Pointcut("execution(* com.jt.service..*.*(..))") // method/parameter level
    public void pointCut(){
        // defines the pointcut expression only; the body stays empty
    }

    /**
     * Before advice bound to a pointcut expression; note that the binding is to the method.
     * Difference:
     *   @Before("pointCut()")               references a named pointcut, shared by several advices
     *   @Before("bean(itemCatServiceImpl)") inline expression for a single advice, when no reuse is needed
     *
     * Goal: report information about the target:
     *   1. the target method's path: package.Class.method
     *   2. the target object's class
     *   3. the arguments passed in
     *   4. the current execution time
     * joinPoint: the join point object
     */
    @Before("pointCut()")
    //@Before("bean(itemCatServiceImpl)")
    public void before(JoinPoint joinPoint){
        String className = joinPoint.getSignature().getDeclaringTypeName(); // target method's path
        String methodName = joinPoint.getSignature().getName();             // target method's name
        Class targetClass = joinPoint.getTarget().getClass();               // target object's type
        Object[] args = joinPoint.getArgs();                                // arguments passed in
        Long runTime = System.currentTimeMillis();
        System.out.println("before advice");
        System.out.println("method path: " + className + "." + methodName);
        System.out.println("target object type: " + targetClass);
        System.out.println("arguments: " + Arrays.toString(args));
        System.out.println("execution time: " + runTime + " ms");
    }

    /*
    @AfterReturning("pointCut()")
    public void afterReturn(){
        System.out.println("after-returning advice");
    }

    @After("pointCut()")
    public void after(){
        System.out.println("after (finally) advice");
    }
    */

    /**
     * Around advice. Notes:
     *   1. around advice must declare a ProceedingJoinPoint parameter (a subtype of JoinPoint)
     *   2. ProceedingJoinPoint can only be used in around advice
     *   3. if present, ProceedingJoinPoint must be the first parameter
     */
    @Around("pointCut()")
    public Object around(ProceedingJoinPoint joinPoint){
        System.out.println("around advice: start!!!");
        Object result = null; // holds the target method's result
        try {
            // the target method may throw, so the exception must be handled
            result = joinPoint.proceed(); // run the next advice or the target method
        } catch (Throwable throwable) {
            throwable.printStackTrace();
        }
        System.out.println("around advice: end");
        return result;
    }
}

1.2.6 Test

Result: the before advice runs first, then the target method executes.

1.3 Implementing the Redis cache (item category query)

1.3.1 Requirements analysis

Problem: how do we control which methods need caching?
Solution: define it with a custom annotation; if a method's result should be cached, mark the method with the annotation.
Annotation design:
1. Annotation name: CacheFind
2. Attributes:
2.1 key: supplied manually by the user, usually the business name; the unique key is then formed by dynamic concatenation.
2.2 seconds: lets the user specify the data's timeout.

About the attributes:
The key prefix must be entered by the caller; it tells the backend which business the cached call belongs to, and the final key is concatenated dynamically from this prefix.
The data's timeout is also specified by the caller.

1.3.2 Custom annotation (in the common module)

Other modules may use it later, so it goes into the common module.

package com.jt.anno;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// meta-annotations: annotations placed on an annotation
@Retention(RetentionPolicy.RUNTIME) // when the annotation takes effect: at runtime
@Target({ElementType.METHOD})       // valid on methods; {} allows several targets (arrays in annotations use {}) and may be omitted for a single value
public @interface CacheFind {

    // members of an annotation are implicitly public
    public String preKey();          // required: the user supplies the key prefix

    // careful: use the primitive type, not the wrapper
    public int seconds() default 0;  // timeout in seconds; default 0 means no expiration; if the user sets it, the user's value wins
}

1.3.3 Using annotations

1). First restore the controller code (remove the hand-written cache logic).
2). Mark the annotation on the business-layer method, as in the sketch below.
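The screenshots are not reproduced here; a representative business-layer method, sketched under the assumption that the service reads item categories through a MyBatis-Plus mapper as elsewhere in this project (names and the seconds value are illustrative):

@Override
@CacheFind(preKey = "ITEM_CAT", seconds = 90) // mark the method whose result should be cached
public ItemCat findItemCatById(Long id) {
    return itemCatMapper.selectById(id); // the target method: query the database
}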

1.3.4 Editing CacheAOP

Because everything is written in one class, the demo methods from the earlier AOP test are commented out.

package com.jt.aop;

import com.jt.anno.CacheFind;
import com.jt.util.ObjectMapperUtil;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import redis.clients.jedis.Jedis;

import java.util.Arrays;

@Aspect     // marks this class as an AOP aspect
@Component  // hands the class to the Spring container for management
public class CacheAOP {

    @Autowired
    private Jedis jedis;

    /**
     * Aspect = pointcut + advice.
     * Here: an annotation pointcut + around advice, because around advice controls whether
     * the target method runs at all (if Redis already holds the data, the target method is
     * skipped; if not, the target method runs and queries the database).
     * Difficulties:
     *   1. how to obtain the annotation object
     *   2. how to build the key dynamically: preKey + the argument array (receive whatever is passed)
     *   3. how to obtain the method's return type
     *
     * Difficulty 1 explained: @Around("@annotation(cacheFind)")
     * This would normally be written with the package and annotation name:
     * @Around("@annotation(com.jt.anno.CacheFind)"). But reading the annotation's
     * attributes then requires walking reflection step by step
     * (class -> method -> annotation object), which is troublesome.
     * Optimization: let Spring inject the annotation directly as an advice parameter.
     */
    //@Around("@annotation(com.jt.anno.CacheFind)")
    //public Object around(ProceedingJoinPoint joinPoint){

    @Around("@annotation(cacheFind)")
    public Object around(ProceedingJoinPoint joinPoint, CacheFind cacheFind){
        Object result = null;
        try {
            //1. build the key under which Redis stores the data
            Object[] args = joinPoint.getArgs(); // the target method's arguments; there may be several
            String key = cacheFind.preKey() + "::" + Arrays.toString(args);

            //2. query Redis and check whether the data exists
            if(jedis.exists(key)){
                // Redis already holds a record, so the target method need not run
                String json = jedis.get(key);
                /*
                 * Converting the JSON back to an object:
                 * hard-coding Object.class here only converts simple data (arrays,
                 * collections, plain objects); nested data such as Map<K, Map<K, V>>
                 * fails during conversion.
                 * Fix: read the method's actual return type and convert straight to it.
                 * The return type can be obtained by reflection, or, as here, from the
                 * signature via the AOP API.
                 * Upcast: a parent reference points to a child object.
                 * Downcast: a child reference points to a parent object (an explicit cast is required).
                 * getReturnType() belongs to MethodSignature, a subtype of the signature
                 * interface, so the cast below is needed.
                 */
                MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature(); // import org.aspectj.lang.reflect, not a Spring package
                Class returnType = methodSignature.getReturnType(); // return type taken from the signature
                result = ObjectMapperUtil.toObject(json, returnType); // do not pass Object.class
                System.out.println("AOP: served the query from the Redis cache");
            }else{
                // the data does not exist; the database must be queried
                result = joinPoint.proceed(); // run the target method and any further advice (the target method queries the database)
                // save the query result into Redis
                String json = ObjectMapperUtil.toJSON(result);
                // does the data need a timeout?
                if(cacheFind.seconds() > 0){
                    // seconds attribute read from the annotation object
                    jedis.setex(key, cacheFind.seconds(), json); // > 0: store with an expiration
                }else {
                    jedis.set(key, json);
                }
                System.out.println("AOP: ran the target method and queried the database");
            }
        } catch (Throwable throwable) {
            throwable.printStackTrace();
        }
        return result;
    }
    /*
     * The introductory AOP demo from section 1.2.5 (pointCut(), before(), afterReturn(),
     * after() and the demo around() advice) is commented out here; it is identical to the
     * code shown above and omitted for brevity.
     */
}
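The aspect depends on ObjectMapperUtil, a small JSON helper from the com.jt.util package that is not shown in this lesson. A minimal sketch, assuming Jackson is on the classpath (only the method names match the calls above; the exception handling is illustrative):

package com.jt.util;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ObjectMapperUtil {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // serialize any object to a JSON string
    public static String toJSON(Object target) {
        try {
            return MAPPER.writeValueAsString(target);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // deserialize a JSON string back to the given type
    public static <T> T toObject(String json, Class<T> targetClass) {
        try {
            return MAPPER.readValue(json, targetClass);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}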

Access test:

1.3.5 Notes on around-advice parameters

Question 1: the join point must be the first parameter of the advice. This is mandatory; putting ProceedingJoinPoint anywhere but first causes an error at startup.
Question 2: can the other four advice types declare a ProceedingJoinPoint parameter?
Answer: no. ProceedingJoinPoint can only be used in around advice; declaring it elsewhere produces an error.

1.4 Caching the item list's category names (leaf categories)

The annotation and aspect can be used here directly: everything was written in the common module, so it is generally reusable.

/**
 * Business analysis: look up an item category's name by itemCatId
 * 1. url:       /item/cat/queryItemName
 * 2. parameter: {itemCatId: val}
 * 3. return:    the category name (String)
 */
@RequestMapping("/queryItemName")
@CacheFind(preKey = "ITEM_CAT_NAME") // placed on the controller method: every query is cached as soon as it arrives, one layer earlier than annotating the service
public String findItemCatName(Long itemCatId){
    return itemCatService.findItemCatNameById(itemCatId);
}

Test:

2. Redis configuration notes

2.1 Redis persistence strategies

2.1.1 Why persistence is needed

Problem: Redis stores its data in memory, and if the power goes out, the data in memory is lost. To keep users' data safe, a persistence mechanism must be enabled.

Explanation: Redis supports persistence by default: while it holds data, it periodically saves that data to disk. When the Redis server restarts, it reads the persistence file specified by the configuration and restores the in-memory data.

What persistence is: periodically saving the in-memory data to disk.

2.1.2 Persistence modes in Redis

Explanation: Redis offers 2 main persistence modes.
Mode 1: RDB, persistence file dump.rdb (the default mode).
Mode 2: AOF, persistence file appendonly.aof (off by default; must be enabled manually).

2.1.3 RDB mode

Characteristics:
1. RDB is Redis's default persistence strategy.
2. RDB persists periodically, so data written since the last snapshot can be lost.
3. RDB takes a snapshot of the in-memory data, and each new snapshot overwrites the previous one, so the persistence file stays small, recovery is fast, and it works efficiently.

Commands:
The user can trigger persistence explicitly with a command (from inside the redis client):
1). save is synchronous: Redis persists immediately, and clients may block while it runs.
2). bgsave is asynchronous: persistence runs in the background (in a forked child process), so it does not affect users, but there is no guarantee it executes immediately.
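For example, in redis-cli (these are the standard replies for the two commands):

127.0.0.1:6379> save
OK
127.0.0.1:6379> bgsave
Background saving started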

To configure:
1). Open the Redis configuration file (redis.conf).
2). Show line numbers in vim: :set nu
3). The default persistence file is named dump.rdb and can be renamed; search the configuration file with :/rdb to find the keyword.
4). Location of the persistence file:
dir ./  (relative path: the current working directory)
dir /usr/local/src/redis  (absolute path)
5). The default rules:
within 900 seconds, if there is at least 1 update, persist once;
within 300 seconds, if there are at least 10 updates, persist once;
within 60 seconds, if there are at least 10000 updates, persist once.
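These rules correspond to the default save directives in redis.conf:

save 900 1
save 300 10
save 60 10000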

Question: with save 1 1, could Redis persist once per second?
Answer: no. save blocks, and persisting once per second would drastically reduce access speed; the performance cost is far too high.

2.1.4 AOF mode (append-only)

Characteristics:

  1. AOF mode is off by default and must be enabled manually.
  2. AOF works asynchronously and records the user's operations themselves, so it keeps data loss to a minimum.
    "Records the operations" means the persistence file logs the steps performed, e.g. set a a, set b b.
  3. Because AOF records the whole history of operations, its persistence file is comparatively large, data recovery is slow, and efficiency is lower; the file needs to be rewritten and compacted periodically (e.g. a 5 GB log rewritten down to 100 MB recovers far faster).

To configure:
1. Enable AOF in the redis.conf configuration file.
Quick search keyword: /appendonly
Note: once AOF is enabled, the RDB and AOF modes coexist, and AOF takes precedence (its file is used when recovering data).

2. After editing the configuration, shut the server down and restart it for the change to take effect.

3. AOF persistence policies (around line 728 of redis.conf):

appendfsync always: sync after every write operation (every set is persisted at once)
appendfsync everysec: sync once per second
appendfsync no: never sync proactively (left to the operating system)
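Put together, the relevant redis.conf lines for a typical AOF setup look like this (everysec is the usual compromise between safety and speed):

appendonly yes
appendfsync everysec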

2.1.5 Summary: when to use RDB vs AOF

1. If a small amount of data loss is acceptable, choose RDB (it is fast).
2. If no data loss is acceptable, choose AOF.
3. To get both efficiency and safety, configure a Redis cluster: typically the master enables RDB and the slaves enable AOF, which keeps the data safe while staying fast.

2.2 Redis memory optimization

2.2.1 Background

Explanation: Redis stores data in memory; if data keeps flowing in, memory will eventually overflow.
Solutions:

  1. Add timeouts to data stored in Redis wherever possible.
  2. Use eviction algorithms to clear out old data.

2.2.2 LRU Algorithm

LRU stands for Least Recently Used. It is a common page-replacement algorithm: evict the page that has gone unused for the longest time. The algorithm gives each page an access field recording the time t since the page was last visited; when a page must be evicted, the one with the largest t, i.e. the least recently used page, is chosen.
Dimension: time T.
LRU is currently among the best algorithms for this kind of memory cleanup.
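For intuition, a minimal LRU cache sketch in Java built on LinkedHashMap's access-order mode (illustrative only; Redis itself uses an approximated, sampling-based LRU rather than an exact list):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: every get() moves the entry to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the head: the least recently used entry
    }
}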

2.2.3 LFU Algorithm

LFU (least frequently used page-replacement algorithm): at replacement time, evict the page with the lowest reference count, since frequently used pages should accumulate large counts. But some pages are used heavily at first and never again, and such pages would linger in memory; so the reference-count register can be shifted right one bit at fixed intervals, producing an exponentially decaying average usage count.
Dimension: reference count.

2.2.4 Random Algorithm

Evict data at random.

2.2.5 TTL Algorithm

Explanation: monitor the remaining time-to-live and evict the data whose remaining lifetime is shortest ahead of time.

2.2.6 Redis memory optimization policies

1. volatile-lru: apply the LRU algorithm among keys with a timeout set.
2. allkeys-lru: apply the LRU algorithm across all keys.
3. volatile-lfu: apply the LFU algorithm among keys with a timeout set.
4. allkeys-lfu: apply the LFU algorithm across all keys.
5. volatile-random: evict random keys among those with a timeout set.
6. allkeys-random: evict random keys across all keys.
7. volatile-ttl: evict the keys with the shortest remaining time-to-live.
8. noeviction: evict nothing; when memory overflows, writes return an error (this is the default policy).
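The policy is chosen in redis.conf with the maxmemory-policy directive, e.g. (the memory limit shown is an illustrative value):

maxmemory 2gb
maxmemory-policy volatile-lru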

3. Redis cache interview questions

Problem statement: under massive request volume, a failure of the Redis server can bring the whole system down.
Rough throughput figures:

  1. Tomcat: 150-250 requests/second, roughly 1000/second after JVM tuning.
  2. Nginx: 30,000-50,000 requests/second.
  3. Redis: about 112,000 reads/second and 86,000 writes/second, roughly 100,000/second on average.

3.1 Cache penetration

Problem: in a high-concurrency environment, users request data that does not exist in the database at all, so it can never be cached and every request falls through to the database. This is cache penetration.
Solution: apply IP rate limiting, either in Nginx or, in a microservice architecture, at the API gateway.

3.2 Cache breakdown

Problem: in a high-concurrency environment, a piece of hot data that used to live in memory becomes invalid for some special reason (its timeout expired, or it was deleted by mistake), so the Redis cache misses and a flood of user requests hits the database directly.
Colloquially: kicking it while it's down.
Solutions:
1. When setting timeouts, do not give keys the same expiration time.
2. Set up a multi-level cache.

3.3 Cache avalanche

Explanation: under high concurrency, a large fraction of the cached data becomes invalid at once, the Redis hit rate collapses, and users' requests reach the database (server) directly, causing a crash. This is a cache avalanche.
Solutions:
1. Do not use identical timeouts; add a random number to each expiration.
2. Set up a multi-level cache.
3. Raise the Redis cache hit rate by tuning the memory optimization policy, e.g. use an LRU-based policy.
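A minimal sketch of the randomized-timeout idea using Jedis (the class and the 10% jitter bound are illustrative assumptions, not project code):

import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCacheWriter {

    private final Jedis jedis;

    public JitteredCacheWriter(Jedis jedis) {
        this.jedis = jedis;
    }

    // Spread expirations out so many keys do not expire at the same moment.
    public void setWithJitter(String key, String json, int baseSeconds) {
        // add up to ~10% random jitter (assumed bound; tune as needed)
        int jitter = ThreadLocalRandom.current().nextInt(baseSeconds / 10 + 1);
        jedis.setex(key, baseSeconds + jitter, json);
    }
}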

4. The Redis sharding mechanism

4.1 Why sharding is needed

Explanation:
Storing massive amounts of data in Redis clearly cannot be done with a single instance, and blindly enlarging one instance's memory does not meet the need either, because too much time is wasted on addressing (i.e. the larger the memory, the slower data lookups become).

Question: how can massive amounts of data be stored effectively?
Answer: with a sharding mechanism.

4.2 Aside: how to increase Redis's memory

(Screenshots omitted; in redis.conf the memory limit is set with the maxmemory directive.)

4.3 What Redis sharding is

Explanation: several Redis instances are generally used together, each holding part of the users' data, thereby expanding the available memory.
From the user's point of view: the shards are treated as a single whole; the user does not care where the data is stored, only whether it can be stored.

Summary:
1). The main purpose of sharding: memory expansion.
2). The three Redis instances hold different data.

4.4 Building the Redis shards

4.4.1 Notes before building

Explanation: Redis starts from a configuration file, so preparing 3 Redis instances requires 3 configuration files (copies of redis.conf).
Port numbers: 6379 / 6380 / 6381.

4.4.2 Build steps

1). Shut down the original Redis instance.
2). Create a shard directory for easier management: mkdir shards
3). Copy the configuration file into the shard directory 3 times.
4). Edit the port number in the copies: 6380 and 6381.
5). Start the instances:

redis-server 6379.conf
redis-server 6380.conf
redis-server 6381.conf

6). Verify that the servers are running properly: ps -ef | grep redis
7). Note: right now the 3 Redis instances share the same persistence file name, so all of them write their persistence to the same file; if the 3 instances were restarted, they would all read that same file and end up with identical data. Under real conditions the 3 persistence file names must be made distinct. Since caching is the actual topic and this setup is only a transition, we skip that change to save time, as long as the instances are not restarted.
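If the persistence files were separated, each copied configuration would get its own port and file name, roughly like this (file names assumed for illustration):

# 6380.conf
port 6380
dbfilename dump-6380.rdb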

4.4.3 Notes on sharding

1. Problem description
After starting multiple Redis servers, the instances have no connection to one another yet; each is an independent entity that stores data independently (that is, every Redis server could hold the same data).

2. When sharding is driven from a program, the 3 Redis instances are treated as one whole, which is completely different from the situation above: a given key will not be saved to several Redis instances at the same time.

4.4.4 Redis sharding: introductory case

/**
 * Tests the Redis sharding mechanism.
 * Idea: the user operates the 3 Redis instances through a single API and does not
 * need to care how the data is distributed, only whether it can be stored.
 * To think about: which Redis instance does "2005" land on (developers do need to
 * know), and how does sharding decide where data is stored?
 */
@Test
public void testShards(){
    // collect the 3 Redis nodes into a list
    List<JedisShardInfo> list = new ArrayList<>();
    list.add(new JedisShardInfo("192.168.126.129", 6379)); // ip + redis port
    list.add(new JedisShardInfo("192.168.126.129", 6380));
    list.add(new JedisShardInfo("192.168.126.129", 6381));
    // build the sharded client
    ShardedJedis shardedJedis = new ShardedJedis(list);
    shardedJedis.set("2005", "redis sharding study");
    System.out.println(shardedJedis.get("2005"));
}

Question: which specific Redis server in the shard group did the data land on?
Inspecting the instances shows the value was stored on 6381. How is that controlled? Through the consistent hash algorithm.

Note: sharding uses the consistent hash algorithm.

4.5 The consistent hash algorithm

4.5.1 Introduction

The consistent hash algorithm was proposed at MIT in 1997. It is a special hash algorithm whose purpose is to solve distributed caching problems [1]: when a server is removed or added, it changes the existing mapping between service requests and the servers handling them as little as possible. Consistent hashing addresses the dynamic scaling problems that simple hash algorithms suffer from in distributed hash tables (DHT) [2].
4.5.2 Common sense Introduction

common sense 1: General hash yes 8 position 16 Hexadecimal number . How many possibilities are there : 0-9 A-F (24)8 = 2^32
common sense 2: If you do the same data hash operation The result must be the same .
common sense 3: A data 1M And data 1G Of hash The speed of operation is the same .

4.5.3 How consistent hashing works

Steps:
1. First compute the node positions (hash ip+port to place the 3 Redis nodes on the ring).
2. Hash the user's key, then move clockwise to the nearest node, attach to it, and perform the set operation there.
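A minimal, self-contained hash-ring sketch in Java (the MD5-based hash and the virtual-node naming are illustrative choices, not what ShardedJedis actually uses; the virtual nodes discussed under balance below are already included):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {

    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    // place a physical node (e.g. "192.168.126.129:6381") on the ring several times
    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    // hash the key, then walk clockwise to the nearest node
    public String route(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long nodeHash = tail.isEmpty() ? ring.firstKey() : tail.firstKey(); // wrap around the ring
        return ring.get(nodeHash);
    }

    // first 8 bytes of an MD5 digest, packed into a long
    private long hash(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF);
            }
            return h;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}

With the three nodes added, the same key always routes to the same instance, and removing one node only remaps the keys that lived on it.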

4.5.4 Property 1: balance

Concept: balance means the hash results should be spread across the nodes roughly evenly, which solves load balancing at the algorithm level [4] (roughly, not perfectly, even).
Problem: node placement depends entirely on how the nodes hash, so the nodes can end up clustered on the ring, causing severe load imbalance.
Solution: introduce virtual nodes (each physical node is mapped onto the ring many times, as in the sketch above).

4.5.5 Property 2: monotonicity

Monotonicity means that adding or deleting nodes does not disrupt the normal operation of the system, because data can be migrated automatically.
Principle: during migration, as little of the existing data as possible should move.

4.5.6 Property 3: dispersion

Proverb: don't put all your eggs in one basket.
Dispersion means data should be stored spread across the nodes of the distributed cluster (each node can itself have backups); it is not necessary for every node to store all of the data.

Copyright: author[Cool breeze AAA]. Please include the original link when reprinting. https://en.javamana.com/2022/02/202202130743201545.html