How does Redis delete a key once its configured expiration time is reached?
The data has clearly expired, so why is it still using memory?
If Redis may only use 10G and you write 20G of data into it, what happens? Which data gets evicted?
Redis key expiration strategies
Periodic deletion + lazy deletion.
How Redis removes expired keys (for example a key written with a TTL: SET name xx EX 3600):
Periodic deletion:
At intervals, Redis randomly samples some keys, checks whether they have expired, and deletes the ones that have.
Periodic deletion can leave many expired keys undeleted for a long time, so how is that handled? That is what lazy deletion is for.
Lazy deletion:
Concept: a key is discovered to be expired, and actively removed, when some client tries to access it.
The key is simply allowed to expire; every time a key is fetched from the keyspace, Redis checks whether it has expired and, if so, deletes it.
The Redis server actually uses both strategies, lazy deletion and periodic deletion, together: by combining them, the server strikes a good balance between CPU time and wasted memory.
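The two strategies can be sketched in a small Java toy (illustrative only, not Redis's actual implementation; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

// Toy cache combining lazy deletion (check on access) with
// periodic deletion (random sampling), as Redis does for expired keys.
public class ExpiringCache {
    private static class Entry {
        final String value;
        final long expireAtMillis; // absolute expiration timestamp
        Entry(String value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final Random random = new Random();

    public void set(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Lazy deletion: the expiration check happens on access.
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expireAtMillis) {
            store.remove(key); // found expired on access -> delete now
            return null;
        }
        return e.value;
    }

    // Periodic deletion: sample up to sampleSize random keys and delete
    // the expired ones; meant to be invoked from a timer.
    public int sweep(int sampleSize) {
        List<String> keys = new ArrayList<>(store.keySet());
        int removed = 0;
        long now = System.currentTimeMillis();
        for (int i = 0; i < sampleSize && !keys.isEmpty(); i++) {
            String key = keys.get(random.nextInt(keys.size()));
            Entry e = store.get(key);
            if (e != null && now >= e.expireAtMillis) {
                store.remove(key);
                removed++;
            }
        }
        return removed;
    }
}
```

Note how `get` never returns an expired value even if no sweep has run yet; the sweep only exists to reclaim memory for keys nobody touches.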
Problem
What happens if periodic deletion misses a large number of expired keys, and they are never accessed again, so lazy deletion never runs either?
Large numbers of expired keys pile up in memory and can exhaust Redis's memory, at which point the memory eviction mechanism has to take over.
Designing cache middleware: you can borrow Redis's approach to key expiration and to eviction when memory runs out.
When memory runs out - Redis key memory eviction policies
Background
After the memory occupied by Redis exceeds the configured maxmemory,
the maxmemory-policy setting determines whether, and how, Redis frees memory.
Several policies are provided.
Policies
volatile-lru (least recently used) (the most commonly used)
LRU algorithm: among the keys with an expiration time set, evict the key-value pair that has been idle the longest;
volatile-lfu (least frequently used)
LFU algorithm: among the keys with an expiration time set, evict the key-value pair used least frequently over a recent period;
volatile-ttl (evict the soonest to expire)
Among the keys with an expiration time set, evict the key-value pair with the earliest expiration time (evict the soonest to expire);
volatile-random (randomly evict among expiring keys)
Among the keys with an expiration time set, pick keys at random to evict;
allkeys-lru
LRU algorithm: among all keys, evict the key-value pair that has been idle the longest;
allkeys-lfu
LFU algorithm: among all keys, evict the key-value pair used least frequently over a recent period;
allkeys-random
Among all keys, pick keys at random to evict;
noeviction (the default)
Do no eviction at all: once Redis memory exceeds the limit, all write operations return an error, but reads still proceed normally;
When setting configuration with CONFIG SET, keys written with an underscore _ must use a hyphen - instead:
127.0.0.1:6379> config set maxmemory_policy volatile-lru
(error) ERR Unsupported CONFIG parameter: maxmemory_policy
127.0.0.1:6379> config set maxmemory-policy volatile-lru
OK
Implementing the LRU algorithm in Java

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    /**
     * @param capacity how many entries the cache may hold
     */
    public LRUCache(int capacity) {
        // accessOrder = true: iteration order follows access order,
        // so the eldest entry is always the least recently used one
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    /**
     * If the map holds more entries than the configured maximum capacity,
     * return true so the eldest entry is removed when a new one is added.
     *
     * @param eldest the eldest (least recently used) entry
     * @return true to remove the eldest entry
     */
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Once the map exceeds the configured cache size,
        // the oldest entry is removed automatically
        return size() > capacity;
    }
}
```
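A quick usage example of the cache above (the class is repeated inside the snippet so it compiles on its own): with capacity 3, adding a fourth entry evicts the least recently used key.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    // Same LRUCache as above, repeated so this snippet is self-contained.
    static class LRUCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;
        LRUCache(int capacity) { super(capacity, 0.75f, true); this.capacity = capacity; }
        @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }

    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // touch "a": now "b" is the least recently used
        cache.put("d", 4); // exceeds capacity 3 -> "b" is evicted
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.keySet());         // [c, a, d]
    }
}
```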
Redis 6.x persistence configuration: introduction, and RDB explained
Introduction to Redis persistence
Redis is an in-memory database; if persistence is not configured, all data is lost when Redis restarts.
So Redis persistence is enabled to save data to disk; after Redis restarts, the data can be recovered from disk.
Two ways to persist
RDB (Redis DataBase)
AOF (Append Only File)
Introduction to RDB persistence
Writes a snapshot of the in-memory dataset to disk at a specified interval
The default filename is dump.rdb
Snapshot generation
save
Blocks the current Redis server: while the save command executes, Redis cannot process other commands until the RDB process completes
bgsave
Forks a child process which takes charge of the RDB persistence; the snapshot runs asynchronously in the background, and Redis can still respond to client requests while snapshotting
Automatic
Done through the configuration file, which sets the conditions that trigger Redis's RDB persistence
For example "save m n" means: if the dataset receives n modifications within m seconds, bgsave is triggered automatically
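In a stock redis.conf these trigger conditions look like this (the three save lines below are the long-standing defaults; verify against your own config):

```conf
# Automatically run bgsave when, within <seconds>, at least <changes> writes occurred
save 900 1        # after 900 s if at least 1 key changed
save 300 10       # after 300 s if at least 10 keys changed
save 60 10000     # after 60 s if at least 10000 keys changed
dbfilename dump.rdb
```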
Master-slave architecture
When a slave server synchronizes data, it sends sync to perform the synchronization, and the master server executes bgsave
Advantages
RDB files are compact, full backups, well suited to backup and disaster recovery
Restoring a large dataset is faster than restoring from AOF
What is produced is a compact, compressed binary file
Disadvantages
Every snapshot is a full backup; forking a child process to do the background work carries its own overhead
Data modified while the snapshot is being persisted is not saved, so data may be lost
On restart, Redis reloads dump.rdb to recover the data
Redis 6.x persistence configuration: AOF introduction and configuration in practice
Introduction to AOF persistence
append only file: an append-only file format; the file is easy to read
Each write is recorded as a separate log entry; on restart, the commands in the AOF file are re-executed to restore the data
If the server goes down mid-write, the earlier data is unaffected, and the file can be checked and repaired with redis-check-aof
Configuration in practice
appendonly yes (not enabled by default)
The AOF filename is set with appendfilename; the default filename is appendonly.aof
The storage path is the same as for RDB persistence, configured with dir
Core principle
Redis appends every write command to aof_buf (a buffer)
The AOF buffer is synced to disk according to the configured policy
High-frequency AOF syncing has a performance impact, especially when flushing on every write
Three sync modes are provided, balancing performance against safety
appendfsync always
Writes to the AOF file on every data change; high performance cost
appendfsync everysec
Syncs once per second; this is the default AOF policy
appendfsync no
Redis does not fsync itself; the operating system decides when to flush to disk; best performance, but least safe
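Put together, a minimal AOF section of redis.conf covering the settings above might look like this (the directive names are the real ones; the values are just examples):

```conf
appendonly yes                   # enable AOF (off by default)
appendfilename "appendonly.aof"  # default AOF filename
appendfsync everysec             # one of: always | everysec | no
```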
On restart, Redis loads appendonly.aof
Redis 6.x persistence configuration: choosing between AOF and RDB
Redis offers different persistence options:
RDB persistence performs point-in-time snapshots of the dataset at specified intervals.
AOF persistence logs every write operation received by the server; the log is read again at server startup to rebuild the original dataset. Commands are recorded append-only, in the same format as the Redis protocol itself; when the file grows too large, Redis can rewrite it
Supplementing the earlier configuration
auto-aof-rewrite-min-size
The minimum AOF file size for a rewrite: the AOF file is only rewritten once it is larger than this value; the 6.x default is 64mb.
auto-aof-rewrite-percentage
A rewrite is triggered when the growth of the current AOF file size over the size after the last rewrite reaches the specified percentage; e.g. 100 means a rewrite happens once the current AOF file has doubled since the last rewrite.
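As a worked example (an illustrative calculation, not Redis source code): with the 6.x defaults of 64mb and 100, a rewrite only fires once the AOF file both exceeds 64mb and has at least doubled since the last rewrite.

```java
public class AofRewriteTrigger {
    static final long MIN_SIZE = 64L * 1024 * 1024; // auto-aof-rewrite-min-size 64mb
    static final int GROWTH_PCT = 100;              // auto-aof-rewrite-percentage 100

    // True when the current AOF size passes both thresholds.
    static boolean shouldRewrite(long currentSize, long sizeAfterLastRewrite) {
        if (currentSize < MIN_SIZE) return false;   // still below the minimum size
        long growthPct = (currentSize - sizeAfterLastRewrite) * 100 / sizeAfterLastRewrite;
        return growthPct >= GROWTH_PCT;             // grown enough since last rewrite?
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        System.out.println(shouldRewrite(80 * mb, 50 * mb));  // grew 60%  -> false
        System.out.println(shouldRewrite(128 * mb, 60 * mb)); // grew ~113% -> true
    }
}
```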
RDB advantages and disadvantages
Advantages:
RDB maximizes Redis performance; the parent process does not take part in disk I/O
RDB files are compact, full backups, well suited to backup and disaster recovery
Restoring a large dataset is faster than restoring from AOF
What is produced is a compact, compressed binary file
Disadvantages:
If you need to minimize the chance of data loss when Redis stops working (for example after a power failure), RDB is a poor fit
RDB regularly needs to fork a child process to persist to disk; if the dataset is large, the fork can be very time-consuming
AOF advantages and disadvantages
Advantages:
Data is safer
When the Redis AOF file grows too large, Redis can automatically rewrite the AOF in the background
AOF is in an easy-to-understand, easy-to-parse format: a log of all operations, one after another
Disadvantages:
AOF files are usually larger than the equivalent RDB file for the same dataset
Depending on the exact fsync policy, recovering from AOF may be slower than from RDB
What should we do in production?
Use RDB persistence and AOF persistence together
If the data in Redis is not particularly sensitive, or can be regenerated in some other way,
the cluster can switch off AOF persistence and ensure availability through cluster backups
Define your own policy and check on Redis regularly; you can then trigger backups and rewrites manually;
Use clustering together with master-slave replication
Starting with Redis 4.0, rewrite supports a mixed mode
That is, rdb and aof are used together
The rewrite writes the RDB snapshot directly, as binary content, at the start of the AOF file; RDB is binary, so it is very small
Subsequent writes continue to be appended to the file as the original commands, until the next time the file grows too large and is rewritten
Enabled by default
Benefits
Mixed persistence combines the advantages of RDB persistence and AOF persistence: the small RDB file is used for fast disaster recovery
Combined with AOF, incremental data is saved the AOF way, so less data is lost
Drawback
The first part of the file is in RDB format, which is binary, so readability is poor
Data recovery
First check whether an AOF file exists; if it does, recover from the AOF file first, since the AOF is more complete than the RDB, and after a rewrite the AOF file's prefix is itself in RDB binary format
If no AOF file exists, look for an RDB file
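The recovery order above can be sketched as a tiny decision helper (illustrative only; the file names are the Redis defaults):

```java
import java.io.File;

public class RecoverySource {
    // Mirrors the startup decision described above: prefer AOF, fall back to RDB.
    static String choose(boolean aofExists, boolean rdbExists) {
        if (aofExists) return "appendonly.aof";
        if (rdbExists) return "dump.rdb";
        return "none";
    }

    public static void main(String[] args) {
        File dir = new File(".");
        boolean aof = new File(dir, "appendonly.aof").exists();
        boolean rdb = new File(dir, "dump.rdb").exists();
        System.out.println("recover from: " + choose(aof, rdb));
    }
}
```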
Copyright: author [You and other students]. Please include the original link when reprinting, thank you.
https://en.javamana.com/2022/02/202202130808514089.html