4. Parsing the Configuration File






4.2 Units

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
#
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
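Parsing these unit suffixes can be sketched in a few lines of Python (an illustrative helper, not code taken from Redis):

```python
# Sketch of redis.conf memory-size parsing: case-insensitive suffixes,
# where "k"/"m"/"g" are powers of 10 and "kb"/"mb"/"gb" are powers of 2.
UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "kb": 1024, "mb": 1024**2, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    """Parse strings like '1k', '5GB', '4M' into a byte count."""
    s = value.strip().lower()
    digits = s.rstrip("kmgb")      # numeric part
    suffix = s[len(digits):]       # unit part, possibly empty
    return int(digits) * UNITS[suffix]

print(parse_memory("1k"))    # 1000
print(parse_memory("1KB"))   # 1024
print(parse_memory("1gB"))   # 1073741824 (same as 1Gb or 1GB)
```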




4.3 Includes

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
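The "last processed line wins" behavior that makes include placement matter can be simulated with a toy parser (illustrative only, not Redis's real config loader):

```python
# Minimal sketch: Redis keeps the LAST value seen for a directive, so the
# position of an include decides who overrides whom.
def apply_lines(config: dict, lines: list) -> None:
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        config[key] = value  # later lines overwrite earlier ones

base = ["maxmemory 100mb"]    # directives in the main redis.conf
local = ["maxmemory 200mb"]   # directives in an included file

cfg = {}
apply_lines(cfg, local + base)  # include placed at the top of the file
print(cfg["maxmemory"])         # 100mb: the main file wins

cfg = {}
apply_lines(cfg, base + local)  # include placed as the last line
print(cfg["maxmemory"])         # 200mb: the include overrides
```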



4.4 Network

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1







# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
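The protected-mode rules above can be modeled as a small decision function (a simplified sketch of the documented behavior, not Redis's actual implementation):

```python
# Simplified model of protected mode: when it is enabled and the server
# has neither an explicit bind list nor a password, only loopback and
# Unix-socket clients are accepted.
def connection_allowed(protected_mode: bool, has_bind: bool,
                       has_password: bool, client_addr: str) -> bool:
    if not protected_mode or has_bind or has_password:
        return True
    return client_addr in ("127.0.0.1", "::1", "unixsocket")

print(connection_allowed(True, False, False, "10.0.0.5"))    # False
print(connection_allowed(True, False, False, "127.0.0.1"))   # True
print(connection_allowed(True, False, True, "10.0.0.5"))     # True
```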



# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

Accept connections on the specified port; the default is 6379 (IANA #815344). If port 0 is specified, Redis will not listen on a TCP socket.


# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511



The backlog is the sum of two queues: connections that have not yet completed the three-way handshake, plus connections that have completed it and are waiting to be accepted.

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0



unixsocket /tmp/redis.sock

unixsocketperm 700


timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

TCP keepalive:

If non-zero, SO_KEEPALIVE is used to send TCP ACKs to clients in the absence of communication. This is useful for two reasons:

(1) Detecting dead peers.

(2) Keeping the connection alive from the point of view of network equipment in the middle. On Linux, the specified value (in seconds) is the period used to send the ACKs. Note that closing the connection takes twice that time. On other kernels the period depends on the kernel configuration.

A reasonable value for this option is 300 seconds, which is the new Redis default starting with Redis 3.2.1.

tcp-keepalive 300

4.5 General

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes


# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options: 
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.

If you run Redis from upstart or systemd, Redis can interact with your supervision tree:

supervised no: no supervision interaction

supervised upstart: signal upstart by putting Redis into SIGSTOP mode

supervised systemd: signal systemd by writing READY=1 to $NOTIFY_SOCKET

supervised auto: detect the upstart or systemd method based on the UPSTART_JOB or NOTIFY_SOCKET environment variables

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice




verbose (many rarely useful messages, but not as noisy as the debug level)




# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""


# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no


The default is: syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis


# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0


# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

Set the number of databases. The default database is DB 0; you can select a different one on a per-connection basis using SELECT <dbid>, where dbid is a number between 0 and 'databases'-1.


# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes


However, by setting the following option to yes it is possible to force the pre-4.0 behavior and always show the ASCII art logo in the startup logs.

Default: always-show-logo no


4.6 Snapshotting

# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""


save <seconds> <changes>








 save ""


save 900 1
save 300 10
save 60 10000
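The three save lines combine with OR semantics; the trigger check can be sketched as follows (an illustrative model, not Redis's actual code):

```python
# Illustrative check of RDB save points: a snapshot is triggered when,
# for ANY "save <seconds> <changes>" rule, at least <changes> writes
# happened and at least <seconds> seconds passed since the last save.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_bgsave(elapsed_seconds: int, dirty_keys: int) -> bool:
    return any(elapsed_seconds >= secs and dirty_keys >= changes
               for secs, changes in SAVE_POINTS)

print(should_bgsave(70, 10000))   # True  (the 60-sec / 10000-change rule)
print(should_bgsave(70, 9999))    # False (not enough changes yet)
print(should_bgsave(901, 1))      # True  (the 900-sec / 1-change rule)
```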

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
# If the background saving process will start working again Redis will
# automatically allow writes again.
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes





stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes


This controls whether compression is enabled when dumping/backing up the .rdb file. Enabling compression costs some CPU, but there is no need to save it; the overhead is small.

# The filename where to dump the DB
dbfilename dump.rdb


# The working directory.
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# The Append Only File will also be created inside this directory.
# Note that you must specify a directory here, not a file name.
dir ./

Set the working directory: the DB will be written inside this directory, with the filename specified above using the 'dbfilename' configuration directive. The Append Only File will also be created inside this directory. Note that you must specify a directory here, not a file name.




4.7 Replication

+------------------+      +---------------+
|      Master      | ---> |    Replica    |
| (receive writes) |      |  (exact copy) |
+------------------+      +---------------+

  • Redis uses asynchronous replication. Starting from Redis 2.8, replicas report the progress of the replication stream to the master once per second.

  • A master can have multiple replicas.

  • Not only can a master have replicas: replicas can have replicas of their own, and multiple replicas can form a graph-like structure.

  • Replication does not block the master: the master can keep processing commands even while one or more replicas perform their initial synchronization.

  • Replication does not block replicas either: given the appropriate settings in redis.conf, a replica can serve queries with its old dataset while the initial synchronization is in progress.

    However, during the window in which the replica deletes the old dataset and loads the new one, incoming requests are blocked.

    You can also configure a replica to return an error to clients whenever its link to the master is down.

  • Replication can be used purely for data redundancy, or to improve scalability by letting multiple replicas serve read-only commands: heavy SORT key [BY pattern] [LIMIT offset count] [GET pattern [GET pattern ...]] [ASC|DESC] [ALPHA] [STORE destination] invocations, for example, can be handed off to replicas.

  • Replication can also spare the master from persistence work: simply disable persistence on the master and let a replica perform it instead.

4.7.1 Data safety of replication when persistence is turned off on the master

When configuring Redis replication, it is strongly recommended to enable persistence on the master. Otherwise the deployment should at least avoid restarting the process automatically after a crash, because of the following failure mode:


  1. Suppose node A is the master and has persistence turned off, and nodes B and C replicate from node A.

  2. Node A crashes and is restarted by an auto-restart service. Because persistence was disabled on A, it comes back up with no data at all.

  3. Nodes B and C then replicate from node A; since A's dataset is empty, they delete their own copies of the data.

Turning off persistence on the master while also enabling automatic restarts is very dangerous even if Sentinel is used for high availability: the master may come back up so quickly that Sentinel never detects the restart within the configured heartbeat interval, and the data-loss sequence above still plays out.


4.7.2 How replication works

Whether it is an initial connection or a reconnection, whenever a replica is set up it sends a SYNC command to the master.

On receiving SYNC, the master starts a BGSAVE and, while the save is running, buffers every newly executed write command.

When BGSAVE completes, the master sends the resulting .rdb file to the replica, which receives it and loads its data into memory.

The master then sends everything accumulated in the write-command buffer to the replica, encoded in the Redis command protocol.

You can observe this synchronization yourself with telnet: connect to a Redis server that is serving requests, send it the SYNC command, and after a moment the telnet session will receive a large chunk of data (the .rdb file), followed by every write command executed on the server being relayed to the session.

Even if several replicas send SYNC at the same time, the master only needs to run BGSAVE once to serve all of their synchronization requests.

A replica reconnects automatically when the link to the master is lost. Before Redis 2.8, a reconnecting replica always performed a full resynchronization; since Redis 2.8, the replica can choose between a full resynchronization and a partial resynchronization depending on the master's state.

4.7.3 Partial resynchronization

Starting with Redis 2.8, after a brief network failure the master and the replica can try to continue the existing replication process, rather than necessarily performing a full resynchronization.

This feature requires the master to keep an in-memory backlog of the replication stream being sent, and both the master and all replicas to record a replication offset and a master run ID. When the connection drops, the replica reconnects and asks the master to continue the original replication process:

  • If the master run ID recorded by the replica matches the ID of the master it is connecting to, and the data at the replica's recorded offset is still held in the master's replication backlog, the master sends the replica just the data it missed while disconnected, and replication continues.

  • Otherwise, the replica performs a full resynchronization.

    Partial resynchronization in Redis 2.8 relies on a new internal command, PSYNC master_run_id offset; versions before 2.8 only have SYNC. As long as the replica runs Redis 2.8 or later, it decides which command to use based on the master's version:

  • If the master is Redis 2.8 or later, the replica synchronizes with PSYNC master_run_id offset.

  • If the master is older than Redis 2.8, the replica synchronizes with SYNC.
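The master-side decision between partial and full resynchronization can be sketched like this (a simplified model of the behavior described above; the function and parameter names are illustrative):

```python
# Sketch of the partial-resync decision: continue from the replica's
# offset only if the run ID matches and the offset is still covered by
# the master's in-memory replication backlog.
def resync_kind(master_run_id: str, backlog_start: int, backlog_end: int,
                replica_run_id: str, replica_offset: int) -> str:
    if (replica_run_id == master_run_id
            and backlog_start <= replica_offset <= backlog_end):
        return "partial"   # send only the bytes missed while disconnected
    return "full"          # BGSAVE + transfer of the whole RDB file

print(resync_kind("abc", 1000, 5000, "abc", 4200))  # partial
print(resync_kind("abc", 1000, 5000, "abc", 500))   # full (offset aged out)
print(resync_kind("abc", 1000, 5000, "xyz", 4200))  # full (different master)
```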

4.7.4 Configuration

Configuring a replica is very simple: just add the following line to the configuration file:

slaveof 192.168.1.1 6379

Of course, replace 192.168.1.1 6379 with the IP address and port of your own master.

Another method is to call the SLAVEOF host port command with the master's IP and port, after which synchronization starts:

> SLAVEOF 192.168.1.1 10086

4.7.5 Read-only replicas

Starting with Redis 2.6, replicas support a read-only mode, and it is the default mode for replicas.

Read-only mode is controlled by the slave-read-only option in redis.conf, and it can be turned on and off at runtime with CONFIG SET parameter value.

A read-only replica rejects all write commands, so data can never be written to a replica by mistake.

Even on a read-only replica, administrative commands such as DEBUG and CONFIG remain available, so the server still should not be exposed to the internet or to any untrusted network. However, using the command-renaming options in redis.conf, security on a read-only replica can be improved by forbidding certain commands.

You may wonder why a replica should ever be writable at all, given that any data written to it will be overwritten by the next resynchronization, or may be lost when the replica restarts.

The reason is that unimportant, ephemeral data can still usefully be stored on a replica. For example, clients can store the master's reachability information on a replica in order to implement failover strategies.

4.7.6 Replica-related configuration

If the master requires a password through the requirepass option, the replica must be configured with the corresponding authentication for synchronization to proceed.

On a running server, this can be done from a client:

config set masterauth <password>

To set the password permanently, add it to the configuration file:

masterauth <password>

A few more options concern the replication-stream backlog that the master uses for partial resynchronization; see the redis.conf example file shipped with the Redis source for details.

4.7.7 Allowing the master to perform writes only with at least N replicas

Starting with Redis 2.8, to improve data safety, the master can be configured to execute write commands only when at least N replicas are currently connected.

However, because Redis uses asynchronous replication, there is no guarantee that a write sent by the master is actually received by a replica, so the possibility of data loss still exists.

The feature works as follows:

  • Replicas PING the master once per second, reporting how much of the replication stream they have processed.
  • The master records the last time each replica sent it a PING.
  • The user configures the maximum allowed lag in seconds, min-slaves-max-lag, and the minimum number of replicas required for writes, min-slaves-to-write.

If there are at least min-slaves-to-write replicas, and each of them has a lag of less than min-slaves-max-lag seconds, the master executes the write requested by the client.

You can think of this feature as a relaxed version of the C in the CAP theorem: the durability of a write is not guaranteed, but the window for data loss is strictly bounded by the configured number of seconds.

On the other hand, if the conditions specified by min-slaves-to-write and min-slaves-max-lag are not met, the write is not executed and the master returns an error to the client that requested it.

The two relevant options are:

  • min-slaves-to-write <number of slaves>
  • min-slaves-max-lag <number of seconds>
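The write gate described above can be sketched as follows (an illustrative model, not Redis source code):

```python
# Sketch of the min-slaves write gate: the master accepts a write only if
# enough replicas reported recently (lag below min-slaves-max-lag).
def write_allowed(replica_lags: list, min_to_write: int, max_lag: int) -> bool:
    good = sum(1 for lag in replica_lags if lag < max_lag)
    return good >= min_to_write

# With min-slaves-to-write 3 and min-slaves-max-lag 10:
print(write_allowed([1, 2, 3], 3, 10))    # True: three healthy replicas
print(write_allowed([1, 2, 15], 3, 10))   # False: one replica lags too far
```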


4.8 Security

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# requirepass foobared

# Command renaming.
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
# Example:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# It is also possible to completely kill a command by renaming it into
# an empty string:
# rename-command CONFIG ""
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.




By default the requirepass parameter is empty, which means you can connect to the redis service without any password authentication.


requirepass foobar

It can also be changed with a command:

> CONFIG SET requirepass "foobar"
OK
> CONFIG GET requirepass
1) "requirepass"
2) "foobar"

Once a password is set, clients must authenticate before the redis service will execute their commands.

The basic syntax of the AUTH command is as follows:

> AUTH password

For example:

> AUTH "foobar"




rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52


rename-command CONFIG ""


4.9 Clients

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.

# maxclients 10000

Since Redis 2.6, the maximum number of client connections can be changed via the maxclients property in redis.conf, or at runtime with config set maxclients in redis-cli. Based on connection load, this number should be set to roughly 110% to 150% of the expected peak number of connections; once the limit is exceeded, Redis rejects and immediately closes new connections. Capping the connection count to contain unexpected growth is important. Furthermore, a failed connection attempt returns an error message, which lets clients know that Redis currently has an unexpected number of connections so they can react accordingly.
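The interaction between maxclients and the process file-descriptor limit described in the quoted comment can be sketched as follows (illustrative; the reserved count of 32 comes from that comment):

```python
# Sketch: Redis reserves 32 file descriptors for internal use, so if the
# process file limit cannot accommodate the configured maxclients, the
# effective limit is lowered to (file limit - 32).
def effective_maxclients(configured: int, process_fd_limit: int) -> int:
    reserved = 32
    return min(configured, process_fd_limit - reserved)

print(effective_maxclients(10000, 65536))  # 10000 (limit is high enough)
print(effective_maxclients(10000, 4096))   # 4064  (capped by the fd limit)
```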


4.10 Memory Management

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
# LRU means Least Recently Used
# LFU means Least Frequently Used
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.

# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
# The default is:
# maxmemory-policy noeviction

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).

# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
# replica-ignore-maxmemory yes




maxmemory <bytes>
maxmemory 100mb


When the maxmemory limit is reached, Redis will:

  • try to evict keys according to the configured eviction policy, to free up space;
  • if no keys can be evicted, or no eviction policy is configured, return an error for all write requests, while read requests are still served normally.

Note:

  • With Redis master-replica replication, syncing data from the master to the replicas consumes part of the memory. If maxmemory is set too close to the host's available memory, synchronization can run out of memory, so do not set maxmemory too close to the available memory; leave some headroom for master-replica synchronization.



  • volatile-lru: evict using the LRU algorithm (least recently used keys first), only among keys with an expire set
  • allkeys-lru: evict using the LRU algorithm, among all keys
  • volatile-random: evict random keys, only among keys with an expire set
  • allkeys-random: evict random keys, among all keys
  • volatile-ttl: evict the keys with the shortest remaining time to live




maxmemory-policy volatile-lru  # the default is noeviction, i.e. no keys are evicted
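The approximated LRU mentioned in the quoted comments (tuned via maxmemory-samples) can be sketched as a toy model: sample a few random keys and evict the one with the oldest access time (illustrative only, not Redis's actual eviction pool):

```python
import random
import time

# Toy model of approximate LRU eviction: sample maxmemory-samples random
# keys and evict the one with the oldest last-access timestamp.
def evict_one(last_access: dict, samples: int = 5) -> str:
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])
    del last_access[victim]
    return victim

now = time.time()
keys = {"a": now - 300, "b": now - 10, "c": now - 600}
victim = evict_one(keys, samples=3)
print(victim)  # "c": the least recently used of the sampled keys
```

With a small sample size the result only approximates true LRU, which is exactly the speed/accuracy trade-off the maxmemory-samples directive controls.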

4.11 Lazy Freeing

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no

Redis 4.0 added the lazy free feature (also translated as lazy deletion or deferred release). When deleting a key, Redis can release the key's memory asynchronously, handing the free operation to a separate bio (Background I/O) thread. This reduces the blocking of the Redis main thread caused by deleting big keys, effectively avoiding the performance and availability problems they cause.



For example, synchronously deleting a Sorted Set of roughly 1,000,000 elements can take on the order of 1000 ms.

lazy free usage falls into two categories:

The first is keys actively deleted by the user; the second is passive deletion: removal of expired keys, and eviction of keys under maxmemory.

Active key deletion with lazy free


The UNLINK command provides the same key-deletion functionality as DEL, implemented with lazy free.
Example: deleting a big key mylist containing 2,000,000 elements with UNLINK takes only 0.03 milliseconds:

redis> LLEN mylist
(integer) 2000000
redis> UNLINK mylist
(integer) 1
redis> SLOWLOG get
1) 1) (integer) 1
   2) (integer) 1505465188
   3) (integer) 30
   4) 1) "UNLINK"
      2) "mylist"
   5) ""
   6) ""




redis> flushall  // synchronous flush: 1.8 million keys take about 1020 ms
redis> DBSIZE
(integer) 1812637
redis> flushall async  // asynchronous flush: 1.8 million keys take about 9 ms
redis> SLOWLOG get
 1) 1) (integer) 2996109
    2) (integer) 1505465989
    3) (integer) 9274       // the command itself took 9.2 ms
    4) 1) "flushall" 
       2) "async"
    5) ""
    6) ""

Passive key deletion with lazy free

lazy free applies to passive deletion in four scenarios, each controlled by a configuration parameter; all four default to off.

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no

lazyfree-lazy-eviction: when Redis memory usage reaches maxmemory and an eviction policy is configured, this decides whether passively evicted keys are freed with the lazy free mechanism.
Note that with lazy free enabled in this scenario, the memory of evicted keys may not be released promptly, so Redis can temporarily exceed the maxmemory limit. Test against your workload before enabling it.


lazyfree-lazy-expire: for keys with a TTL, this decides whether the lazy free mechanism is used when Redis removes them after they expire.


lazyfree-lazy-server-del: some commands perform an implicit DEL of an existing key. For example, when the target key of RENAME already exists, Redis deletes it first; if that target is a big key, this reintroduces the blocking-deletion problem. This parameter addresses such cases, and enabling it is recommended.



Monitoring lazy free

lazy free exposes only one metric, lazyfree_pending_objects: the number of keys that have been lazy-freed but whose memory is still waiting to be reclaimed. It does not reflect the element count of a single big key or the amount of memory pending reclamation.
The value is therefore only a rough indicator: it can be used to monitor the efficiency of lazy free and the number of backlogged keys; for example, a small backlog appears during flushall async.


4.12 AOF (Append Only File)

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# Quote 2: explanation of fsync()
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
# Redis supports three different modes:
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Quote 4: configuration for automatic rewrite of the append only file
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
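The auto-rewrite trigger described above can be sketched as follows (an illustrative model of the two directives, not Redis's actual code):

```python
# Sketch of the automatic BGREWRITEAOF trigger: rewrite when the AOF has
# grown by auto-aof-rewrite-percentage over its size after the last
# rewrite, and is at least auto-aof-rewrite-min-size.
def should_rewrite(current: int, base: int,
                   percentage: int = 100, min_size: int = 64 * 1024**2) -> bool:
    if percentage == 0 or current < min_size:
        return False  # feature disabled, or the file is still too small
    growth = (current - base) * 100 // base
    return growth >= percentage

MB = 1024**2
print(should_rewrite(130 * MB, 64 * MB))  # True  (grew by ~103%)
print(should_rewrite(100 * MB, 64 * MB))  # False (grew by only ~56%)
print(should_rewrite(32 * MB, 8 * MB))    # False (below the 64mb minimum)
```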

# Quote 5: what to do when the AOF file is found truncated during Redis startup
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#   [RDB file][AOF tail]
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes

Redis offers two persistence modes: RDB and AOF. RDB is the default configuration; AOF must be enabled manually and is off by default.


appendonly yes  # enable AOF mode (Quote 1)
appendfilename "appendonly.aof" # name of the AOF data file (Quote 1)

# appendfsync always
appendfsync everysec    # fsync mode (Quote 2)
# appendfsync no

no-appendfsync-on-rewrite no    # Quote 3

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb  # Quote 4

aof-load-truncated yes  # Quote 5











no-appendfsync-on-rewrite means that while another child process is saving, Redis's durability is equivalent to "appendfsync no". In practice, in the worst case this can lose up to 30 seconds of log data (with default Linux settings).



Automatically rewrite the append only file: a rewrite is triggered once the AOF has grown by auto-aof-rewrite-percentage over its size after the last rewrite, and is at least auto-aof-rewrite-min-size.
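The auto-rewrite trigger described above can be sketched as a simple check; function and parameter names here are illustrative, not Redis internals:

```python
def should_rewrite_aof(current_size: int, size_after_last_rewrite: int,
                       growth_pct: int = 100,
                       min_size: int = 64 * 1024 * 1024) -> bool:
    """Sketch of the auto-aof-rewrite trigger: rewrite once the file is at
    least min_size AND has grown by growth_pct percent since the last rewrite."""
    if current_size < min_size:
        return False
    growth = (current_size - size_after_last_rewrite) * 100 // size_after_last_rewrite
    return growth >= growth_pct

# 32 MB file: below auto-aof-rewrite-min-size, never rewritten automatically
print(should_rewrite_aof(32 * 1024**2, 20 * 1024**2))   # False
# 130 MB file that was 64 MB after the last rewrite: >100% growth -> rewrite
print(should_rewrite_aof(130 * 1024**2, 64 * 1024**2))  # True
```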










# Max execution time of a Lua script in milliseconds.
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000




Lua scripting was the highlight of Redis 2.6. By embedding a Lua environment, Redis fixed its long-standing inability to handle CAS (check-and-set) style operations efficiently, and by combining multiple commands it became easy to implement patterns that were previously hard or inefficient to implement.


Lua is a lightweight scripting language written in standard C and released as open source. It is designed to be embedded in applications, giving them flexible extension and customization capabilities.


  • Lightweight: written in standard C and released as open source, it compiles to just over 100 KB and is easy to embed in other programs.
  • Extensible: Lua provides very easy-to-use extension interfaces and mechanisms; the host language (usually C or C++) supplies functionality that Lua can use as if it were built in.
  • Other features:
    • supports procedure-oriented and functional programming;
    • automatic memory management; provides a single general-purpose structure, the table, which can implement arrays, hash tables, sets, and objects;
    • built-in pattern matching; closures; functions as first-class values; multithreading support (coroutines, not OS-level threads);
    • closures and tables make it easy to support the key mechanisms of object-oriented programming, such as data abstraction, virtual functions, inheritance, and overriding.


When the Redis server is initialized, the Lua environment is initialized along with it.

To make the Lua environment fit the needs of Redis scripting, Redis applies a series of modifications to it, including adding libraries, replacing the random functions, protecting global variables, and so on.

The full initialization of the Lua environment proceeds as follows:

  1. Call the lua_open function to create a new Lua environment.

  2. Load the required Lua function libraries.

  3. Disable functions that could create security problems in the Lua environment, such as loadfile.

  4. Create a Redis dictionary that stores Lua scripts and is used when replicating them. The dictionary keys are SHA1 checksums and the values are the Lua scripts themselves.

  5. Create a global table in the Lua environment containing the functions for operating on Redis, including:

    • the redis.call and redis.pcall functions for executing Redis commands;

    • the redis.log function for emitting logs, together with its log-level constants:

      • redis.LOG_DEBUG
      • redis.LOG_VERBOSE
      • redis.LOG_NOTICE
      • redis.LOG_WARNING
    • the redis.sha1hex function for computing SHA1 checksums;

    • the redis.error_reply and redis.status_reply functions for returning error and status replies.

  6. Replace the original math.random and math.randomseed functions of the math table with Redis's own random number generators. The new functions have this property: unless math.randomseed is called explicitly, math.random produces the same pseudo-random sequence on every script execution.

  7. Create a helper function that sorts Redis multi bulk replies.

  8. Protect the global variables of the Lua environment so that submitted scripts cannot modify them.

  9. Because Redis commands must be executed through a client, the server state keeps a fake client with no network connection, dedicated to executing the Redis commands contained in Lua scripts: when a Lua script needs to run a Redis command, it sends the request through the fake client; the server executes the command, returns the result to the fake client, and the fake client hands the result back to the Lua script.

  10. Record the pointer to the Lua environment in the Redis server's global state, ready for Redis to use.

That is the whole process by which Redis initializes its Lua environment; once these steps complete, Redis can use the Lua environment to process scripts.

Strictly speaking, steps 1 through 8 initialize the Lua environment, while steps 9 and 10 attach it to the server; the two kinds of operation are shown together so the whole initialization can be followed in order.

Also, step 6 serves to keep scripts free of side effects, while step 7 removes the non-determinism from some Redis commands.





-- test.lua: return every element of the list stored at KEYS[1]
-- (middle lines reconstructed; the original listing kept only the first and last lines)
local key = KEYS[1]
local list = redis.call("lrange", key, 0, -1)
return list


rpush person mary jack peter    # rpush keeps insertion order, matching the output below


$ redis-cli --eval /opt/testdata/test.lua person
1) "mary"
2) "jack"
3) "peter"


# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
# cluster-node-timeout 15000

# A replica of a failing master will avoid to start a failover if its data
# looks too old.
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
# cluster-replica-validity-factor 10
# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
# cluster-replica-no-failover no

# In order to setup your cluster make sure to read the documentation
# available at web site.

Redis Cluster configuration summary

cluster-enabled yes


cluster-config-file nodes-6379.conf


cluster-node-timeout 15000


cluster-slave-validity-factor 10


cluster-migration-barrier 1


cluster-require-full-coverage yes

When the nodes serving some keys are unavailable: if this parameter is "yes" (the default), the whole cluster stops accepting operations; if it is "no", the cluster keeps serving requests for the keys on the reachable nodes.


# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
# Example:
# cluster-announce-ip
# cluster-announce-port 6379
# cluster-announce-bus-port 6380







cluster-announce-port 6379
cluster-announce-bus-port 6380


# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

Redis slow query log:



A Redis command's life cycle has four stages:

  1. Send the command
  2. Queue the command
  3. Execute the command
  4. Return the result




  • How is the threshold configured?
  • Where are slow query records stored?

Redis provides the slowlog-log-slower-than and slowlog-max-len settings to answer these two questions. As the name suggests, slowlog-log-slower-than is the threshold; its unit is microseconds (1 second = 1,000,000 microseconds) and its default is 10000. If you run a "slow" command such as keys * and it takes longer than 10000 microseconds to execute, it is recorded in the slow query log.



Redis supports two ways of changing configuration: editing the configuration file, or changing settings dynamically with the config set command. For example, the following uses config set to raise slowlog-log-slower-than to 20000 microseconds and slowlog-max-len to 1024:

config set slowlog-log-slower-than 20000
config set slowlog-max-len 1024
config rewrite



slowlog get [n]   # the optional argument n limits how many entries are returned

slowlog len       # number of entries currently in the slow log

slowlog reset     # clear the slow log



  • slowlog-max-len: raise it in production. When recording a slow query, Redis truncates overly long commands, so the slow log does not consume much memory; a longer list reduces the chance that slow queries are evicted before you can inspect them. Values of 1000 or more are reasonable in production.
  • slowlog-log-slower-than: by default anything over 10 milliseconds (10000 microseconds) counts as a slow query; tune the value to your Redis traffic. Since Redis serves commands on a single thread, if commands routinely take more than 1 millisecond the instance can sustain fewer than 1000 OPS, so for high-OPS workloads consider setting it to 1 millisecond.
  • The slow log records only command execution time, not queueing or network transfer, so the time a client observes is longer than the logged execution time. Because commands queue for the single execution thread, one slow query can block the commands behind it in a cascade; when clients report request timeouts, check whether there are slow log entries around that moment to see whether a slow query caused the cascading blocking.
  • The slow log is a first-in first-out queue, so when slow queries are frequent, some entries may be lost. To avoid this, periodically run slowlog get and persist the entries to other storage (for example MySQL or Elasticsearch), where they can later be analysed with visualization tools.
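The FIFO behaviour described in the last point can be sketched with a bounded deque, here with slowlog-max-len shrunk to 3 for illustration (entry fields loosely mirror SLOWLOG GET output: id, timestamp, duration in microseconds, command):

```python
from collections import deque

# Minimal sketch of the slow log as a fixed-size FIFO queue.
slowlog = deque(maxlen=3)  # plays the role of slowlog-max-len

for i, cmd in enumerate(["keys *", "lrange biglist 0 -1",
                         "smembers bigset", "hgetall bighash"]):
    # newest entries go to the front, like SLOWLOG GET ordering
    slowlog.appendleft((i, 1537268000 + i, 15000, cmd))

# With maxlen=3, logging the 4th entry silently dropped the oldest one
print([entry[3] for entry in slowlog])
```

This is why persisting the log periodically matters: once the queue is full, every new slow query evicts the oldest record.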


# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

Redis 2.8.13 introduced latency monitoring, aimed mainly at latency spikes. It is event based: the command event tracks command execution latency, the fast-command event tracks the latency of O(1) and O(log N) commands, and the fork event tracks the latency of Redis's fork(2) system call.


Enable the latency monitor by setting a threshold:
> config set latency-monitor-threshold 100

Read the latency monitor configuration:
> config get latency-monitor-threshold
1) "latency-monitor-threshold"
2) "100"

Get the latest latency events:
> latency latest   # returns event name, timestamp, latest latency (ms), all-time max latency (ms)
1) 1) "command"
   2) (integer) 1537268070
   3) (integer) 250
   4) (integer) 1010

View the latency history of one event:
> latency history command
1) 1) (integer) 1537268064
   2) (integer) 1010
2) 1) (integer) 1537268070
   2) (integer) 250

View an event's latency graph:
> latency graph command
command - high 500 ms, low 100 ms (all time high 500 ms)


Reset/clear the collected event data:
> latency reset command
(integer) 1

Ask for diagnostic advice:
> latency doctor
Dave, I have observed latency spikes in this Redis instance. You don't mind talking about it, do you Dave?
1. command: 6 latency spikes (average 257ms, mean deviation 142ms, period 3.83 sec). Worst all time event 500ms.
I have a few advices for you:
- Check your Slow Log to understand what are the commands you are running which are too slow to execute. Please check for more information.
- Deleting, expiring or evicting (because of maxmemory policy) large objects is a blocking operation. If you have very large objects that are often deleted, expired, or evicted, try to fragment those objects into multiple smaller objects.


# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#  The "notify-keyspace-events" takes as argument a string that is composed
#  of zero or multiple characters. The empty string means that notifications
#  are disabled.
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#  notify-keyspace-events Elg
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#  notify-keyspace-events Ex
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

Keyspace notifications allow clients to subscribe to channels or patterns in order to receive events describing how the Redis dataset was modified.

Because Redis Pub/Sub is currently fire and forget, keyspace notifications are not suitable if your program needs reliable notification of events: if a client subscribed to events disconnects, it loses all the events delivered during the disconnection. Message delivery is not guaranteed.


For every operation that modifies the dataset, keyspace notifications publish two kinds of event message: keyspace and keyevent. Channels prefixed with __keyspace@<db>__ are called key-space notifications, while channels prefixed with __keyevent@<db>__ are called key-event notifications.

Events are published using the format __keyspace@<db>__:<key pattern> or __keyevent@<db>__:<event type>.

<db> is the database index, the key pattern is the key (or glob pattern) to watch, and the event type is the kind of operation. So to subscribe to events on a particular key, subscribe to the keyspace channel.

For example, running DEL on key mykey in database 0 publishes two messages, equivalent to these two PUBLISH commands:

PUBLISH __keyspace@0__:mykey del
PUBLISH __keyevent@0__:del mykey

Subscribing to the first channel, __keyspace@0__:mykey, receives every event that modifies key mykey in database 0, while subscribing to the second channel, __keyevent@0__:del, receives every key on which del is executed in database 0.


Keyspace notifications are normally disabled because the feature has some overhead, so make sure you actually need it before enabling it in the configuration file. The relevant flags are:

K   Keyspace notifications, prefixed with __keyspace@<db>__ (per key)
E   Keyevent notifications, prefixed with __keyevent@<db>__ (per event)
g   Generic, type-independent commands such as DEL, EXPIRE, RENAME
e   Evicted events, sent whenever a key is removed under the maxmemory policy
A   Alias for the flags g$lshzxe, i.e. "all events"

The argument must include at least one of K or E; otherwise, whatever the other flags are, no notifications are delivered.
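The flag logic and the two channel formats can be sketched as follows (a minimal model; only the K and E flags are handled, function name is illustrative):

```python
def notification_channels(db: int, event: str, key: str, flags: str):
    """Return the (channel, message) pairs a command event would be
    published to, given the notify-keyspace-events flag string."""
    messages = []
    if "K" in flags:  # key-space: channel names the key, message is the event
        messages.append((f"__keyspace@{db}__:{key}", event))
    if "E" in flags:  # key-event: channel names the event, message is the key
        messages.append((f"__keyevent@{db}__:{event}", key))
    return messages

print(notification_channels(0, "del", "mykey", "KEA"))
# With neither K nor E, nothing is delivered regardless of the other flags:
print(notification_channels(0, "del", "mykey", "g$lshzxe"))  # []
```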

Redis deletes expired keys in two ways:

  • When a key is accessed, it is checked, and deleted if it has already expired.
  • A background task incrementally scans for and deletes expired keys, handling keys that have expired but are never accessed.

An expired notification is generated when either of these mechanisms finds the key and removes it from the database.
Redis does not guarantee that a key is deleted the moment its time to live (TTL) reaches 0: if nothing accesses the key, or if very many keys carry TTLs, there can be a noticeable delay between the TTL reaching 0 and the key actually being removed.
Therefore, Redis emits the expired notification when the expired key is deleted, not when its TTL reaches 0.
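The lazy half of this scheme can be sketched with a tiny TTL dictionary (class and method names are illustrative, not Redis internals):

```python
import time

class TTLDict:
    """Minimal sketch of lazy expiration: a key past its TTL is only removed
    (and would only fire an 'expired' notification) when something reads it."""
    def __init__(self):
        self._data = {}      # key -> value
        self._expire = {}    # key -> absolute expiry time

    def set(self, key, value, ttl=None):
        self._data[key] = value
        if ttl is not None:
            self._expire[key] = time.monotonic() + ttl

    def get(self, key):
        exp = self._expire.get(key)
        if exp is not None and time.monotonic() >= exp:
            # Expired: deleted on access -- the point where Redis would
            # publish the "expired" event, not when the TTL hit zero.
            del self._data[key], self._expire[key]
            return None
        return self._data.get(key)

d = TTLDict()
d.set("session", "abc", ttl=0.01)
time.sleep(0.02)
print(d.get("session"))  # None: removed at access time, not at TTL == 0
```

Real Redis pairs this with the background scan so that keys nobody reads are still reclaimed eventually.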


# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- recommended
# -1: max size: 4 Kb   <-- recommended
# Positive numbers mean storing _exactly_ that number of elements per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list are
# always uncompressed for fast push/pop operations. The options are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or the tail"
#    The list looks like: [head]->node->node->...->node->[tail]
#    [head] and [tail] are never compressed; the inner nodes are.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    This means: don't compress head, head->next, tail->prev, or tail,
#    but compress all the nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    Same idea as 2, and so on.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed of
# strings that happen to be radix-10 integers in the range of 64 bit signed
# integers.
# The following setting limits the size of the set in order to use this
# special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length
# and elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
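The hash/zset threshold rule above boils down to a simple two-part test; this sketch (function name illustrative) shows the decision for a hash:

```python
def hash_encoding(fields: dict, max_entries: int = 512, max_value: int = 64) -> str:
    """Sketch of the hash-max-ziplist-* rule: a hash stays in the compact
    ziplist encoding only while BOTH its entry count and every field/value
    length stay below the thresholds; otherwise a real hash table is used."""
    small = (len(fields) <= max_entries and
             all(len(str(k)) <= max_value and len(str(v)) <= max_value
                 for k, v in fields.items()))
    return "ziplist" if small else "hashtable"

print(hash_encoding({"name": "mary", "age": "30"}))    # ziplist
print(hash_encoding({"bio": "x" * 100}))               # hashtable (value > 64)
print(hash_encoding({str(i): i for i in range(600)}))  # hashtable (entries > 512)
```

In Redis the conversion is one-way: once a hash outgrows the thresholds it does not go back to the compact encoding.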

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 byte header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
# The suggested value is ~3000, to get the benefit of the space-efficient
# encoding without slowing down PFADD too much, which is O(N) with the sparse
# encoding. The value can be raised to ~10000 when CPU is not a concern but
# space is, and the data set is composed of many HyperLogLogs with
# cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this
# configuration it is possible to configure how big a single node can be in
# bytes, and the maximum number of items it may contain before switching to a
# new node when appending new stream entries. If any of the following
# settings is set to zero, that limit is ignored, so for instance it is
# possible to set just a max entries limit by setting max-bytes to 0 and
# max-entries to the desired value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
# If unsure:
# Use "activerehashing no" if you have hard latency requirements and it is
# not acceptable in your environment that Redis can reply from time to time
# to queries with a 2 millisecond delay.
# Use "activerehashing yes" if you don't have such hard requirements and
# want to free memory as soon as possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of
# clients that are not reading data from the server fast enough (a common
# reason is that a Pub/Sub client can't consume messages as fast as the
# publisher produces them).
# The limit can be set differently for the three classes of clients:
# normal  -> normal clients, including MONITOR clients
# replica -> replica clients
# pubsub  -> clients subscribed to at least one pubsub channel or pattern
# The syntax of every client-output-buffer-limit directive is the following:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# A client is disconnected immediately once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
# By default normal clients are not limited, because they don't receive data
# without asking (in a push way) but only after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
# Either the hard or the soft limit can be disabled by setting it to zero.
# In short: these set the output buffer limits for normal clients, for
# replication, and for Pub/Sub subscribers.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
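The hard/soft limit rule can be sketched over per-second buffer-size samples (a simplified model; names are illustrative and samples are assumed one second apart):

```python
def should_disconnect(samples, hard=32 * 1024**2, soft=16 * 1024**2,
                      soft_seconds=10):
    """Sketch of the client-output-buffer-limit rule over (second, size)
    samples: disconnect immediately at the hard limit, or once the buffer
    stays over the soft limit for soft_seconds consecutive seconds."""
    over_since = None
    for t, size in samples:
        if size >= hard:
            return True                 # hard limit: immediate disconnect
        if size >= soft:
            if over_since is None:
                over_since = t          # start of the continuous violation
            if t - over_since >= soft_seconds:
                return True             # soft limit held too long
        else:
            over_since = None           # dipping below soft resets the clock
    return False

# 17 MB held for 12 consecutive seconds -> the soft limit disconnects the client
print(should_disconnect([(t, 17 * 1024**2) for t in range(13)]))  # True
```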

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very
# special needs, such as huge multi/exec requests or alike.
# Client query buffer limit:
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing
# single strings, are normally limited to 512 mb. However you can change this
# limit here.
# Maximum bulk (single string) length in the protocol:
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be handled
# with more precision.
# The range is between 1 and 500, however a value over 100 is usually not a
# good idea. Most users should use the default of 10 and raise it up to 100
# only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value proportional to the number of
# connected clients. This helps, for instance, to avoid processing too many
# clients per background task invocation and so avoid latency spikes.
# Since the default HZ value is conservatively set to 10, Redis offers the
# ability to use an adaptive HZ value which is temporarily raised when there
# are many connected clients.
# When dynamic HZ is enabled, the configured HZ is used as a baseline, but
# multiples of the configured value are actually used as needed once more
# clients connect. This way an idle instance uses very little CPU time while
# a busy instance is more responsive.
dynamic-hz yes

# When a child process rewrites the AOF file, if the following option is
# enabled the file is fsync-ed every 32 MB of data generated. This is useful
# to commit the file to disk more incrementally and avoid big latency spikes.
aof-rewrite-incremental-fsync yes

# When Redis saves an RDB file, if the following option is enabled the file
# is fsync-ed every 32 MB of data generated. This is useful to commit the
# file to disk more incrementally and avoid big latency spikes.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see the maxmemory setting) can be tuned. However it is
# a good idea to start with the default settings and only change them after
# investigating how to improve performance and how key LFU values change over
# time, which can be inspected via the OBJECT FREQ command.
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what these two parameters mean before changing them.
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed the counter is incremented this
# way:
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses and different
# logarithmic factors:
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
# NOTE: the above table was obtained by running the following commands:
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
# NOTE 2: the counter's initial value is 5, to give new objects a chance to
# accumulate hits.
# The counter decay time is the time, in minutes, that must elapse for the
# key counter to be divided by two (or decremented if it has a value less
# than or equal to 10).
# The default value for lfu-decay-time is 1. A special value of 0 means the
# counter decays every time it happens to be scanned.
# lfu-log-factor 10
# lfu-decay-time 1
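The probabilistic increment in steps 1-3 can be simulated directly; this sketch (constant and function names illustrative, decay omitted) reproduces the logarithmic shape of the table:

```python
import random

LFU_INIT_VAL = 5  # new keys start at 5 so they can accumulate hits

def lfu_incr(counter: int, lfu_log_factor: int = 10) -> int:
    """Probabilistic LFU counter bump: the higher the counter already is,
    the less likely another hit increments it."""
    if counter >= 255:
        return 255
    base = max(counter - LFU_INIT_VAL, 0)
    p = 1.0 / (base * lfu_log_factor + 1)
    if random.random() < p:
        counter += 1
    return counter

random.seed(42)
c = LFU_INIT_VAL
for _ in range(100_000):        # 100K hits with the default factor of 10
    c = lfu_incr(c)
print(c)  # should land in the ballpark of the ~142 shown for 100K hits above
```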



# WARNING: this feature is experimental. However it was stress tested even in
# production and manually tested by multiple engineers for some time.
# What is active defragmentation?
# -------------------------------
# Active (online) defragmentation allows a Redis server to compact the spaces
# left between small allocations and deallocations of data in memory, thus
# allowing memory to be reclaimed.
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a
# server restart is needed to lower fragmentation, or at least flushing all
# the data and recreating it. However thanks to this feature, implemented by
# Oran Agra for Redis 4.0, this process can happen at runtime in a "hot" way,
# while the server is running.
# Basically, when fragmentation exceeds a certain level (see the options
# below), Redis creates new copies of the values in contiguous memory regions
# by exploiting specific Jemalloc features (to understand whether an
# allocation is causing fragmentation and to allocate it in a better place),
# and at the same time releases the old copies of the data. Repeated
# incrementally for all keys, this causes fragmentation to drop back to
# normal values.
# Important things to understand:
# 1. This feature is disabled by default, and only works if you compiled
#    Redis with the copy of Jemalloc shipped with the Redis source code.
#    This is the default for Linux builds.
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
# The configuration parameters fine-tune the behavior of the defragmentation
# process. If you are not sure what they mean, it is best to leave the
# defaults untouched.
# Enable active defragmentation:
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag:
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag:
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation, at which maximum effort is used:
# active-defrag-threshold-upper 100

# Minimal defrag effort, as a CPU percentage:
# active-defrag-cycle-min 5

# Maximal defrag effort, as a CPU percentage:
# active-defrag-cycle-max 75

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan:
# active-defrag-max-scan-fields 1000

5. Common redis.conf settings explained



  1. Redis does not run as a daemon by default; set this to yes to enable daemon mode

    daemonize no
  2. When Redis runs as a daemon, it writes its pid to /var/run/redis.pid by default; you can specify another file with pidfile:

    pidfile /var/run/
  3. Specify the port Redis listens on; the default is 6379. The author explained in one of his blog posts why 6379 was chosen: 6379 is the number MERZ spells on a phone keypad, and MERZ comes from the name of the Italian singer Alessia Merz

    port 6379
  4. The host address to bind to:

  5. Close a connection after a client has been idle this many seconds; 0 disables the feature:

    timeout 300
  6. In seconds; 0 disables keepalive probing. A value of 60 is recommended

    tcp-keepalive 60
  7. Specify the log level; Redis supports four levels: debug, verbose, notice, warning. The default here is verbose

    loglevel verbose
  8. Log destination; the default is standard output. Note that if Redis runs as a daemon while logging is configured as standard output, the log is sent to /dev/null

    logfile stdout
  9. Set the number of databases; the default database is 0, and a connection can select a database by id with the select command

    databases 16
  10. Specify how many update operations within how much time trigger a sync of the data to the data file; multiple conditions can be combined

   save <seconds> <changes>


   save 900 1
   save 300 10
   save 60 10000


  11. tcp-backlog sets the TCP backlog. The backlog is a connection queue: backlog total = queue of connections that have not yet completed the three-way handshake + queue of connections that have.

    tcp-backlog 511
  12. syslog-enabled controls whether logs are also written to syslog; the default is no

    syslog-enabled no
  13. syslog-ident specifies the identity used in syslog entries; default: redis

    syslog-ident "redis"
  14. syslog-facility specifies the syslog facility; the value must be USER or LOCAL0-LOCAL7

  15. Specify whether to compress data when dumping to the local database; the default is yes and Redis uses LZF compression. Disabling it saves CPU time but makes the database file much larger

    rdbcompression yes
  16. Specify the local database file name; the default is dump.rdb

    dbfilename dump.rdb
  17. Specify the local database directory

    dir ./
  18. When this instance is a slave, set the IP address and port of the master service; on startup, Redis automatically synchronizes data from the master

    slaveof <masterip> <masterport>
  19. When the master is password protected, the password the slave uses to connect to the master

    masterauth <master-password>
  20. Set the Redis connection password. If configured, clients must authenticate with the AUTH command when connecting to Redis. Disabled by default

    requirepass foobared
  21. Set the maximum number of simultaneous client connections. By default there is no limit beyond the maximum number of file descriptors the Redis process can open; setting maxclients to 0 means no limit. When the limit is reached, Redis closes new connections and returns the error max number of clients reached to the client

    maxclients 128
  22. Specify the maximum memory limit. Redis loads data into memory at startup; on reaching the limit it first tries to evict expired or soon-to-expire keys, and if that still is not enough, write operations fail while reads keep working. (Under Redis's old VM mechanism, keys were kept in memory while values could be swapped out)

    maxmemory <bytes>
  23. Specify whether to log every update operation. By default Redis writes data to disk asynchronously; if this is off, data may be lost for a period after a power failure, because Redis syncs the data file according to the save conditions above, so some data exists only in memory for a while. The default is no

    appendonly no
  24. Specify the update-log (AOF) file name; the default is appendonly.aof

    appendfilename appendonly.aof
  25. Specify the update-log condition; there are 3 possible values:

  • no: let the operating system sync the data cache to disk (fast)
  • always: call fsync() after every update operation to write the data to disk (slow, safe)
  • everysec: sync once per second (a compromise, and the default)

    appendfsync everysec
  26. Specify whether to enable the virtual memory (VM) mechanism; the default is no. Briefly, the VM mechanism stores data in pages, and Redis swaps cold pages (rarely accessed data) out to disk, while frequently accessed pages are swapped from disk back into memory

    vm-enabled no
  27. Virtual memory file path; the default is /tmp/redis.swap. It cannot be shared between multiple Redis instances

  28. Keep all data larger than vm-max-memory in virtual memory. No matter how small vm-max-memory is set, all index data (in Redis, the keys) stays in memory; in other words, with vm-max-memory set to 0, all values are effectively stored on disk. The default is 0

    vm-max-memory 0
  29. The Redis swap file is divided into many pages; one object may be saved across multiple pages, but one page cannot be shared by multiple objects. vm-page-size should be set according to the size of the stored data: for many small objects, a page size of 32 or 64 bytes is recommended; for very large objects, a larger page can be used; when in doubt, use the default

    vm-page-size 32
  30. Set the number of pages in the swap file. Since the page table (a bitmap indicating which pages are free or in use) is kept in memory, every 8 pages on disk consume 1 byte of memory.

    vm-pages 134217728
  31. Set the number of threads accessing the swap file; it is best not to exceed the number of machine cores. If set to 0, all operations on the swap file are serial, which may cause fairly long delays. The default is 4

    vm-max-threads 4
  32. Set whether smaller packets are merged into one packet when replying to clients; enabled by default

    glueoutputbuf yes
  33. Use a special hash encoding when the number of entries, or the largest element, stays below a certain threshold

    hash-max-zipmap-entries 64
    hash-max-zipmap-value 512
  34. Specify whether to enable active rehashing; enabled by default (covered in detail later with Redis's hashing internals)

    activerehashing yes
  35. Include other configuration files, so that multiple Redis instances on one host can share a common configuration file while each instance keeps its own specific settings

    include /path/to/local.conf







save: seconds and number of write operations

# Save the DB on disk:
#   save <seconds> <changes>
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
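The save-point rule above can be sketched as a simple check: a background save is due when any configured pair of (seconds, changes) is simultaneously satisfied (names are illustrative):

```python
def rdb_save_due(seconds_elapsed: int, changes: int,
                 save_points=((900, 1), (300, 10), (60, 10000))) -> bool:
    """Sketch of the save <seconds> <changes> rule: a save is due when ANY
    configured point has both its time window elapsed since the last save
    and enough write operations accumulated."""
    return any(seconds_elapsed >= secs and changes >= chg
               for secs, chg in save_points)

print(rdb_save_due(70, 20000))  # True: 60 sec passed with >= 10000 changes
print(rdb_save_due(70, 5))      # False: no point matches 5 changes in 70 sec
print(rdb_save_due(901, 1))     # True: 15 min passed with at least 1 change
```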




A Redis transaction executes multiple commands in one go, with the following important guarantees:

  • The batched commands are queued in a cache before the EXEC command is sent.
  • On receiving EXEC, the transaction runs; even if a command in it fails, the remaining commands still execute.
  • While the transaction executes, command requests submitted by other clients are not inserted into its command sequence.


  • Begin the transaction (MULTI).
  • Queue the commands.
  • Execute the transaction (EXEC).

Redis transactions use optimistic locking, mainly to improve performance and reduce client waiting. They are built from a handful of commands: WATCH, UNWATCH, MULTI, EXEC, DISCARD.

The list below covers the commands related to Redis transactions:

  1. DISCARD: abort the transaction, discarding all commands queued in the transaction block.
  2. EXEC: execute all commands in the transaction block.
  3. MULTI: mark the beginning of a transaction block.
  4. UNWATCH: cancel the WATCH on all keys.
  5. WATCH key [key ...]: watch one or more keys; if any of them is modified by another command before the transaction executes, the transaction is aborted.

7.1 How a Redis transaction proceeds:


The execution of the MULTI command marks the start of a transaction in Redis:

All this command does is turn on the client's REDIS_MULTI flag, switching the client from the non-transactional state to the transactional state.


While a client is in the non-transactional state, every command it sends to the server is executed immediately.

Once the client enters the transactional state, however, the server does not execute incoming commands immediately; instead it appends them all to a transaction queue and replies QUEUED, indicating that the command has been enqueued:

redis> MULTI
redis> SET sayhi "hello redis"
redis> GET sayhi



Each entry in the transaction queue records three things:

  1. the command to execute (cmd)
  2. the command's arguments (argv)
  3. the number of arguments (argc)


If the client is in the transactional state, then when the EXEC command executes, the server takes the client's transaction queue and runs its commands in first-in first-out (FIFO) order: the command queued first runs first, and the command queued last runs last.

Once every command in the transaction queue has executed, EXEC returns the reply queue as its own result to the client, and the client moves from the transactional state back to the non-transactional state. At this point, the transaction is complete.

redis> multi
redis> set k1 v1
redis> set k2 v2
redis> set k3 v3
redis> get k2
redis> exec
1) OK
2) OK
3) OK
4) "v2"
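The queue-then-FIFO-execute flow above can be sketched with a tiny model (class and reply strings are illustrative; only set/get are modeled):

```python
class MiniTransaction:
    """Sketch of the MULTI/EXEC flow: commands are queued as (cmd, argv)
    while the client is in the transactional state, then run FIFO on EXEC."""
    def __init__(self, store):
        self.store = store
        self.queue = []          # the per-client transaction queue
        self.in_multi = False

    def multi(self):
        self.in_multi = True     # flip the "transactional state" flag
        return "OK"

    def send(self, cmd, *argv):
        if not self.in_multi:
            raise RuntimeError("not in MULTI")
        self.queue.append((cmd, argv))  # enqueued, not executed yet
        return "QUEUED"

    def exec(self):
        replies = []
        for cmd, argv in self.queue:    # FIFO: first queued, first executed
            if cmd == "set":
                self.store[argv[0]] = argv[1]
                replies.append("OK")
            elif cmd == "get":
                replies.append(self.store.get(argv[0]))
        self.queue.clear()
        self.in_multi = False           # back to the non-transactional state
        return replies

db = {}
tx = MiniTransaction(db)
tx.multi()
print(tx.send("set", "k1", "v1"))   # QUEUED
print(tx.send("get", "k1"))         # QUEUED
print(tx.exec())                    # ['OK', 'v1'] -- replies returned in order
```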


Whether in a transaction or not, Redis commands are executed by the same function, so they share many of the server's general settings, such as the AOF configuration, the RDB configuration, memory limits, and so on.


  1. In the non-transactional state, commands execute one at a time, and consecutive commands need not come from the same client;


  2. In the non-transactional state, the result of a command is returned to the client immediately;

    in a transaction, the results of all commands are collected into a reply queue and returned to the client as the result of the EXEC command.


Redis transactions cannot be nested: when a client already in the transactional state sends MULTI again, the server simply replies with an error and keeps waiting for other commands to queue. The extra MULTI neither fails the whole transaction nor modifies the data already in the transaction queue.


  • Transactions provide a mechanism to batch multiple commands and then execute them in one go, in order.
  • A transaction is not interrupted while it executes; it ends only after all its commands have run.
  • Commands are enqueued in the transaction queue and then executed in first-in first-out (FIFO) order.
  • A transaction using WATCH associates the client and the watched keys in the database's watched_keys dictionary; when a watched key is modified, the REDIS_DIRTY_CAS flag is turned on for every client watching that key.
  • A transaction executes only if the client's REDIS_DIRTY_CAS flag is off; otherwise the transaction fails immediately.
  • Redis transactions guarantee consistency (C) and isolation (I) of ACID, but not atomicity (A) or durability (D).

8. Redis Publish/Subscribe



8.1 Subscribing and publishing to channels

The Redis SUBSCRIBE command lets a client subscribe to any number of channels; whenever a new message is sent to a subscribed channel, it is delivered to every client subscribed to that channel.

Channel redisChannel and the three clients subscribed to it, client1, client2 and client3, form a one-to-many relationship: every message published to the channel reaches all three clients.

8.2 Creating and subscribing to a channel

Subscribe to the channel named redisChannel (channels are created automatically):

redis> SUBSCRIBE redisChannel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "redisChannel"
3) (integer) 1

Open another redis client and publish a message on channel redisChannel; the subscriber then receives it.

redis> PUBLISH redisChannel "welcome to subscribe redisChannel"
(integer) 1


1) "message"
2) "redisChannel"
3) "welcome to subscribe redisChannel"

Common redis publish/subscribe commands:

  1. PSUBSCRIBE pattern [pattern ...] — subscribe to one or more channels matching the given patterns.
  2. PUBSUB subcommand [argument [argument ...]] — inspect the state of the publish/subscribe system.
  3. PUBLISH channel message — send a message to the given channel.
  4. PUNSUBSCRIBE [pattern [pattern ...]] — unsubscribe from all the given pattern subscriptions.
  5. SUBSCRIBE channel [channel ...] — subscribe to the given channel or channels.
  6. UNSUBSCRIBE [channel [channel ...]] — unsubscribe from the given channels.
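PSUBSCRIBE matches channels with glob-style patterns. As a rough approximation (an assumption for illustration; Redis uses its own glob matcher), Python's fnmatchcase behaves the same way for simple ? and * patterns:

```python
from fnmatch import fnmatchcase

# Hypothetical helper: which of our subscribed patterns would
# receive a message published on the given channel?
patterns = ["news.*", "user.?"]

def matching_patterns(channel):
    return [p for p in patterns if fnmatchcase(channel, p)]

print(matching_patterns("news.sport"))  # ['news.*']
print(matching_patterns("user.1"))      # ['user.?']
```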


  • After a replica connects to the master with slaveof <ip> <port>, it copies all of the master's data.
  • Master-slave replication gives read/write splitting: only the master can write, replicas can only read, and a write command on a replica returns an error.
  • If the master goes down, a replica stays a replica; it will not take over unless promoted manually.
  • If the master is repaired and comes back, the replicas simply continue replicating from it; no action is needed.
  • If a replica goes down, then after repair it starts redis back up as a master; it must be re-attached to the master with slaveof <ip> <port> before replication resumes.
  • A slave can in turn be the master of the next slave: a slave can accept connections and sync requests from other slaves, acting as the next link's master in a chain, which effectively relieves write pressure on the real master.
  • Changing masters mid-way clears the existing data and rebuilds a fresh copy from the new master.
  • SLAVEOF NO ONE stops the current server from syncing with any other server and turns it back into a master.
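A toy model of these replication rules (illustrative Python, not Redis internals; the class and method names are my own):

```python
class Node:
    """Toy model: only a master accepts writes, slaveof attaches a
    replica and copies the data over, and slaveof_no_one promotes
    the replica back to master."""

    def __init__(self, name):
        self.name = name
        self.role = "master"
        self.data = {}
        self.replicas = []

    def slaveof(self, master):
        self.role = "slave"
        master.replicas.append(self)
        self.data = dict(master.data)  # initial full copy of master data

    def slaveof_no_one(self):
        self.role = "master"           # stop replicating, become a master

    def set(self, key, value):
        if self.role != "master":
            # replicas are read-only; writes are rejected
            raise RuntimeError("READONLY You can't write against a read only replica.")
        self.data[key] = value
        for r in self.replicas:
            r.data[key] = value        # propagate the write downstream

m, s = Node("master"), Node("slave")
m.set("k1", "v1")
s.slaveof(m)          # replica copies the existing data
m.set("k2", "v2")     # new writes propagate to the replica
print(s.role, s.data)  # slave {'k1': 'v1', 'k2': 'v2'}
```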






  • By sending commands, it has the Redis servers report their running state, covering both the master and the replicas.
  • When a sentinel detects that the master is down, it automatically promotes a slave to master and then, via the publish/subscribe mechanism, notifies the other replicas to update their configuration files and switch masters.


The failover process: suppose the master goes down and sentinel 1 detects it first. The system does not start a failover immediately; sentinel 1 merely considers the master unavailable on its own, a state called subjectively down (sdown). When enough other sentinels also detect that the master is unavailable and their number reaches the configured threshold, the master is considered objectively down (odown). The sentinels then hold a vote, and the winning sentinel initiates the failover. After the switch succeeds, publish/subscribe is used to have each sentinel point the replicas it monitors at the new master. To the client, all of this is transparent.
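The subjective-down versus objective-down distinction described above can be condensed into a tiny decision function (a sketch, with a hypothetical master_state helper):

```python
def master_state(down_votes, quorum):
    """One sentinel's suspicion alone is only a subjective down
    (sdown); once the number of agreeing sentinels reaches the
    quorum, the master is objectively down (odown) and a failover
    vote can start."""
    if down_votes == 0:
        return "ok"
    if down_votes < quorum:
        return "sdown"
    return "odown"

print(master_state(1, 2), master_state(2, 2))  # sdown odown
```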

sentinel monitor <master-name> <ip> <port> <quorum>
Tells this sentinel node to monitor the master at <ip> <port>; <master-name> is an alias for the master, and <quorum> is the number of sentinel nodes that must agree before the master is judged to have failed.

sentinel auth-pass <master-name> <password>
The master's password; not needed if the master has no password set.

sentinel down-after-milliseconds <master-name> <times>
How long a node may fail to answer PING before this sentinel judges it unreachable.

sentinel parallel-syncs <master-name> <nums>
How many slaves may start replicating the new master at the same time after a failover.

sentinel failover-timeout <master-name> <times>
Failover timeout, applied to each stage of the failover:
1) pick a suitable slave;
2) promote the chosen slave to master;
3) tell the remaining slaves to replicate the new master;
4) once the old master recovers, tell it to replicate the new master.

sentinel notification-script <master-name> <script-path>
Script triggered when warning-level sentinel events occur during a failover.

sentinel client-reconfig-script <master-name> <script-path>
Script triggered after a failover finishes, passed parameters describing the result.






  1. Create a sentinel.conf under /opt/myredisconf/ with the following contents:

    sentinel monitor redismaster34 192.168.236.34 6379 1
    # No password is set, so no auth is needed:
    # sentinel auth-pass <master-name> <password>
    sentinel down-after-milliseconds redismaster34 5000
    # Heartbeat: every sentinel node periodically sends PING to decide whether the redis data
    # nodes and the other sentinel nodes are reachable; if no valid reply arrives within
    # down-after-milliseconds, the node is judged unreachable.
    sentinel parallel-syncs redismaster34 2
    # Limits how many slaves start replicating the new master at once after a failover.
    # Replication usually does not block the master, but many slaves syncing at the same
    # time does cost the master's machine network bandwidth and disk IO.
    sentinel failover-timeout redismaster34 30000
    # Failover timeout, applied to each stage of the failover:
    # 1) pick a suitable slave;
    # 2) promote the chosen slave to master;
    # 3) tell the remaining slaves to replicate the new master;
    # 4) once the old master recovers, tell it to replicate the new master.
    # sentinel notification-script <master-name> <script-path>
    # During a failover, warning-level sentinel events (such as -sdown, subjectively down,
    # and -odown, objectively down) trigger the script at this path, passing the event
    # details as arguments.
    # sentinel client-reconfig-script <master-name> <script-path>
    # After a failover finishes, the script at this path is triggered and passed parameters
    # describing the failover result.
  2. Configure master-slave replication across the three redis hosts


    • Start redis on 192.168.236.35 and on 192.168.236.36, open a client on each, and run:

      slaveof 192.168.236.34 6379 # afterwards, use info replication to check the state
  3. Start the sentinel

    redis-sentinel /opt/myredisconf/sentinel.conf


    22354:X 16 Mar 2019 13:54:01.941 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
    22354:X 16 Mar 2019 13:54:01.941 # Redis version=5.0.2, bits=64, commit=00000000, modified=0, pid=22354, just started
    22354:X 16 Mar 2019 13:54:01.941 # Configuration loaded
    22354:X 16 Mar 2019 13:54:01.943 * Increased maximum number of open files to 10032 (it was originally set to 1024).
               _.-``__ ''-._
          _.-``    `.  `_.  ''-._           Redis 5.0.2 (00000000/0) 64 bit
      .-`` .-```.  ```\/    _.,_ ''-._
     (    '      ,       .-`  | `,    )     Running in sentinel mode
     |`-._`-...-` __...-.``-._|'` _.-'|     Port: 26379
     |    `-._   `._    /     _.-'    |     PID: 22354
      `-._    `-._  `-./  _.-'    _.-'
     |`-._`-._    `-.__.-'    _.-'_.-'|
     |    `-._`-._        _.-'_.-'    |
      `-._    `-._`-.__.-'_.-'    _.-'
     |`-._`-._    `-.__.-'    _.-'_.-'|
     |    `-._`-._        _.-'_.-'    |
      `-._    `-._`-.__.-'_.-'    _.-'
          `-._    `-.__.-'    _.-'
              `-._        _.-'
    22354:X 16 Mar 2019 13:54:01.944 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
    22354:X 16 Mar 2019 13:54:01.963 # Sentinel ID is a11334fcd3f61919fe8f6bb9986fb9cf9c9e7e88
    22354:X 16 Mar 2019 13:54:01.963 # +monitor master redismaster34 6379 quorum 1
    22354:X 16 Mar 2019 13:54:01.965 * +slave slave 6379 @ redismaster34 6379
    22354:X 16 Mar 2019 13:54:01.966 * +slave slave 6379 @ redismaster34 6379


How redis sentinel works





sentinel leader election





  • Analyzing failover logs
  • Node operations and maintenance
  • Highly available read/write splitting