Installation


Environment

IP: 10.25.26.136

Architecture: standalone (single instance)

Version: redis-3.2.0


1. Location of the installation package

There is a redis-3.2.0.tar.gz package under /home/bigdata/opt/redis.

Alternatively, it can be downloaded manually from the official site:

wget http://download.redis.io/releases/redis-3.2.0.tar.gz



2. Install the dependency libraries (two approaches; either one is sufficient)

1) Install with yum

yum -y install ruby rubygems

For the repository configuration, the official yum repositories are preferred.

The two packages above are dependencies that Redis needs in certain deployment scenarios (for example, the Ruby-based redis-trib.rb cluster tool copied later in this guide).


2) Install via gem

gem install redis

If the default gem source cannot be reached, switch sources as follows:

gem list --local

gem sources --add https://ruby.taobao.org/

gem sources --add http://gems.ruby-china.com/ --remove http://rubygems.org/

gem install redis --version 3.2.1
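
Optionally, the active gem sources and the installed gem can be verified afterwards (output varies by environment):

gem sources --list

gem list redis --local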


3. Unpack the archive and compile/install

cp /home/bigdata/opt/redis/redis-3.2.0.tar.gz /usr/local/sbin/

cd /usr/local/sbin/

tar xf redis-3.2.0.tar.gz

cd redis-3.2.0

make

make install

cd src/
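
Before moving on, the build can be sanity-checked (assuming /usr/local/bin is on the PATH); both commands should report the 3.2.0 version that was just built:

redis-server --version

redis-cli --version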


4. Put the operation scripts in the default location

After make install completes, a set of Redis command binaries is created under /usr/local/bin,

as listed below:

[bigdata@localhost ~]$ ll /usr/local/bin/

total 8568

-rwxr-xr-x. 1 root root 2087539 May 26 20:18 redis-benchmark

-rwxr-xr-x. 1 root root   25173 May 26 20:18 redis-check-aof

-rwxr-xr-x. 1 root root   52820 May 26 20:18 redis-check-dump

-rwxr-xr-x. 1 root root 2211621 May 26 20:18 redis-cli

lrwxrwxrwx. 1 root root      12 May 26 20:18 redis-sentinel -> redis-server

-rwxr-xr-x. 1 root root 4341421 May 26 20:18 redis-server


It is recommended to also copy the official cluster management script (redis-trib.rb, located in the src/ directory entered above) there:

cp redis-trib.rb /usr/local/bin/


Create the configuration and log directories

mkdir /opt/redis   # directory for the configuration file and data

mkdir /var/log/redis/   # directory for log files


Write the configuration file

/opt/redis/redis-6380.conf

The contents are described in detail below.

Edit the configuration file:

The production settings that must be in effect (i.e. the non-comment lines) can be listed with:

grep '^[^#]' /opt/redis/redis-6380.conf
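
For reference, the uncommented directives collected from the annotated file below should produce grep output along these lines (abridged; the replication, AOF-rewrite, data-type and client-buffer directives follow the same pattern as annotated below):

daemonize no
pidfile /var/run/redis-6380.pid
port 6380
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
stop-writes-on-bgsave-error yes
rdbcompression no
rdbchecksum no
dbfilename dump-6380.rdb
dir ./
maxmemory 1GB
maxmemory-policy volatile-lru
appendonly yes
appendfsync everysec
...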



Detailed configuration notes

# Note on units: when a memory size is needed, it can be specified in the usual forms such as 1k, 5GB, 4M and so on
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes

# By default Redis does not run as a daemon. Set this to yes if you want it to run in the background.
# When Redis runs as a daemon it writes a pid file to /var/run/redis.pid.
daemonize no

# When running as a daemon, Redis writes the pid to /var/run/redis.pid by default,
# but you can specify a different file location here.
pidfile /var/run/redis-6380.pid

# Listening port, default 6379. If set to 0, Redis will not listen for client connections on a TCP socket.
port 6380


# Maximum length of the TCP listen backlog.
#
# In a high-concurrency environment this value should be raised to avoid slow-client-connection problems.
# The Linux kernel silently truncates it to the value of /proc/sys/net/core/somaxconn,
# so both somaxconn and tcp_max_syn_backlog must be raised for the setting to take effect.
tcp-backlog 511

# By default Redis listens for client connections on all network interfaces available on the server.
# To listen on one or more specific interfaces, bind one or several IP addresses.
#
# Example, multiple IPs separated by spaces:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
# Specify the path of the unix socket, if Redis should also listen on one.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client has been idle for N seconds (0 disables the timeout).
timeout 0

# TCP keepalive.
#
# If nonzero, SO_KEEPALIVE is used to send TCP ACKs to clients in the absence of communication.
# This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Define the log level.
# It can be one of:
# debug (suited to development and testing)
# verbose (many rarely useful messages, but not as noisy as debug)
# notice (suited to production)
# warning (only important messages are logged)
loglevel notice

# Location of the log file. An empty string logs to standard output (with this setup, stdout is redirected to /var/log/redis/redis-6380.log by the startup command in step 5).
logfile ""

# Set the number of databases.
# The default database is DB 0; a different one can be selected per connection with select <dbid>,
# where dbid is a value between 0 and databases - 1.
databases 16

# By default, if the last background save failed, Redis stops accepting writes.
# This is a hard way of making the user aware that data is not being persisted to disk correctly,
# since otherwise nobody would notice the disaster.
#
# Once the background saving process starts working again, Redis automatically allows writes again.
#
# However, if you have proper monitoring in place you may not want this behaviour; set it to no in that case.
stop-writes-on-bgsave-error yes

# Whether to compress string objects with LZF when dumping the .rdb database.
# The default is yes.
# Set it to no if you want the saving child process to save some CPU,
# at the cost of a possibly larger dump file.
rdbcompression no

# Whether to checksum the RDB file.
rdbchecksum no

# Name of the dump file.
dbfilename dump-6380.rdb

# The working directory.
# The dbfilename above only specifies a file name;
# the file is written inside this directory. This option must be a directory, not a file name.
dir ./

# If the master requires password authentication, set it here.
# masterauth <master-password>
# On an internal network a password is usually unnecessary.

# When a slave loses its connection with the master, or while replication is still in progress,
# the slave can behave in two different ways:
#
# 1) If set to yes, the slave will still reply to client requests, possibly with stale data,
# or with empty data if this is the first synchronization.
#
# 2) If set to no, the slave will reply with a "SYNC with master in progress" error
# to every command except INFO and SLAVEOF.
slave-serve-stale-data no

# You can configure whether a slave instance accepts writes or not.
# Writing to a slave can be useful to store ephemeral data,
# since data written to a slave is easily deleted after a resync with the master,
# but it can also cause problems if clients write to it because of a misconfiguration.
#
# Since Redis 2.6 slaves are read-only by default.
#
# Note: read-only slaves are not designed to be exposed to untrusted clients
# on the internet. They are just a protection layer against misuse of the instance.
# A read-only slave still exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve the
# security of read-only slaves by using 'rename-command' to shadow the
# administrative / dangerous commands.
slave-read-only yes


# Diskless replication: with 'no' the master creates an RDB file on disk before sending it to slaves;
# the delay is the number of seconds to wait for more slaves to arrive before starting a transfer.
repl-diskless-sync no
repl-diskless-sync-delay 5

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If yes, Redis uses fewer TCP packets and less bandwidth to send data to slaves,
# but this can add up to about 40 milliseconds of delay (with the default Linux kernel configuration)
# before the data appears on the slave side.
#
# If no, the delay for data to appear on the slave side is reduced, but more bandwidth is used for replication.
#
# The default is optimized for low latency, but under very high traffic, or when master and slaves
# are many network hops apart, setting it to yes may be preferable.
repl-disable-tcp-nodelay no

# After the master has had no connected slaves for some time, the replication backlog is freed.
# The option below sets the number of seconds that must elapse before it is released.
#
# A value of 0 means never release the backlog.
# repl-backlog-ttl 3600

# When the master stops working correctly, Redis Sentinel selects a new master from the slaves.
# The slave with the lowest priority value is preferred for promotion;
# a priority of 0 means the slave can never be selected.
#
# The default priority is 100.
slave-priority 100

# If you set this value, Redis will remove keys according to the selected
# eviction policy once the cached data reaches this memory limit.
#
# If Redis cannot remove keys according to the policy, or if the policy is set to 'noeviction',
# Redis will start replying with errors to write commands such as set, lpush and so on,
# while continuing to serve read-only commands such as get.
# Set to 1GB for now; it can be raised later.
maxmemory 1GB

# Max memory policy: there are six options.
#
# volatile-lru -> remove a key among those with an expire set, using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't evict at all, just return an error on write operations
maxmemory-policy volatile-lru
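
Both settings can also be inspected or adjusted at runtime without a restart; for example (the 2gb value here is only illustrative):

redis-cli -p 6380 config get maxmemory
redis-cli -p 6380 config set maxmemory 2gb
redis-cli -p 6380 config get maxmemory-policy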

# By default Redis dumps the dataset to disk asynchronously (RDB snapshots). This mode is good enough
# in many applications, but a problem with the Redis process or a power outage may result in a few
# minutes of writes being lost (depending on the configured save points).
#
# The Append Only File is an alternative persistence mode that provides much better durability.
# For instance, using the default fsync policy (see below), Redis can lose just one second of writes
# in a dramatic event such as a server power outage, or a single write if something goes wrong with
# the Redis process itself but the operating system is still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If AOF is enabled, at startup Redis loads the AOF, the file with the better durability guarantees.
appendonly yes

# The name of the append only file (default: "appendonly.aof").
# Note: appendfilename must be a plain file name without a path; the file is created inside the
# working directory set by 'dir' (./ here, i.e. /opt/redis when the server is started from that directory).
appendfilename "appendonly-6380.aof"

# The fsync() call tells the operating system to actually write data to disk
# instead of waiting for more data in the output buffer. Some OSes will really flush
# data on disk, some others will just try to do it as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, safest.
# everysec: fsync only once per second. A compromise.
#
# The default is "everysec", as that's usually the right compromise between speed and data safety.
# It's up to you to understand whether you can relax this to "no", letting the operating system flush
# the output buffer when it wants, for better performance (but if you can live with the idea of some
# data loss, consider the default snapshotting persistence instead), or, on the contrary, use
# "always", which is very slow but a bit safer than everysec.
#
# For details see the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
appendfsync everysec

# When the AOF fsync policy is set to always or everysec, and a background saving process
# (a background save or an AOF log background rewrite) is performing a lot of I/O against the disk,
# in some Linux configurations Redis may block for too long on the fsync() call.
# Note that there is no fix for this currently, as even performing the fsync in a different thread
# would block our synchronous write(2) call.
#
# To mitigate this problem it is possible to use the following option, which prevents
# fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while a child process is saving, the durability of Redis is the same as
# "appendfsync none". In practical terms, up to 30 seconds of log may be lost in the worst
# case (with the default Linux settings).
#
# If you have latency problems set this to "yes". Otherwise leave it as "no",
# which is the safest choice from the point of view of durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file, implicitly calling BGREWRITEAOF,
# when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the latest rewrite
# (if no rewrite has happened since the restart, the size of the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is bigger than the specified
# percentage, the rewrite is triggered. You also need to specify a minimal size for the AOF file
# to be rewritten; this is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but the file is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
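
An AOF rewrite can also be triggered manually, which is a simple way to verify that these settings behave as expected; one possible check from redis-cli:

redis-cli -p 6380 bgrewriteaof
redis-cli -p 6380 info persistence | grep aof_rewrite_in_progress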

# Redis can load a truncated AOF file at startup without requiring the redis-check-aof tool to be run first; this behaviour is controlled by the option below.
aof-load-truncated yes

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached, Redis will log that a script is still
# in execution after the maximum allowed time and will start to reply to queries with an error.
#
# When a long running script exceeds the maximum execution time, only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available.
# SCRIPT KILL can be used to stop a script that has not yet called any write commands.
# SHUTDOWN NOSAVE is the only way to shut down the server when the script has already issued
# write commands but the user does not want to wait for its natural termination.
#
# Set this to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

# Enable or disable cluster mode: yes starts the node as a cluster node, no starts it as a standalone instance.
cluster-enabled no


# Slow log. The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
# Here, commands taking longer than 10000 microseconds (10 ms) are recorded.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim the memory used by the slow log with SLOWLOG RESET.
# When a new slow log entry arrives and the maximum length has been reached, the oldest entry is removed to free space.
slowlog-max-len 128
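
Slow log entries can be inspected and cleared at runtime; for example:

redis-cli -p 6380 slowlog get 10
redis-cli -p 6380 slowlog len
redis-cli -p 6380 slowlog reset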

# Latency monitor.
# The Redis latency monitoring subsystem samples different operations at runtime
# in order to collect data about possible sources of latency.
# The collected information and reports can be printed with the LATENCY command; its subcommands include (from the source comments):
# LATENCY SAMPLES: return time-latency samples for the specified event.
# LATENCY LATEST: return the latest latency for all the event classes.
# LATENCY DOCTOR: returns a human readable analysis of instance latency.
# LATENCY GRAPH: provide an ASCII graph of the latency of the specified event.
#
# The system only records operations that take longer than the configured threshold, in milliseconds; 0 disables the feature.
# The threshold can be changed at runtime with "CONFIG SET latency-monitor-threshold <milliseconds>" without restarting Redis.
latency-monitor-threshold 0
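
As a quick illustration (the 100 ms threshold below is only an example), the monitor can be switched on and queried at runtime:

redis-cli -p 6380 config set latency-monitor-threshold 100
redis-cli -p 6380 latency latest
redis-cli -p 6380 latency doctor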

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# by zero or multiple characters. The empty string means that notifications
# are disabled at all.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
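
As a quick illustration of the flags above, expired-key notifications can be enabled at runtime and observed from a second redis-cli connection:

redis-cli -p 6380 config set notify-keyspace-events Ex
redis-cli -p 6380 psubscribe '__keyevent@0__:expired'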

# Hashes are encoded using a memory-efficient data structure (ziplist) when they have a small number
# of entries, and the biggest entry does not exceed a given threshold. These thresholds can be
# configured using the following directives.
# A ziplist needs little storage space, and while the hash stays small its performance is no different
# from a hashtable; once the number of entries or the size of a value exceeds the limits, the hash is
# converted to a hashtable.
# hash-max-ziplist-entries is the maximum number of entries allowed in a ziplist-encoded hash (default 512; 128 is recommended).
# hash-max-ziplist-value is the maximum size in bytes of an entry value in a ziplist-encoded hash (default 64; 1024 is recommended).
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
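
To see which internal encoding a given key currently uses, OBJECT ENCODING can be checked from redis-cli; a small illustration (the key name myhash is arbitrary):

redis-cli -p 6380 hset myhash field1 value1
redis-cli -p 6380 object encoding myhash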

# Similarly to hashes, small lists are also encoded in a special way (ziplist) in order
# to save a lot of space. The special representation is only used when
# the list stays under the following limits on number of entries and entry size:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# only of strings that happen to be integers in radix 10 in the range
# of 64-bit signed integers.
# The following configuration setting sets the limit on the size of the
# set in order to use this special memory-saving encoding (intset):
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded (ziplist) in
# order to save a lot of space. This encoding is only used when the length and
# element size of a sorted set are below the following limits:
# entries is the maximum number of elements, value is the maximum element size in bytes.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
# In short: above 16000 bytes the dense representation is always more memory efficient;
# ~3000 is the suggested value.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply form time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients
# slave -> slave clients and MONITOR clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# For example, with client-output-buffer-limit <class> 32mb 16mb 10
# the client is disconnected immediately if its output buffer reaches 32 megabytes,
# and also if it reaches 16 megabytes and stays over that limit for 10 seconds. 0 disables the limit.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 128mb 32mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients that timed out, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes




5. Start Redis

Once the configuration is in place, start Redis:

cd /opt/redis/

redis-server redis-6380.conf >> /var/log/redis/redis-6380.log 2>&1 &
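
A quick check that the instance came up correctly (port and log path as configured above); ping should answer PONG:

redis-cli -p 6380 ping

tail -n 20 /var/log/redis/redis-6380.log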




FAQ

System parameter tuning; adjust according to the actual environment.


The system default is usually 128, which keeps the effective TCP backlog below what Redis is configured for (tcp-backlog 511 above).

echo 1000 > /proc/sys/net/core/somaxconn   # maximum TCP listen backlog

Redis's main resource is memory; setting vm.overcommit_memory to 1 lets memory be committed to the fullest (Redis itself warns about this setting so that background saves do not fail on fork), though performance drops somewhat once swap is used.

vm.overcommit_memory = 1  # memory overcommit policy
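
To make both settings survive a reboot, they can also be written to /etc/sysctl.conf (a minimal sketch, assuming root privileges):

echo 'net.core.somaxconn = 1000' >> /etc/sysctl.conf
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
sysctl -p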