Install Nginx


Download and Install

Version 1.17.10 is used here: nginx-1.17.10.tar.gz. Place it in the appropriate directory.

# Extract the archive
tar -zxvf nginx-1.17.10.tar.gz -C /usr/local
# Install build dependencies
yum -y install pcre-devel openssl openssl-devel
# Enter the source directory
cd /usr/local/nginx-1.17.10
# Configure, compile and install
./configure
make && make install
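
If modules such as HTTPS support will be needed later, they have to be enabled at configure time. A minimal sketch, assuming the default install prefix and the stock SSL module (adjust to your own needs):

# Hypothetical configure invocation: explicit prefix plus the SSL module
./configure \
    --prefix=/usr/local/nginx \
    --with-http_ssl_module
make && make install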

Start and Stop

# Start the service
/usr/local/nginx/sbin/nginx
# Stop the service
/usr/local/nginx/sbin/nginx -s stop
# Reload the configuration
/usr/local/nginx/sbin/nginx -s reload

Note that its configuration file is located at:

/usr/local/nginx/conf/nginx.conf
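
Before reloading, it is worth validating an edited configuration; nginx's -t flag checks syntax without touching the running service:

# Test the configuration, then reload only if the test passes
/usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload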

Nginx Master-Backup Mode

A single VIP is used in front of two machines, one as master and one as backup, but only one machine serves traffic at a time. As long as the master does not fail, the backup sits idle, so for sites with few servers this scheme is not cost-effective.

No. | Hostname | Purpose | Server IP     | Role   | Nginx Version      | Keepalived
1   | cdh1     | nginx1  | 172.18.11.111 | Master | openresty/1.21.4.1 | keepalived-2.1.5
2   | cdh2     | nginx2  | 172.18.11.112 | Backup | openresty/1.21.4.1 | keepalived-2.1.5
3   |          | VIP     | 172.18.11.110 |        |                    |

Master-Keepalived

global_defs {   
   router_id cdh1
}
vrrp_script check_nginx_alive {
    script "/usr/local/script/check_nginx_alive.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.110
    }
    track_script {
        check_nginx_alive
    }
}
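
After starting keepalived on the master, you can confirm that the VIP is bound to the interface (enp0s8 in this environment):

systemctl start keepalived
# The VIP 172.18.11.110 should appear as a /32 address on enp0s8
ip addr show dev enp0s8 | grep 172.18.11.110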

Backup-Keepalived

On the backup nginx server 172.18.11.112 the configuration is almost identical; three items differ and one must match:

  1. router_id differs;
  2. state BACKUP differs;
  3. priority differs;
  4. virtual_router_id must be identical.

The configuration is as follows:
global_defs {   
   router_id cdh2
}
vrrp_script check_nginx_alive {
    script "/usr/local/script/check_nginx_alive.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s8
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.110
    }
    track_script {
        check_nginx_alive
    }
}

check_nginx_alive

Since this is just for a quick test, a simple first script is enough: if no nginx process is found, stop the keepalived service. The test script is as follows:

#!/bin/bash
# If no nginx process is found, stop keepalived so the VIP can fail over
pidof nginx
if [ $? -ne 0 ]; then
    killall keepalived
fi

Once testing is done, the script can additionally try to start nginx, stopping keepalived only if nginx is still not running after the attempt. The script is as follows:

#!/bin/bash
# Count running nginx processes
counter=$(ps -C nginx --no-heading | wc -l)
if [ "${counter}" = "0" ]; then
    # Try to start nginx once
    /usr/local/openresty/nginx/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading | wc -l)
    if [ "${counter}" = "0" ]; then
        # nginx would not start; stop keepalived to release the VIP
        killall keepalived
    fi
fi
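
Note that killall keepalived bypasses the weight/fall/rise logic declared in vrrp_script. An alternative sketch, assuming keepalived 2.x script-tracking semantics: return a non-zero exit status and let keepalived subtract the configured weight. For that to actually trigger a failover, the weight must exceed the master/backup priority gap (e.g. weight -60 with priorities 150/100; the -5 above would not be enough):

#!/bin/bash
# Exit 1 while nginx is down; after "fall" consecutive failures
# keepalived lowers this node's priority by the configured weight
pidof nginx > /dev/null 2>&1 || exit 1
exit 0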

Add execute permission

chmod +x /usr/local/script/check_nginx_alive.sh
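
The script can be run once by hand to check its behavior (careful: with nginx stopped, the first variant kills keepalived immediately):

# With nginx running, this should exit 0
/usr/local/script/check_nginx_alive.sh
echo $?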

Nginx Configuration

In the nginx configuration on each nginx server, only server_name needs to be changed to the VIP address; nothing else changes, and for load balancing clients simply access this VIP.

Modify /usr/local/openresty/nginx/html/index.html on each server and add a line that identifies it, e.g. <p>Current Nginx server IP: 172.18.11.111</p> on nginx1 and <p>Current Nginx server IP: 172.18.11.112</p> on nginx2.

server {
    listen       80;
    server_name  172.18.11.110;
    location / {
        root   html;
        index  index.html index.htm;
    }   
}   

Test and Verify

Start nginx on both servers in turn, and restart keepalived:

systemctl restart nginx.service
systemctl restart keepalived

Run ip addr on both servers: the master nginx1's NIC has automatically acquired the VIP, while the backup nginx2 has not and sits idle.
Visiting the VIP 172.18.11.110 in a browser displays, as expected: Current Nginx server IP: 172.18.11.111
After stopping nginx1 with nginx -s stop, the page changes to: Current Nginx server IP: 172.18.11.112
Finally, after nginx and keepalived are started again on nginx1, it reclaims the VIP and everything returns to normal, with nginx2 back in its idle state. You can verify the results yourself.
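
To watch a failover at the protocol level, you can capture the VRRP advertisements on either node; an optional diagnostic, assuming tcpdump is installed:

# VRRP advertisements are multicast to 224.0.0.18 as IP protocol 112
tcpdump -i enp0s8 -nn ip proto 112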

Nginx Dual-Master Mode

Two VIPs are used in front of two machines that act as master and backup for each other, so both machines serve traffic at the same time. When one machine fails, its requests shift onto the other. This fits the current architecture well.

No. | Hostname | Purpose | Server IP     | Role          | Nginx Version      | Keepalived
1   | cdh1     | nginx1  | 172.18.11.111 | Master/Backup | openresty/1.21.4.1 | keepalived-2.1.5
2   | cdh2     | nginx2  | 172.18.11.112 | Master/Backup | openresty/1.21.4.1 | keepalived-2.1.5
3   |          | VIP1    | 172.18.11.110 |               |                    |
4   |          | VIP2    | 172.18.11.120 |               |                    |

Once the master-backup mode is understood, dual-master is much easier to configure. Each keepalived configuration file simply gains a second vrrp_instance named VI_2 with a few parameters changed, defining another VIP, 172.18.11.120:

nginx1: state BACKUP, priority 100, virtual_router_id 52
nginx2: state MASTER, priority 150, virtual_router_id 52

Nginx1-Keepalived

global_defs {   
   router_id cdh1
}
vrrp_script check_nginx_alive {
    script "/usr/local/script/check_nginx_alive.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.110
    }
    track_script {
        check_nginx_alive
    }
}   
vrrp_instance VI_2 {
    state BACKUP
    interface enp0s8
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.120
    }
    track_script {
        check_nginx_alive
    }  
}

Nginx2-Keepalived

global_defs {   
   router_id cdh2
}
vrrp_script check_nginx_alive {
    script "/usr/local/script/check_nginx_alive.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s8
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.110
    }
    track_script {
        check_nginx_alive
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface enp0s8
    virtual_router_id 52
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.120
    }
    track_script {
        check_nginx_alive
    }  
}

Nginx Configuration

Likewise, when nginx is used for load balancing, the nginx configuration needs the VIP2 address 172.18.11.120 added to server_name. The configuration is as follows:

server {
    listen       80;
    server_name  172.18.11.110 172.18.11.120;
    location / {
        root   html;
        index  index.html index.htm;
    }   
}   

Test and Verify

Start the services on both machines in turn:

systemctl restart nginx.service
systemctl restart keepalived

Visiting 172.18.11.110 and 172.18.11.120 in a browser both work, with traffic spread across nginx1 and nginx2.
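
The same check from the command line, assuming both index pages carry the per-server marker line added earlier:

# While both nodes are up, each VIP should be answered by a different backend
for vip in 172.18.11.110 172.18.11.120; do
    curl -s http://$vip/ | grep "Current Nginx server IP"
done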

Dual-master configuration complete!

LVS + Keepalived High Availability

Keepalived provides the virtual IP (VIP): clients access the VIP and connect to the keepalived master, which uses LVS round-robin scheduling to spread requests across nginx-1 and nginx-2.

No. | Hostname | IP            | Purpose          | Version
1   | cdh1     | 172.18.11.111 | Nginx1           | openresty/1.21.4.1
2   | cdh2     | 172.18.11.112 | Nginx2           | openresty/1.21.4.1
3   | cdh1     | 172.18.11.111 | Master           | keepalived-2.1.5
4   | cdh2     | 172.18.11.112 | Backup           | keepalived-2.1.5
5   |          | 172.18.11.110 | Virtual IP (VIP) |

lo Interface Configuration Script

This environment uses LVS DR mode, so the LVS VIP has to be configured on the lo interface of each backend nginx server:

vim /etc/init.d/realserver

#!/bin/bash
# The VIP: change to the VIP of your own environment
SNS_VIP=172.18.11.110
. /etc/rc.d/init.d/functions
case "$1" in        # $1 is the first argument passed to this script
    start)
        ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP # bring up lo:0 with the VIP, netmask and broadcast
        /sbin/route add -host $SNS_VIP dev lo:0     # route add: add a host route for the VIP
        # Suppress ARP for the VIP on this real server (required by LVS DR mode)
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        sysctl -p >/dev/null 2>&1
        echo "RealServer Start OK"
        ;;
    stop)
        ifconfig lo:0 down
        route del $SNS_VIP >/dev/null 2>&1          # route del: delete the host route
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        echo "RealServer Stopped"
        ;;
    *)
        echo "Usage: $0 {start|stop}"       # $0 is the name of the script itself
        exit 1                              # non-zero exit: abnormal termination
esac                                        # end of case
exit 0                                      # zero exit: normal termination

Run this script on both nginx1 and nginx2:

chmod 755 /etc/init.d/realserver
chmod 755 /etc/rc.d/init.d/functions
service realserver start
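
A quick sanity check that the script took effect on a real server:

# The VIP should be bound to lo as a /32
ip addr show dev lo | grep 172.18.11.110
# arp_ignore should be 1 and arp_announce 2
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce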

Master-Keepalived

Configuration on the master:

! Configuration File for keepalived

global_defs {
   router_id NG1 # id of this LVS node; should be unique within the network
}

vrrp_instance VI_1 {
    state MASTER # role of this keepalived instance, MASTER or BACKUP; must be uppercase
    interface enp0s8 # network interface name
    virtual_router_id 51  # virtual router id; must match on master and backup
    priority 100 # priority; higher wins, and the master's must be higher than the backup's
    advert_int 1 # advertisement interval, default 1s
    authentication { # password of at most 8 characters; must match on master and backup or they cannot communicate
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.110 # the virtual IP (VIP); several may be set, one per line
    }
}

virtual_server 172.18.11.110 80 {
    delay_loop 6 # health-check interval in seconds
    lb_algo rr # rr is plain round-robin; for production, weighted least-connection (wlc) is recommended
    lb_kind DR # DR mode: responses return directly to the client without passing back through LVS
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 172.18.11.111 80 { # backend server address
        weight 1  # node weight; a higher number means a higher share of traffic
        TCP_CHECK {
            retry 3
            connect_port 80
            connect_timeout 10
            delay_before_retry 3
        }
    }
    real_server 172.18.11.112 80 {
        weight 1
        TCP_CHECK {
            retry 3
            connect_port 80
            connect_timeout 10
            delay_before_retry 3
        }
    }
}

Backup-Keepalived

Configuration on the backup:

! Configuration File for keepalived

global_defs {
   router_id NG2
}

vrrp_instance VI_1 {
    state BACKUP  # changed to BACKUP; must be uppercase
    interface enp0s8
    virtual_router_id 51
    priority 90  # changed to 90; must be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.11.110
    }
}

virtual_server 172.18.11.110 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 172.18.11.111 80 {
        weight 1
        TCP_CHECK {
            retry 3
            connect_port 80
            connect_timeout 10
            delay_before_retry 3
        }
    }
    real_server 172.18.11.112 80 {
        weight 1
        TCP_CHECK {
            retry 3
            connect_port 80
            connect_timeout 10
            delay_before_retry 3
        }
    }
}

Test and Verify

# Show the configured virtual servers and the weight of each real server
ipvsadm -Ln
# Show the connections currently tracked by the ipvs module
ipvsadm -Lnc
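
keepalived programs IPVS through the kernel itself; ipvsadm is only the userspace inspection tool. If the command is missing, it can be installed from the standard CentOS repositories:

yum -y install ipvsadm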

Visiting 172.18.11.110 displays the page normally.

  • Stop the master keepalived

    systemctl stop keepalived.service
    

    Before stopping, the master keepalived still holds the VIP; check with ip a show dev enp0s8:

    [root@cdh1 script]# ip a show dev enp0s8
    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    	link/ether 08:00:27:29:90:d5 brd ff:ff:ff:ff:ff:ff
    	inet 172.18.11.111/24 brd 172.18.11.255 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet 172.18.11.110/32 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet6 fe80::a00:27ff:fe29:90d5/64 scope link
    		valid_lft forever preferred_lft forever
    [root@cdh1 script]# systemctl stop keepalived
    [root@cdh1 script]# ip a show dev enp0s8
    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    	link/ether 08:00:27:29:90:d5 brd ff:ff:ff:ff:ff:ff
    	inet 172.18.11.111/24 brd 172.18.11.255 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet6 fe80::a00:27ff:fe29:90d5/64 scope link
    		valid_lft forever preferred_lft forever
    [root@cdh1 script]#
    

    After stopping, the VIP shows up on the backup; check with ip a show dev enp0s8:

    [root@cdh2 script]# ip a show dev enp0s8
    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    	link/ether 08:00:27:51:d4:37 brd ff:ff:ff:ff:ff:ff
    	inet 172.18.11.112/24 brd 172.18.11.255 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet6 fe80::a00:27ff:fe51:d437/64 scope link
    		valid_lft forever preferred_lft forever
    [root@cdh2 script]# ip a show dev enp0s8
    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    	link/ether 08:00:27:51:d4:37 brd ff:ff:ff:ff:ff:ff
    	inet 172.18.11.112/24 brd 172.18.11.255 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet 172.18.11.110/32 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet6 fe80::a00:27ff:fe51:d437/64 scope link
    		valid_lft forever preferred_lft forever
    [root@cdh2 script]#
    

    172.18.11.110 is still reachable at this point. Once the master keepalived is started again, the VIP returns to the master:

    [root@cdh1 script]# ps -ef|grep keepalived
    root     23345  4493  0 21:34 pts/0    00:00:00 grep --color=auto keepalived
    [root@cdh1 script]# systemctl start keepalived
    [root@cdh1 script]# ip a show dev enp0s8
    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    	link/ether 08:00:27:29:90:d5 brd ff:ff:ff:ff:ff:ff
    	inet 172.18.11.111/24 brd 172.18.11.255 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet 172.18.11.110/32 scope global enp0s8
    		valid_lft forever preferred_lft forever
    	inet6 fe80::a00:27ff:fe29:90d5/64 scope link
    		valid_lft forever preferred_lft forever
    [root@cdh1 script]#
    
  • Test stopping nginx1
    Before stopping, both nginx backends are listed; check with ipvsadm -l:

    [root@cdh1 script]# ipvsadm -l
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.18.11.110:http rr
      -> cdh1:http                    Route   1      0          0
      -> cdh2:http                    Route   1      2          0
    

    After stopping nginx1, only the 172.18.11.112 entry remains: the failed server has been removed, and requests can now reach only the nginx on 172.18.11.112:

    [root@cdh1 script]# ipvsadm -l
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.18.11.110:http rr
      -> cdh1:http                    Route   1      0          0
      -> cdh2:http                    Route   1      2          0
    [root@cdh1 script]# nginx -s stop
    [root@cdh1 script]# ipvsadm -l
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.18.11.110:http rr
      -> cdh2:http                    Route   1      2          0
    [root@cdh1 script]#
    
