Docker Study Notes

Posted by 佐手、 on 2020-08-18 23:29:38

V. Docker

1. Introduction

Docker is an open-source application container engine; it is a lightweight container technology.

Docker packages software into an image; the software inside the image is configured in advance, the image is published, and other users can then use the image directly.

A running instance of an image is called a container, and containers start very quickly.

2. Core Concepts

Docker host (Host): a machine with Docker installed (Docker runs directly on the operating system);

Docker client (Client): connects to the Docker host to perform operations;

Docker registry (Registry): stores packaged software images;

Docker image (Image): a piece of software packaged as an image, stored in a registry;

Docker container (Container): a running instance of an image; a container is one application, or group of applications, running independently.

Steps to use Docker (a minimal example follows the list):

1) Install Docker.

2) Find the image for the software you need in a Docker registry.

3) Run the image with Docker; the image produces a Docker container.

4) Starting and stopping the container is starting and stopping the software.
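
A minimal sketch of that full cycle, using the official redis image as an example (the image and container names here are only illustrative):

# find and pull the image
docker search redis
docker pull redis
# run the image as a container, then stop / restart / remove it
docker run --name myredis -d redis
docker stop myredis
docker start myredis
docker rm myredis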

3. Installing Docker

1) Install a Linux virtual machine

    1) Install VMware or VirtualBox.

    2) Install CentOS 7.

    3) Double-click to start the Linux virtual machine and log in with root / 960614.

    4) Connect to the Linux server with a client to run commands.

    5) Check the IP address of the Linux machine:

ip addr

    6) Connect to Linux with the client.

2) Install Docker on the Linux virtual machine

Steps:

1. Check the kernel version; it must be 3.10 or higher
uname -r
2. Install Docker
yum install docker
3. Enter y to confirm the installation
4. Start Docker
[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker -v
Docker version 1.12.6, build 3e8e77d/1.12.6
5. Enable Docker at boot
[root@localhost ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
6. Stop Docker
systemctl stop docker

4. Common Docker Commands & Operations

1) Image operations

Search: docker search <keyword> — e.g. docker search redis; we usually look up image details, such as the available TAGs, on Docker Hub.
Pull: docker pull <image>:<tag> — the :tag part is optional; the tag is usually the software version and defaults to latest.
List: docker images — list all local images.
Remove: docker rmi <image-id> — delete the specified local image.

https://hub.docker.com/

2) Container operations

Software image (like a QQ installer) → run the image → get a container (the running software, i.e. QQ running).

Steps:

1. Search for an image
[root@localhost ~]# docker search tomcat
2. Pull the image
[root@localhost ~]# docker pull tomcat
3. Start a container from the image
docker run --name mytomcat -d tomcat:latest
4. List the running containers
docker ps
5. Stop a running container
docker stop <container-id>
6. List all containers
docker ps -a
7. Start a container
docker start <container-id>
8. Remove a container
docker rm <container-id>
9. Start a Tomcat with port mapping
[root@localhost ~]# docker run -d -p 8888:8080 tomcat
-d: run in the background
-p: map a host port to a container port (host-port:container-port)

10. For simplicity of the demo, the Linux firewall was turned off
systemctl status firewalld    # check the firewall status
systemctl stop firewalld      # stop the firewall
11. View a container's logs
docker logs <container-name or container-id>

For more commands, see
https://docs.docker.com/engine/reference/commandline/docker/
and the documentation of each individual image.
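
To confirm that the port mapping from step 9 works, a simple check from the host is to request the mapped port (8888 here matches the mapping above); the exact response depends on the Tomcat image version:

curl http://localhost:8888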

3) Example: installing MySQL

docker pull mysql

Incorrect startup

[root@localhost ~]# docker run --name mysql01 -d mysql
42f09819908bb72dd99ae19e792e0a5d03c48638421fa64cce5f8ba0f40f5846

MySQL exited:
[root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                           PORTS               NAMES
42f09819908b        mysql               "docker-entrypoint.sh"   34 seconds ago      Exited (1) 33 seconds ago                            mysql01
538bde63e500        tomcat              "catalina.sh run"        About an hour ago   Exited (143) About an hour ago                       compassionate_goldstine
c4f1ac60b3fc        tomcat              "catalina.sh run"        About an hour ago   Exited (143) About an hour ago                       lonely_fermi
81ec743a5271        tomcat              "catalina.sh run"        About an hour ago   Exited (143) About an hour ago                       sick_ramanujan


// error log
[root@localhost ~]# docker logs 42f09819908b
error: database is uninitialized and password option is not specified 
  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD   # one of these three variables must be set

Correct startup

[root@localhost ~]# docker run --name mysql01 -e MYSQL_ROOT_PASSWORD=123456 -d mysql
b874c56bec49fb43024b3805ab51e9097da779f2f572c22c695305dedd684c5f
[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b874c56bec49        mysql               "docker-entrypoint.sh"   4 seconds ago       Up 3 seconds        3306/tcp            mysql01

With port mapping

[root@localhost ~]# docker run -p 3306:3306 --name mysql02 -e MYSQL_ROOT_PASSWORD=123456 -d mysql
ad10e4bc5c6a0f61cbad43898de71d366117d120e39db651844c0e73863b9434
[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
ad10e4bc5c6a        mysql               "docker-entrypoint.sh"   4 seconds ago       Up 2 seconds        0.0.0.0:3306->3306/tcp   mysql02

A few other advanced operations

docker run --name mysql03 -v /conf/mysql:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Mounts the host directory /conf/mysql into the container at /etc/mysql/conf.d.
To change the MySQL configuration you then only need to put the config files into the custom host directory (/conf/mysql).

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
Passes some MySQL configuration parameters on the command line.
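
A quick way to verify mysql02 (started above with port 3306 mapped and root password 123456) is to open a MySQL client inside the container, for example:

docker exec -it mysql02 mysql -uroot -p123456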

VI. Deploying and Operating a Front-End/Back-End Separated Project on Docker

  1. Update the software packages first

    yum -y update
    
  2. Install Docker

    yum install -y docker
    
  3. Start, restart, and stop Docker

    service docker start
    service docker restart
    service docker stop
    
  4. Search for an image

    docker search <image-name>
    
  5. Pull an image

    docker pull <image-name>
    
  6. List images

    docker images
    
  7. Remove an image

    docker rmi <image-name>
    
  8. Run a container

    docker run <options> <image-name>
    
  9. List containers

    docker ps -a
    
  10. Stop, pause, and unpause a container

    docker stop <container-id>
    docker pause <container-id>
    docker unpause <container-id>
    
  11. Inspect a container

    docker inspect <container-id>
    
  12. Remove a container

    docker rm <container-id>
    
  13. Manage data volumes

    docker volume create <volume-name>   #create a data volume
    docker volume rm <volume-name>   #remove a data volume
    docker volume inspect <volume-name>   #inspect a data volume
    
  14. Manage networks

    docker network ls   #list networks
    docker network create --subnet=<subnet> <network-name>   #create a network
    docker network rm <network-name>   #remove a network
    
  15. Prevent Docker from losing its network after the VM is suspended and resumed

    vi /etc/sysctl.conf
    

    Add the setting net.ipv4.ip_forward=1 to the file.

    #restart the network service
    systemctl restart network
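
    A minimal sketch of this whole step in one go (append the setting, reload sysctl, then restart networking):

    echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    sysctl -p
    systemctl restart network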
    

Installing a PXC Cluster with Load Balancing and Dual-Node Hot Standby

  1. Pull the PXC image

    docker pull percona/percona-xtradb-cluster
    
  2. Re-tag the PXC image with a shorter name

    docker tag percona/percona-xtradb-cluster pxc
    
  3. Create the net1 network

    docker network create --subnet=172.18.0.0/16 net1
    
  4. Create 5 data volumes

    docker volume create --name v1
    docker volume create --name v2
    docker volume create --name v3
    docker volume create --name v4
    docker volume create --name v5
    
  5. Create a backup volume (used for hot-backup data)

    docker volume create --name backup
    
  6. Create the 5-node PXC cluster

    Note: after each MySQL container is created it has to run PXC initialization and join the cluster, so wait about one minute before connecting to it with a MySQL client. Also, the first MySQL node must have started successfully and be reachable from a MySQL client before you create the other nodes.

    #create the 1st MySQL node
    docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -v v1:/var/lib/mysql -v backup:/data --privileged --name=node1 --net=net1 --ip 172.18.0.2 pxc
    #create the 2nd MySQL node
    docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql -v backup:/data --privileged --name=node2 --net=net1 --ip 172.18.0.3 pxc
    #create the 3rd MySQL node
    docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v3:/var/lib/mysql --privileged --name=node3 --net=net1 --ip 172.18.0.4 pxc
    #create the 4th MySQL node
    docker run -d -p 3309:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v4:/var/lib/mysql --privileged --name=node4 --net=net1 --ip 172.18.0.5 pxc
    #create the 5th MySQL node
    docker run -d -p 3310:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v5:/var/lib/mysql -v backup:/data --privileged --name=node5 --net=net1 --ip 172.18.0.6 pxc
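
    Once all five nodes are up, one quick (illustrative) sanity check is to query the wsrep status on node1 with the mysql client shipped in the PXC image, using the root password set above:

    docker exec -it node1 mysql -uroot -pabc123456 -e "show status like 'wsrep_cluster_size';"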
    
  7. Pull the HAProxy image

    docker pull haproxy
    
  8. Write the HAProxy configuration file on the host

    vi /home/soft/haproxy.cfg
    

    The configuration file is as follows:

    global
        #working directory
        chroot /usr/local/etc/haproxy
        #log to the local5 facility of the rsyslog service (/var/log/local5), level info
        log 127.0.0.1 local5 info
        #run as a daemon
        daemon
    
    defaults
        log     global
        mode    http
        #log format
        option  httplog
        #do not log the load balancer's heartbeat checks
        option  dontlognull
        #connect timeout (milliseconds)
        timeout connect 5000
        #client timeout (milliseconds)
        timeout client  50000
        #server timeout (milliseconds)
        timeout server  50000
    
    #statistics / monitoring page
    listen  admin_stats
        #IP and port the statistics page listens on
        bind  0.0.0.0:8888
        #protocol
        mode        http
        #relative URI of the statistics page
        stats uri   /dbs
        #title of the statistics report
        stats realm     Global\ statistics
        #login credentials
        stats auth  admin:abc123456
    #database load balancing
    listen  proxy-mysql
        #IP and port to listen on
        bind  0.0.0.0:3306
        #network protocol
        mode  tcp
        #load-balancing algorithm (round robin)
        #round robin: roundrobin
        #weighted: static-rr
        #least connections: leastconn
        #source IP: source
        balance  roundrobin
        #log format
        option  tcplog
        #create a haproxy user with no privileges and an empty password in MySQL; HAProxy uses this account for MySQL health checks
        option  mysql-check user haproxy
        server  MySQL_1 172.18.0.2:3306 check weight 1 maxconn 2000
        server  MySQL_2 172.18.0.3:3306 check weight 1 maxconn 2000
        server  MySQL_3 172.18.0.4:3306 check weight 1 maxconn 2000
        server  MySQL_4 172.18.0.5:3306 check weight 1 maxconn 2000
        server  MySQL_5 172.18.0.6:3306 check weight 1 maxconn 2000
        #use TCP keepalive to detect dead connections
        option  tcpka
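
    Before these health checks can succeed, the privilege-less, password-less haproxy account mentioned in the comment above has to exist in MySQL. A minimal sketch (run against node1, credentials as configured above) might be:

    docker exec -it node1 mysql -uroot -pabc123456 -e "CREATE USER 'haproxy'@'%' IDENTIFIED BY '';"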
    
  9. Create two HAProxy containers

    #create the 1st HAProxy load-balancer container
    docker run -it -d -p 4001:8888 -p 4002:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h1 --privileged --net=net1 --ip 172.18.0.7 haproxy
    #enter the h1 container and start HAProxy
    docker exec -it h1 bash
    haproxy -f /usr/local/etc/haproxy/haproxy.cfg
    #create the 2nd HAProxy load-balancer container
    docker run -it -d -p 4003:8888 -p 4004:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h2 --privileged --net=net1 --ip 172.18.0.8 haproxy
    #enter the h2 container and start HAProxy
    docker exec -it h2 bash
    haproxy -f /usr/local/etc/haproxy/haproxy.cfg
    
  10. Install Keepalived inside the HAProxy containers and set up a virtual IP

    #enter the h1 container
    docker exec -it h1 bash
    #update the package lists
    apt-get update
    #install vim
    apt-get install vim
    #install Keepalived
    apt-get install keepalived
    #edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    #ping the virtual IP from the host
    ping 172.18.0.201
    

    The configuration file content is as follows:

    vrrp_instance  VI_1 {
        state  MASTER
        interface  eth0
        virtual_router_id  51
        priority  100
        advert_int  1
        authentication {
            auth_type  PASS
            auth_pass  123456
        }
        virtual_ipaddress {
            172.18.0.201
        }
    }
    
    #enter the h2 container
    docker exec -it h2 bash
    #update the package lists
    apt-get update
    #install vim
    apt-get install vim
    #install Keepalived
    apt-get install keepalived
    #edit the Keepalived configuration file
    vim /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    #ping the virtual IP from the host
    ping 172.18.0.201
    

    The configuration file content is as follows:

    vrrp_instance  VI_1 {
        state  MASTER
        interface  eth0
        virtual_router_id  51
        priority  100
        advert_int  1
        authentication {
            auth_type  PASS
            auth_pass  123456
        }
        virtual_ipaddress {
            172.18.0.201
        }
    }
    
  11. Install Keepalived on the host to implement dual-node hot standby

    #install Keepalived on the host
    yum -y install keepalived
    #edit the Keepalived configuration file
    vi /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    

    The Keepalived configuration file is as follows:

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
           	192.168.99.150
        }
    }
    
    virtual_server 192.168.99.150 8888 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 172.18.0.201 8888 {
            weight 1
        }
    }
    
    virtual_server 192.168.99.150 3306 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
    
        real_server 172.18.0.201 3306 {
            weight 1
        }
    }
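
    To verify the standby setup, you can (for example) ping the host-level virtual IP and open the HAProxy statistics page through it; address, port, and credentials come from the configurations above:

    ping 192.168.99.150
    #then, from a browser on the same network:
    #http://192.168.99.150:8888/dbs   (login admin / abc123456)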
    
  12. Hot-backup the data

    #enter the node1 container
    docker exec -it node1 bash
    #update the package lists
    apt-get update
    #install the hot-backup tool
    apt-get install percona-xtrabackup-24
    #full hot backup
    innobackupex --user=root --password=abc123456 /data/backup/full
    
  13. Cold-restore the data
    Stop the other 4 nodes and remove them

    docker stop node2
    docker stop node3
    docker stop node4
    docker stop node5
    docker rm node2
    docker rm node3
    docker rm node4
    docker rm node5
    

    Delete the MySQL data inside the node1 container

    #delete the data
    rm -rf /var/lib/mysql/*
    #prepare the backup (apply the transaction logs)
    innobackupex --user=root --password=abc123456 --apply-log /data/backup/full/2018-04-15_05-09-07/
    #restore the data
    innobackupex --user=root --password=abc123456 --copy-back  /data/backup/full/2018-04-15_05-09-07/
    

    Re-create the other 4 nodes to rebuild the PXC cluster

Installing Redis and Configuring a Redis Cluster

  1. Pull the Redis image

    docker pull yyyyttttwwww/redis
    
  2. Create the net2 network

    docker network create --subnet=172.19.0.0/16 net2
    
  3. Create the 6 Redis containers

    docker run -it -d --name r1 -p 5001:6379 --net=net2 --ip 172.19.0.2 redis bash
    docker run -it -d --name r2 -p 5002:6379 --net=net2 --ip 172.19.0.3 redis bash
    docker run -it -d --name r3 -p 5003:6379 --net=net2 --ip 172.19.0.4 redis bash
    docker run -it -d --name r4 -p 5004:6379 --net=net2 --ip 172.19.0.5 redis bash
    docker run -it -d --name r5 -p 5005:6379 --net=net2 --ip 172.19.0.6 redis bash
    docker run -it -d --name r6 -p 5006:6379 --net=net2 --ip 172.19.0.7 redis bash
    
  4. Start the Redis server on all 6 nodes

    #enter the r1 node
    docker exec -it r1 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #enter the r2 node
    docker exec -it r2 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #enter the r3 node
    docker exec -it r3 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #enter the r4 node
    docker exec -it r4 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #enter the r5 node
    docker exec -it r5 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #enter the r6 node
    docker exec -it r6 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    
  5. Create the Redis Cluster

    #run the following commands on the r1 node
    cd /usr/redis/src
    mkdir -p ../cluster
    cp redis-trib.rb ../cluster/
    cd ../cluster
    #create the cluster
    ./redis-trib.rb create --replicas 1 172.19.0.2:6379 172.19.0.3:6379 172.19.0.4:6379 172.19.0.5:6379 172.19.0.6:6379 172.19.0.7:6379
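
    A quick sanity check (illustrative, assuming redis-cli sits alongside redis-server in /usr/redis/src) is to connect in cluster mode from any node and inspect the cluster state:

    /usr/redis/src/redis-cli -c -h 172.19.0.2 -p 6379
    #then, inside the client:
    #cluster info
    #cluster nodes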
    

Packaging and Deploying the Back-End Project

  1. Go into the renren open-source back-end project and build it (adjust the configuration file to change the port, and build three times to produce three JAR files)

    mvn clean install -Dmaven.test.skip=true
    
  2. Pull the Java image

    docker pull java
    
  3. Create 3 Java containers

    #create a data volume and upload the JAR file to it
    docker volume create j1
    #start the container
    docker run -it -d --name j1 -v j1:/home/soft --net=host java
    #enter the j1 container
    docker exec -it j1 bash
    #start the Java project
    nohup java -jar /home/soft/renren-fast.jar
    
    #create a data volume and upload the JAR file to it
    docker volume create j2
    #start the container
    docker run -it -d --name j2 -v j2:/home/soft --net=host java
    #enter the j2 container
    docker exec -it j2 bash
    #start the Java project
    nohup java -jar /home/soft/renren-fast.jar
    
    #create a data volume and upload the JAR file to it
    docker volume create j3
    #start the container
    docker run -it -d --name j3 -v j3:/home/soft --net=host java
    #enter the j3 container
    docker exec -it j3 bash
    #start the Java project
    nohup java -jar /home/soft/renren-fast.jar
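
    Assuming the three JARs were built to listen on ports 6001, 6002 and 6003 (the ports the Nginx upstream below points at), each back-end node can be checked from the host with, for example:

    curl http://192.168.99.104:6001
    curl http://192.168.99.104:6002
    curl http://192.168.99.104:6003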
    
  4. Pull the Nginx image

    docker pull nginx
    
  5. Create the Nginx containers and configure load balancing

    The /home/n1/nginx.conf configuration file on the host is as follows:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	upstream tomcat {
    		server 192.168.99.104:6001;
    		server 192.168.99.104:6002;
    		server 192.168.99.104:6003;
    	}
    	server {
            listen       6101;
            server_name  192.168.99.104;
            location / {
                proxy_pass   http://tomcat;
                index  index.html index.htm;
            }
        }
    }
    

    Create the 1st Nginx node

    docker run -it -d --name n1 -v /home/n1/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    
    

    The /home/n2/nginx.conf configuration file on the host is as follows:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	upstream tomcat {
    		server 192.168.99.104:6001;
    		server 192.168.99.104:6002;
    		server 192.168.99.104:6003;
    	}
    	server {
            listen       6102;
            server_name  192.168.99.104;
            location / {
                proxy_pass   http://tomcat;
                index  index.html index.htm;
            }
        }
    }
    

    Create the 2nd Nginx node

    docker run -it -d --name n2 -v /home/n2/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    
  6. Install Keepalived in the Nginx containers

    #enter the n1 node
    docker exec -it n1 bash
    #update the package lists
    apt-get update
    #install vim
    apt-get install vim
    #install Keepalived
    apt-get install keepalived
    #edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.151
        }
    }
    virtual_server 192.168.99.151 6201 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6101 {
            weight 1
        }
    }
    
    #enter the n2 node
    docker exec -it n2 bash
    #update the package lists
    apt-get update
    #install vim
    apt-get install vim
    #install Keepalived
    apt-get install keepalived
    #edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.151
        }
    }
    virtual_server 192.168.99.151 6201 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6102 {
            weight 1
        }
    }
    

Packaging and Deploying the Front-End Project

  1. Run the build command in the front-end project directory

    npm run build
    
  2. Copy the files from the build output directory to /home/fn1/renren-vue, /home/fn2/renren-vue, and /home/fn3/renren-vue on the host (a sketch follows)
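
    Assuming the build output sits in a local directory named build and the host is reachable at 192.168.99.104 (both illustrative), the copy could look like:

    scp -r build/* root@192.168.99.104:/home/fn1/renren-vue/
    scp -r build/* root@192.168.99.104:/home/fn2/renren-vue/
    scp -r build/* root@192.168.99.104:/home/fn3/renren-vue/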

  3. Create 3 Nginx nodes to serve the front-end project

    The /home/fn1/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	server {
    		listen 6501;
    		server_name  192.168.99.104;
    		location  /  {
    			root  /home/fn1/renren-vue;
    			index  index.html;
    		}
    	}
    }
    
    #start the fn1 node
    docker run -it -d --name fn1 -v /home/fn1/nginx.conf:/etc/nginx/nginx.conf -v /home/fn1/renren-vue:/home/fn1/renren-vue --privileged --net=host nginx
    

    The /home/fn2/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	server {
    		listen 6502;
    		server_name  192.168.99.104;
    		location  /  {
    			root  /home/fn2/renren-vue;
    			index  index.html;
    		}
    	}
    }
    
    #start the fn2 node
    docker run -it -d --name fn2 -v /home/fn2/nginx.conf:/etc/nginx/nginx.conf -v /home/fn2/renren-vue:/home/fn2/renren-vue --privileged --net=host nginx
    

    The /home/fn3/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	server {
    		listen 6503;
    		server_name  192.168.99.104;
    		location  /  {
    			root  /home/fn3/renren-vue;
    			index  index.html;
    		}
    	}
    }
    

    Start the fn3 node

    #start the fn3 node
    docker run -it -d --name fn3 -v /home/fn3/nginx.conf:/etc/nginx/nginx.conf -v /home/fn3/renren-vue:/home/fn3/renren-vue --privileged --net=host nginx
    
  4. Configure load balancing

    The /home/ff1/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	upstream fn {
    		server 192.168.99.104:6501;
    		server 192.168.99.104:6502;
    		server 192.168.99.104:6503;
    	}
    	server {
            listen       6601;
            server_name  192.168.99.104;
            location / {
                proxy_pass   http://fn;
                index  index.html index.htm;
            }
        }
    }
    
    #start the ff1 node
    docker run -it -d --name ff1 -v /home/ff1/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    

    The /home/ff2/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    
    	upstream fn {
    		server 192.168.99.104:6501;
    		server 192.168.99.104:6502;
    		server 192.168.99.104:6503;
    	}
    	server {
            listen       6602;
            server_name  192.168.99.104;
            location / {
                proxy_pass   http://fn;
                index  index.html index.htm;
            }
        }
    }
    
    #start the ff2 node
    docker run -it -d --name ff2 -v /home/ff2/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    
  5. Configure dual-node hot standby

    #enter the ff1 node
    docker exec -it ff1 bash
    #update the package lists
    apt-get update
    #install vim
    apt-get install vim
    #install Keepalived
    apt-get install keepalived
    #edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.152
        }
    }
    virtual_server 192.168.99.152 6701 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6601 {
            weight 1
        }
    }
    
    #enter the ff2 node
    docker exec -it ff2 bash
    #update the package lists
    apt-get update
    #install vim
    apt-get install vim
    #install Keepalived
    apt-get install keepalived
    #edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.152
        }
    }
    virtual_server 192.168.99.152 6701 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6602 {
            weight 1
        }
    }
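
    As a final, illustrative end-to-end check (addresses and ports taken from the configurations above; the back-end load-balancer VIP is 192.168.99.151 and the front-end one is 192.168.99.152):

    #back-end entry point (Keepalived VIP -> Nginx -> Java nodes)
    curl http://192.168.99.151:6201
    #front-end entry point (Keepalived VIP -> Nginx -> static front-end nodes)
    curl http://192.168.99.152:6701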
    