
Installing Kubernetes



0 Building a K8s cluster

Official docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

GitHub: https://github.com/kubernetes/kubeadm

In this course: use kubeadm to build a k8s cluster out of 3 machines, 1 master node and 2 worker nodes.

If your machine cannot handle that, you can also use an online environment, minikube, or a 1-master/1-worker setup.

Requirements

  • One or more machines running one of:
    • Ubuntu 16.04+
    • Debian 9+
    • CentOS 7 (used in this course)
    • Red Hat Enterprise Linux (RHEL) 7
    • Fedora 25+
    • HypriotOS v1.0.1+
    • Container Linux (tested with 1800.6.0)
  • 2 GB or more of RAM per machine (any less will leave little room for your apps)
  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster (public or private network is fine)
  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
  • Certain ports are open on your machines. See here for more details.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
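kubeadm runs its own preflight checks during `kubeadm init`, but the minimums above can be sanity-checked up front. A minimal sketch, assuming a Linux shell; the `check_*` helper names are my own, not kubeadm tooling:

```shell
#!/bin/sh
# Sanity-check the kubeadm minimums listed above: >=2 CPUs, >=2 GB RAM, swap off.
# The check_* helpers are made-up names for this sketch, not kubeadm tooling.

check_cpus() {       # $1 = CPU count
  [ "$1" -ge 2 ]
}
check_mem_mb() {     # $1 = total RAM in MB
  [ "$1" -ge 2048 ]
}
check_swap_lines() { # $1 = line count of /proc/swaps (1 = header only, so swap is off)
  [ "$1" -le 1 ]
}

cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
swap_lines=$(wc -l < /proc/swaps)

check_cpus "$cpus"             && echo "cpu: ok ($cpus)"        || echo "cpu: need >= 2, have $cpus"
check_mem_mb "$mem_mb"         && echo "ram: ok (${mem_mb} MB)" || echo "ram: need >= 2048 MB, have ${mem_mb} MB"
check_swap_lines "$swap_lines" && echo "swap: off"              || echo "swap: enabled, run swapoff -a"
```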

1 Version alignment

  • Docker 18.09.0
  • kubeadm-1.14.0-0
  • kubelet-1.14.0-0
  • kubectl-1.14.0-0
    • k8s.gcr.io/kube-apiserver:v1.14.0
    • k8s.gcr.io/kube-controller-manager:v1.14.0
    • k8s.gcr.io/kube-scheduler:v1.14.0
    • k8s.gcr.io/kube-proxy:v1.14.0
    • k8s.gcr.io/pause:3.1
    • k8s.gcr.io/etcd:3.3.10
    • k8s.gcr.io/coredns:1.3.1
  • calico:v3.9

2 Prepare 3 CentOS machines

Prepare CentOS 7 virtual machines according to your own resources.

Make sure they can ping each other, i.e. that they sit on the same network; the VM requirements were described above.

3 Update and install dependencies

Run on all 3 machines

yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

If yum reports that no package jq is available, proceed as follows:


wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum install -y jq

4 Install Docker

Following the Docker installation approach covered earlier,

install Docker on every machine, version 18.09.0.

  1. Install the required dependencies
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Set up the Docker repository
`Add the repo`
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
`Refresh the yum cache`
sudo yum makecache fast

[Also configure the Aliyun registry mirror]

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://ty2xkivr.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
  3. Install Docker
yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
  4. Start Docker
    sudo systemctl start docker && sudo systemctl enable docker
    

5 Edit the hosts file

  1. master

    # Set the master's hostname and edit the hosts file
    sudo hostnamectl set-hostname m
    
    vi /etc/hosts
    192.168.8.51 m
    192.168.8.61 w1
    192.168.8.62 w2
    
  2. Both workers

    # Set worker01/02's hostname and edit the hosts file
    sudo hostnamectl set-hostname w1
    sudo hostnamectl set-hostname w2
    
    vi /etc/hosts
    192.168.8.51 m
    192.168.8.61 w1
    192.168.8.62 w2
    
  3. Test connectivity with ping
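Step 3 can be scripted; a small sketch that pings each hostname from /etc/hosts once (`check_hosts` is a made-up helper name, not a standard tool):

```shell
#!/bin/sh
# Ping each node name once to confirm the machines can reach each other.
# check_hosts is a made-up helper name for this sketch.
check_hosts() {
  for h in "$@"; do
    if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
      echo "$h reachable"
    else
      echo "$h UNREACHABLE"
    fi
  done
}

# On any of the three machines:
check_hosts m w1 w2
```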

6 Basic system prerequisites

01 `Disable the firewall`
	systemctl stop firewalld && systemctl disable firewalld
02 `Disable SELinux`
	setenforce 0
	sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
03 `Disable swap`
	swapoff -a
	sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
04 `Configure iptables ACCEPT rules`
	iptables -F && iptables -X && iptables \
    -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
05 `Set kernel parameters`
# ===================================================================================
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
#====================================================================================
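The sed in step 03 comments out every /etc/fstab line containing "swap". Written as a stdin/stdout filter, the same substitution can be tried safely before touching the real file (the `comment_swap` helper name is mine):

```shell
#!/bin/sh
# Same substitution as step 03, but as a filter instead of editing /etc/fstab in place.
# comment_swap is a made-up helper name for this sketch.
comment_swap() {
  sed '/swap/s/^\(.*\)$/#\1/g'
}

# A swap entry gets commented out; other lines pass through untouched:
printf '/dev/sda1 / ext4 defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' | comment_swap
```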

7 Installing kubeadm, kubelet and kubectl

  1. Configure the yum repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  2. Install kubeadm, kubelet & kubectl

    yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
    
  3. Use the same cgroup driver for Docker and k8s

    1. Edit the Docker config file
      vi /etc/docker/daemon.json
      and add the following:

      "exec-opts": ["native.cgroupdriver=systemd"],
      

      # Restart docker
      systemctl restart docker
      
    2. kubelet — if this step reports "directory not exist", that is fine too; just keep going

      sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      systemctl enable kubelet && systemctl start kubelet
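Hand-editing daemon.json makes it easy to break the JSON (a missing or extra comma). Since jq was installed back in step 3, the exec-opts key can be merged in programmatically instead; a sketch, with `merge_cgroup_driver` as a made-up helper name:

```shell
#!/bin/sh
# Merge exec-opts into an existing daemon.json with jq rather than editing by hand.
# merge_cgroup_driver is a made-up helper name for this sketch.
merge_cgroup_driver() { # $1 = path to daemon.json
  tmp=$(mktemp)
  jq '. + {"exec-opts": ["native.cgroupdriver=systemd"]}' "$1" > "$tmp" && mv "$tmp" "$1"
}

# Usage on a node:
#   merge_cgroup_driver /etc/docker/daemon.json && systemctl restart docker
```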
      

8 Domestic mirrors for the proxy/pause/scheduler images

  1. List the images kubeadm uses

    kubeadm config images list
    

    Notice that these are all foreign-hosted images

    k8s.gcr.io/kube-apiserver:v1.14.0
    k8s.gcr.io/kube-controller-manager:v1.14.0
    k8s.gcr.io/kube-scheduler:v1.14.0
    k8s.gcr.io/kube-proxy:v1.14.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    


  2. Work around the inaccessible foreign registries

    Create a kubeadm.sh script to pull the images, re-tag them, and delete the original tags

    vi kubeadm.sh
    

    kubeadm.sh contents:

    #!/bin/bash
    
    set -e
    
    KUBE_VERSION=v1.14.0
    KUBE_PAUSE_VERSION=3.1
    ETCD_VERSION=3.3.10
    CORE_DNS_VERSION=1.3.1
    
    GCR_URL=k8s.gcr.io
    ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
    
    images=(kube-proxy:${KUBE_VERSION}
    kube-scheduler:${KUBE_VERSION}
    kube-controller-manager:${KUBE_VERSION}
    kube-apiserver:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd:${ETCD_VERSION}
    coredns:${CORE_DNS_VERSION})
    
    for imageName in ${images[@]} ; do
      docker pull $ALIYUN_URL/$imageName
      docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
      docker rmi $ALIYUN_URL/$imageName
    done
    
  3. Run the script and check the images

    # Run the script
    sh ./kubeadm.sh
    # Check the images
    docker images
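To confirm nothing was missed, the expected list can be diffed against what Docker actually has. A sketch; `missing_images` is a made-up helper name:

```shell
#!/bin/sh
# Print every expected image that is not present in the actual list.
# missing_images is a made-up helper: $1 = file of expected repo:tag lines, $2 = file of actual lines.
missing_images() {
  sort "$1" -o "$1.sorted"
  sort "$2" -o "$2.sorted"
  comm -23 "$1.sorted" "$2.sorted"   # lines only in the expected file
  rm -f "$1.sorted" "$2.sorted"
}

# Usage on a node:
#   kubeadm config images list > expected.txt
#   docker images --format '{{.Repository}}:{{.Tag}}' > actual.txt
#   missing_images expected.txt actual.txt   # anything printed still needs pulling
```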
    
  4. Push these images to your own Aliyun registry [optional, depending on your setup]

    # Log in to your Aliyun registry
    docker login --username=xxx registry.cn-hangzhou.aliyuncs.com
    password: ******
    vi kubeadm-push-aliyun.sh
    
    #!/bin/bash
    
    set -e
    
    KUBE_VERSION=v1.14.0
    KUBE_PAUSE_VERSION=3.1
    ETCD_VERSION=3.3.10
    CORE_DNS_VERSION=1.3.1
    
    GCR_URL=k8s.gcr.io
    ALIYUN_URL=registry.cn-shenzhen.aliyuncs.com/soulballed
    
    images=(kube-proxy:${KUBE_VERSION}
    kube-scheduler:${KUBE_VERSION}
    kube-controller-manager:${KUBE_VERSION}
    kube-apiserver:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd:${ETCD_VERSION}
    coredns:${CORE_DNS_VERSION})
    
    for imageName in ${images[@]} ; do
      docker tag $GCR_URL/$imageName $ALIYUN_URL/$imageName
      docker push $ALIYUN_URL/$imageName
      docker rmi $ALIYUN_URL/$imageName
    done
    
  5. Run the script

    sh ./kubeadm-push-aliyun.sh
    

9 kubeadm init: initialize the master

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Process

  1. The kubeadm init flow

    1. Run a series of preflight checks to confirm this machine can run kubernetes
    2. Generate the certificates kubernetes needs to serve clients, under the directory
      /etc/kubernetes/pki/*
    3. Generate the config files the other components need to reach the kube-apiserver
      ls /etc/kubernetes/
      admin.conf controller-manager.conf kubelet.conf scheduler.conf
    4. Generate the Pod config files for the Master components.
      ls /etc/kubernetes/manifests/*.yaml
      kube-apiserver.yaml
      kube-controller-manager.yaml
      kube-scheduler.yaml
    5. Generate the Pod YAML file for etcd.
      ls /etc/kubernetes/manifests/*.yaml
      kube-apiserver.yaml
      kube-controller-manager.yaml
      kube-scheduler.yaml
      etcd.yaml
    6. Once these YAML files appear under /etc/kubernetes/manifests/, which the kubelet watches, the kubelet automatically creates the pods they define, i.e. the master component containers. After the master containers start, kubeadm polls localhost:6443/healthz, the master components' health check URL, until they are fully up
    7. Generate a bootstrap token for the cluster
    8. Save important master-node information such as ca.crt into etcd via a ConfigMap, for later use when joining worker nodes
    9. Finally, install the default add-ons; kubernetes requires the kube-proxy and DNS add-ons
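The wait in step 6 can be sketched as a simple poll loop against the healthz URL (the `wait_healthz` helper is my own name, not kubeadm code):

```shell
#!/bin/sh
# Poll a healthz URL until it answers "ok" or the timeout expires.
# wait_healthz is a made-up helper sketching what kubeadm's wait does.
wait_healthz() { # $1 = URL, $2 = timeout in seconds
  deadline=$(( $(date +%s) + $2 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if [ "$(curl -ks "$1" 2>/dev/null)" = "ok" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage on the master:
#   wait_healthz https://localhost:6443/healthz 60 && echo "control plane up"
```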
  2. Initialize the master node

    Note: this step is performed on the master node

    # with the images already available locally
    kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=172.16.11.128 --pod-network-cidr=10.244.0.0/16
    `[To reset the cluster state and start again: kubeadm reset, then redo the step above]`
    

    Be sure to save the kubeadm join command printed at the end

    # ================================================================================
    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    	
    You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
         https://kubernetes.io/docs/concepts/cluster-administration/addons/
    	
    Then you can join any number of worker nodes by running the following on each as root:
    	
    kubeadm join 172.16.11.128:6443 --token 2wmfq6.wqdr6h7yf6qi8jhx \
      --discovery-token-ca-cert-hash sha256:1c937057679de26fb044fc352bc05426719f65c85fbebeeb650a6c271b176789
    # ================================================================================
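If the join output is lost, a fresh token can be printed with `kubeadm token create --print-join-command`, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. The openssl pipeline below is the one given in the kubeadm reference docs; only the `ca_hash` wrapper name is mine:

```shell
#!/bin/sh
# Recompute the sha256 hash used by --discovery-token-ca-cert-hash from a CA cert.
# ca_hash is just a wrapper name; the pipeline is from the kubeadm reference docs.
ca_hash() { # $1 = path to ca.crt
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Usage on the master:
#   echo "sha256:$(ca_hash /etc/kubernetes/pki/ca.crt)"
```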
    
  3. Follow the hints in the log output

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Now run kubectl cluster-info to check that it worked

  4. Verify by checking the pods

    After a short wait you will see that components like etcd, controller-manager and scheduler have all been installed as pods

    Note: coredns has not started; it needs a network plugin installed first

    kubectl get pods -n kube-system [list the kube-system pods]
    kubectl get pods --all-namespaces [list all pods]
    

    #=================================================================================
    NAME                        READY   STATUS    RESTARTS   AGE
    coredns-fb8b8dccf-f7g6g     0/1     Pending   0          7m30s
    coredns-fb8b8dccf-hx765     0/1     Pending   0          7m30s
    etcd-m                      1/1     Running   0          6m30s
    kube-apiserver-m            1/1     Running   0          6m36s
    kube-controller-manager-m   1/1     Running   0          6m42s
    kube-proxy-w9m72            1/1     Running   0          7m30s
    kube-scheduler-m            1/1     Running   0          6m24s
    #=================================================================================
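To wait on this table in a script rather than by eye, the STATUS column can be filtered; `not_running` is a made-up helper name:

```shell
#!/bin/sh
# Print the name and status of every pod in a `kubectl get pods` table
# whose STATUS column is not "Running". not_running is a made-up helper.
not_running() {
  awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
}

# Example against a captured table (on the master you would pipe
# `kubectl get pods -n kube-system` straight into not_running):
printf 'NAME READY STATUS RESTARTS AGE\ncoredns-1 0/1 Pending 0 7m\netcd-m 1/1 Running 0 6m\n' | not_running
# -> coredns-1 Pending
```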
    
  5. Health check

    curl -k https://localhost:6443/healthz
    
    #==================================================================================
    [root@master-kubeadm-k8s ~]# curl -k https://localhost:6443/healthz
    ok
    #==================================================================================
    

10 Deploy the calico network plugin

Choosing a network plugin: https://kubernetes.io/docs/concepts/cluster-administration/addons/

The calico plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

Note: calico is likewise installed from the master node

01 `Optionally pull the images by hand first` [the pull can be slow]
      curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml | grep image [the versions change; pull whatever images are actually listed]
# =================================================================================
      image: calico/cni:v3.9.3
      image: calico/pod2daemon-flexvol:v3.9.3
      image: calico/node:v3.9.3
      image: calico/kube-controllers:v3.9.3
# ===================================================================================
      `Pull the official images`
      docker pull calico/cni:v3.9.3
      docker pull calico/pod2daemon-flexvol:v3.9.3
      docker pull calico/node:v3.9.3
      docker pull calico/kube-controllers:v3.9.3

      `If the official pulls are too slow, use Jack's Aliyun mirrors`
      docker pull registry.cn-hangzhou.aliyuncs.com/itcrazy2016/kube-controllers:v3.9.3
      docker pull registry.cn-hangzhou.aliyuncs.com/itcrazy2016/cni:v3.9.3
      docker pull registry.cn-hangzhou.aliyuncs.com/itcrazy2016/pod2daemon-flexvol:v3.9.3
      docker pull registry.cn-hangzhou.aliyuncs.com/itcrazy2016/node:v3.9.3

      `Re-tag`
      docker tag registry.cn-hangzhou.aliyuncs.com/itcrazy2016/kube-controllers:v3.9.3 calico/kube-controllers:v3.9.3
      docker tag registry.cn-hangzhou.aliyuncs.com/itcrazy2016/cni:v3.9.3 calico/cni:v3.9.3
      docker tag registry.cn-hangzhou.aliyuncs.com/itcrazy2016/pod2daemon-flexvol:v3.9.3 calico/pod2daemon-flexvol:v3.9.3
      docker tag registry.cn-hangzhou.aliyuncs.com/itcrazy2016/node:v3.9.3 calico/node:v3.9.3

      `Remove the registry.cn-hangzhou.aliyuncs.com/itcrazy2016/ images`
      # Note: re-tagging does not change the imageId, so this also matches the calico images
      # Use this command with care: it deletes the re-tagged images along with the originals
      docker rmi -f $(docker images registry.cn-hangzhou.aliyuncs.com/itcrazy2016/* -aq)

02 `Install calico into k8s`
      yum install -y wget
      wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
      kubectl apply -f calico.yaml

03 `Confirm that calico installed successfully`
      kubectl get pods --all-namespaces -w [watch all pods in real time]

11 kubeadm join

Remember the final output printed when the master node was initialized [note: use your own values; mine below are only a reference]

  1. Run that command on worker01 and worker02

    kubeadm join 172.16.11.128:6443 --token 2wmfq6.wqdr6h7yf6qi8jhx \
        --discovery-token-ca-cert-hash sha256:1c937057679de26fb044fc352bc05426719f65c85fbebeeb650a6c271b176789
    

    If an error appears at this step, see:

    https://blog.csdn.net/an_zhenwei/article/details/19152739

  2. Check the cluster from the master node

    kubectl get nodes
    # The status may show NotReady at first; after a short wait it becomes Ready
    
    NAME                   STATUS   ROLES    AGE     VERSION
    master-kubeadm-k8s     Ready    master   19m     v1.14.0
    worker01-kubeadm-k8s   Ready    <none>   3m6s    v1.14.0
    worker02-kubeadm-k8s   Ready    <none>   2m41s   v1.14.0
    


12 Working with Pods again

  1. Create a pod definition file, e.g. pod_nginx_rs.yaml
    cat > pod_nginx_rs.yaml <<EOF
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
      template:
        metadata:
          name: nginx
          labels:
            tier: frontend
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF
    
  2. Create the pods from the pod_nginx_rs.yaml file
    kubectl apply -f pod_nginx_rs.yaml
    
  3. Inspect the pods
    kubectl get pods
    kubectl get pods -o wide
    kubectl describe pod nginx
    
  4. Scale the pods up through the ReplicaSet
    kubectl scale rs nginx --replicas=5
    kubectl get pods -o wide
    
  5. Delete the pods
    kubectl delete -f pod_nginx_rs.yaml
    
Contributors: soulballad