Installing a Highly Available Kubernetes Cluster on CentOS 7
Published: 2022-11-23 17:44 +0800 CST

I. Environment Preparation

1 Host Preparation

Host IP         Hostname   Main Components
10.200.20.116   master01   etcd, apiserver, controller-manager, scheduler, docker, proxy
10.200.20.117   master02   etcd, apiserver, controller-manager, scheduler, docker, proxy
10.200.20.118   master03   etcd, apiserver, controller-manager, scheduler, docker, proxy
10.200.20.119   worker01   kubelet, docker, proxy
10.200.20.120   worker02   kubelet, docker, proxy
10.200.20.121   VIP        virtual IP for the API server

II. Installation Preparation

2 Host Configuration

2.1 Set the Hostnames

Change the hostname on each of the five machines and update the hosts file at the same time.

$ hostnamectl set-hostname master01
$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.200.20.116  master01
10.200.20.117  master02
10.200.20.118  master03
10.200.20.119  worker01
10.200.20.120  worker02

2.1.2 Set the hostnames on the other machines

$ cat >> /etc/hosts << EOF
10.200.20.116  master01
10.200.20.117  master02
10.200.20.118  master03
10.200.20.119  worker01
10.200.20.120  worker02
EOF
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname worker01
hostnamectl set-hostname worker02
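
Optionally, a quick check that every node can reach the others by name (a small verification loop, not part of the original steps):

$ for h in master01 master02 master03 worker01 worker02; do
    ping -c 1 -W 1 $h > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"
done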

2.2 Host Tuning

Run on all hosts.

2.2.1 Disable swap

$ swapoff -a
$ sed -ri 's/.*swap.*/#&/' /etc/fstab 

2.2.2 Disable the firewall and SELinux

$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0    # takes effect immediately; the config change below applies after reboot
$ sed -i 's/=enforcing/=disabled/g' /etc/selinux/config

2.3 Kernel Parameters

This guide uses flannel for the cluster network, which requires the kernel parameter bridge-nf-call-iptables=1. Setting that parameter requires the br_netfilter module to be loaded.

2.3.1 Load the br_netfilter module

Check whether the br_netfilter module is already loaded:

$ lsmod |grep br_netfilter

If the module is not present, run the commands below; otherwise skip this step.

Load br_netfilter temporarily:

# this does not survive a reboot
$ modprobe br_netfilter

Load br_netfilter persistently:

$ cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x $file ] && $file
done
EOF
$ cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
$ chmod 755 /etc/sysconfig/modules/br_netfilter.modules
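
Note that /etc/rc.sysinit is a legacy mechanism. On CentOS 7 a simpler alternative (not part of the original steps) is to let systemd-modules-load handle it:

$ cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF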

2.3.2 Set the kernel parameters

Persistent change:

$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
$ sysctl -a | grep net.bridge.bridge-nf-call-iptables
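
kubeadm's preflight checks also expect IP forwarding to be enabled. If it is not already on, it can be appended to the same file (an optional addition, not in the original steps):

$ echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
$ sysctl -p /etc/sysctl.d/k8s.conf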

2.4 Add yum Repositories

Add the following repositories on all machines.

2.4.1 Add the Kubernetes repository

$ cd /etc/yum.repos.d/
$ cat <<EOF > kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.4.2 Add the Docker repository

$ cd /etc/yum.repos.d/
$ # if a docker-ce.repo already exists, back it up first: mv docker-ce.repo docker-ce.repo_bak
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
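
Optionally rebuild the yum cache so both new repositories are picked up (not in the original steps):

$ yum clean all
$ yum makecache fast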

2.5 Install Docker

Install on all servers.

2.5.1 Install dependencies

$ yum install -y yum-utils device-mapper-persistent-data lvm2

2.5.2 Install Docker

List the available Docker versions:

$ yum list docker-ce --showduplicates | sort -r

Install Docker:

$ yum install docker-ce -y
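
If you need to pin a specific release from the list above rather than the latest, append the version string; the version shown here is only an example:

$ yum install -y docker-ce-20.10.9 docker-ce-cli-20.10.9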

2.5.3 Start Docker

$ systemctl start docker
$ systemctl enable docker

2.5.4 Configure daemon.json

$ cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

2.5.5 Reload Docker

$ systemctl daemon-reload
$ systemctl restart docker
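
After the restart it is worth confirming that Docker is using the systemd cgroup driver, since it must match the kubelet's default driver in 1.22 (a quick check, not part of the original steps); the command should report systemd:

$ docker info | grep -i 'cgroup driver'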

III. Install and Configure keepalived

keepalived must be installed on all three master nodes.

3.1 Install keepalived

$ yum -y install keepalived

3.2 Configure keepalived

3.2.1 keepalived configuration on master01

$ cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens192
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.200.20.121
    }
}


3.2.2 keepalived configuration on master02

$ more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.200.20.121
    }
}


3.2.3 keepalived configuration on master03

$ more /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master03
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.200.20.121
    }
}
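
The configurations above only fail over when keepalived itself or the whole node goes down. As an optional improvement (a sketch, not part of the original setup), you can add a health check so the VIP also moves when the local kube-apiserver stops responding. First create a small check script on each master:

$ cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash
# exit non-zero if the local kube-apiserver does not answer on port 6443
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || exit 1
EOF
$ chmod +x /etc/keepalived/check_apiserver.sh

Then, in each keepalived.conf, add a vrrp_script block and reference it from vrrp_instance VI_1 with a track_script { check_apiserver } entry:

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    fall 3
    rise 2
    weight -30    # large enough to drop below the backup priorities (90 / 80)
}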


3.3 Start keepalived

$ systemctl start keepalived
$ systemctl enable keepalived
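
On master01 (the MASTER instance) the VIP should now be attached to the ens192 interface; a quick way to verify (adjust the interface name if yours differs):

$ ip addr show ens192 | grep 10.200.20.121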

IV. Install the Kubernetes Components

4.1 Install kubelet, kubeadm and kubectl

Install on all machines.

kubelet runs on every node in the cluster and is the agent that starts Pods and containers.
kubeadm is the command-line tool used to bootstrap and initialize the cluster.
kubectl is the command-line client for the cluster; it is used to deploy and manage applications, inspect resources, and create, delete and update components.
The version installed here is 1.22.2, the latest at the time of writing.
You can also install a different version if needed.

$ yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2

List the available versions:

$ yum list kubelet --showduplicates | sort -r

To install the latest version instead, omit the version suffix:

$ yum install -y kubelet kubeadm kubectl

4.2 Start kubelet

Start kubelet and enable it at boot:

$ systemctl enable kubelet && systemctl start kubelet
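
kubelet will keep restarting until the node is initialized or joined; that is expected at this stage. You can confirm the installed versions with:

$ kubelet --version
$ kubeadm version -o short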

V. Initialize Kubernetes

5.1 On master01

Create kubeadm-config.yaml (the same file name is passed to --config in the next step):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.22.2
apiServer:
  certSANs:    # list the hostname, IP and VIP of every kube-apiserver node
  - master01
  - master02
  - master03
  - worker01
  - worker02
  - 10.200.20.116
  - 10.200.20.117
  - 10.200.20.118
  - 10.200.20.119
  - 10.200.20.120
  - 10.200.20.121
controlPlaneEndpoint: "10.200.20.121:6443"
networking:
  podSubnet: "10.244.0.0/16"
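
As the init output below also notes, you can optionally pre-pull the control-plane images before initializing, using the same configuration file so the Aliyun image repository is honored:

$ kubeadm config images pull --config kubeadm-config.yaml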


5.2 Initialize the first master

$ kubeadm init --config=kubeadm-config.yaml

5.3 Expected output

$ kubeadm init --config=kubeadm-config.yaml 
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01 master02 master03 worker01 worker02] and IPs [10.96.0.1 10.200.20.116 10.200.20.121 10.200.20.117 10.200.20.118 10.200.20.119 10.200.20.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.200.20.116 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.200.20.116 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.037482 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mkpnzt.he3sxvnr1igi0xxm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 

5.4 Verify

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl get nodes;
NAME            STATUS     ROLES                  AGE    VERSION
master01   NotReady   control-plane,master   2m4s   v1.22.2

If the initialization fails, or the following error appears, you can re-initialize:

accepts at most 1 arg(s), received 3
To see the stack trace of this error execute with --v=5 or higher

If initialization fails, run kubeadm reset and then initialize again:

$ kubeadm reset
$ rm -rf $HOME/.kube/config
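
kubeadm reset does not remove everything. If a re-initialization still misbehaves, it may also help to clear the CNI configuration and iptables rules left behind (optional cleanup, not in the original steps):

$ rm -rf /etc/cni/net.d
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X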

5.5 Add the Other Machines

5.5.1 Add the SSH public key to the other masters

$ ssh-keygen -t rsa
$ ssh-copy-id -i master02
$ ssh-copy-id -i master03

Record the kubeadm join commands from the init output above; they are needed later to join the worker nodes and the other master nodes to the cluster.
Distribute the certificates from master01:
Run the script cert-main-master.sh on master01 to copy the certificates to master02 and master03.

USER=root
CONTROL_PLANE_IPS="10.200.20.117 10.200.20.118"
for host in ${CONTROL_PLANE_IPS}; do
    ssh ${USER}@${host} "mkdir -p /etc/kubernetes/pki/"
    ssh ${USER}@${host} "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/ca.crt
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/ca.key
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/sa.key
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/sa.pub
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.crt
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/front-proxy-ca.key
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
done
$ sh cert-main-master.sh 
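
A quick sanity check that the certificates actually arrived on the other masters (optional):

$ ssh root@10.200.20.117 "ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd"
$ ssh root@10.200.20.118 "ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd"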

5.5.2 Join master02 to the cluster

$ kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 \
    --control-plane 

Then run:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.5.3 Join master03 to the cluster

$ kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 \
    --control-plane 

Then run:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
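
With all three control-plane nodes joined, you can confirm that etcd now has three members. This is a sketch that assumes the default kubeadm certificate paths used by the etcd static pod:

$ kubectl -n kube-system exec etcd-master01 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    member list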

5.5.4 Join worker01 and worker02 to the cluster

$ kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 

Join worker01:

$ kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
>     --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Join worker02:

$ kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
>     --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

VI. Verify the Cluster

$ kubectl get nodes;
NAME            STATUS     ROLES                  AGE   VERSION
master01   NotReady   control-plane,master   48m   v1.22.2
master02   NotReady   control-plane,master   18m   v1.22.2
master03   NotReady   control-plane,master   13m   v1.22.2
worker01   NotReady   <none>                 12m   v1.22.2
worker02   NotReady   <none>                 11s   v1.22.2

6.1 Troubleshooting

If a worker node's hostname was not changed before joining and the node can no longer be added, run the following cleanup so it can be joined again afterwards:

$ rm -rf /var/lib/kubelet/*
$ cd /etc/kubernetes
$ ls
kubelet.conf  manifests  pki
$ rm -rf kubelet.conf 
$ rm -rf pki/ca.crt 
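
After this cleanup, set the intended hostname on the node (worker02 is only an example here) and run the join command from section 5.5.4 again:

$ hostnamectl set-hostname worker02
$ kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1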

6.2 Add the Network

Install a CNI network plugin:

$ kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ # alternative: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
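
Before checking the node status, wait for the flannel DaemonSet pods to become Running (the label below matches the flannel manifest referenced above):

$ kubectl -n kube-system get pods -l app=flannel -o wide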

6.3 Verify the Network Status

$ kubectl get nodes;
NAME            STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   53m     v1.22.2
master02   Ready    control-plane,master   23m     v1.22.2
master03   Ready    control-plane,master   18m     v1.22.2
worker01   Ready    <none>                 17m     v1.22.2
worker02   Ready    <none>                 5m31s   v1.22.2

6.4 Check the Pods

$ kubectl get po -o wide -A
NAMESPACE     NAME                                    READY   STATUS    RESTARTS      AGE     IP               NODE            NOMINATED NODE   READINESS GATES
kube-system   coredns-7d89d9b6b8-cqpjv                1/1     Running   0             55m     10.244.3.2       worker01   <none>           <none>
kube-system   coredns-7d89d9b6b8-swpcb                1/1     Running   0             55m     10.244.3.3       worker01   <none>           <none>
kube-system   etcd-master01                      1/1     Running   0             55m     10.200.20.116   master01   <none>           <none>
kube-system   etcd-master02                      1/1     Running   0             25m     10.200.20.117   master02   <none>           <none>
kube-system   etcd-master03                      1/1     Running   0             19m     10.200.20.118   master03   <none>           <none>
kube-system   kube-apiserver-master01            1/1     Running   0             55m     10.200.20.116   master01   <none>           <none>
kube-system   kube-apiserver-master02            1/1     Running   0             25m     10.200.20.117   master02   <none>           <none>
kube-system   kube-apiserver-master03            1/1     Running   0             20m     10.200.20.118   master03   <none>           <none>
kube-system   kube-controller-manager-master01   1/1     Running   1 (24m ago)   55m     10.200.20.116   master01   <none>           <none>
kube-system   kube-controller-manager-master02   1/1     Running   0             25m     10.200.20.117   master02   <none>           <none>
kube-system   kube-controller-manager-master03   1/1     Running   0             20m     10.200.20.118   master03   <none>           <none>
kube-system   kube-flannel-ds-5pjbf                   1/1     Running   0             4m8s    10.200.20.120   worker02   <none>           <none>
kube-system   kube-flannel-ds-bs4t8                   1/1     Running   0             4m8s    10.200.20.119   worker01   <none>           <none>
kube-system   kube-flannel-ds-jn698                   1/1     Running   0             4m8s    10.200.20.117   master02   <none>           <none>
kube-system   kube-flannel-ds-r4ktd                   1/1     Running   0             4m8s    10.200.20.118   master03   <none>           <none>
kube-system   kube-flannel-ds-tckjr                   1/1     Running   0             4m8s    10.200.20.116   master01   <none>           <none>
kube-system   kube-proxy-469lj                        1/1     Running   0             25m     10.200.20.117   master02   <none>           <none>
kube-system   kube-proxy-k47ww                        1/1     Running   0             18m     10.200.20.119   worker01   <none>           <none>
kube-system   kube-proxy-msk5s                        1/1     Running   0             20m     10.200.20.118   master03   <none>           <none>
kube-system   kube-proxy-tjqhc                        1/1     Running   0             6m55s   10.200.20.120   worker02   <none>           <none>
kube-system   kube-proxy-vch97                        1/1     Running   0             55m     10.200.20.116   master01   <none>           <none>
kube-system   kube-scheduler-master01            1/1     Running   1 (24m ago)   55m     10.200.20.116   master01   <none>           <none>
kube-system   kube-scheduler-master02            1/1     Running   0             25m     10.200.20.117   master02   <none>           <none>
kube-system   kube-scheduler-master03            1/1     Running   0             20m     10.200.20.118   master03   <none>           <none>

VII. Add New Nodes

# create a token
kubeadm token create
# get the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# join the node
kubeadm join 10.200.20.121:6443 --token mkpnzt.he3sxvnr1igi0xxm \
    --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 

### one-step approach ###
# print the full join command with a fresh token
kubeadm token create --print-join-command
# create a token that never expires
kubeadm token create --print-join-command --ttl=0
### then run the printed command ###

# list tokens to check whether a valid one already exists
kubeadm token list
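
The commands above cover adding worker nodes. To add another control-plane node later, the control-plane certificates must also be made available to it; as an alternative sketch to the manual scp distribution used in section 5.5.1, kubeadm can upload them as a secret and print a certificate key:

# re-upload the control-plane certificates; this prints a certificate key
kubeadm init phase upload-certs --upload-certs
# print a fresh join command
kubeadm token create --print-join-command
# on the new master, run the printed join command and append:
#   --control-plane --certificate-key <key printed by upload-certs>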

