After changing the hostnames of every master and node machine in my k8s cluster, I found that `kubectl get nodes` still showed the old names:
[root@k8s-master1 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-m1   Ready    master   13h   v1.15.1
k8s-node-n1     Ready    <none>   13h   v1.15.1
k8s-node-n2     Ready    <none>   13h   v1.15.1
The OS hostnames themselves have already been changed as follows (old name -> new name):
k8s-master-m1->k8s-master1
k8s-node-n1->k8s-node1
k8s-node-n2->k8s-node2
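For reference, a typical way to apply such a rename at the OS level is hostnamectl; this is my assumption of how it was done, since the original post does not show these commands:

# run on each machine, giving it its new name
hostnamectl set-hostname k8s-master1   # on the old k8s-master-m1
hostnamectl set-hostname k8s-node1     # on the old k8s-node-n1
hostnamectl set-hostname k8s-node2     # on the old k8s-node-n2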
So how do I get the node names that Kubernetes reports to follow the new hostnames?
The master had already joined the cluster before I remembered to change its hostname, and renaming a node inside a running k8s cluster is very troublesome: a Node object's name is fixed when the kubelet registers, so changing the OS hostname afterwards does not rename it. The simpler approach is to remove the nodes (master included) from the cluster, rename them, and rejoin.
So next I tried deleting the nodes first:
[root@k8s-master1 ~]# kubectl delete node k8s-node-n1
node "k8s-node-n1" deleted
[root@k8s-master1 ~]# kubectl delete node k8s-node-n2
node "k8s-node-n2" deleted
[root@k8s-master1 ~]# kubectl delete node k8s-master-m1
node "k8s-master-m1" deleted
[root@k8s-master1 ~]# kubectl get nodes
No resources found.
All nodes deleted.
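As an aside, in a cluster with running workloads you would normally drain each node before deleting its Node object; a minimal sketch (not something I did here, since the whole cluster was being torn down anyway):

# evict workloads first, then remove the node object (kubectl v1.15 flag names)
kubectl drain k8s-node-n1 --ignore-daemonsets --delete-local-data
kubectl delete node k8s-node-n1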
Check the CSRs:
[root@k8s-master1 ~]# kubectl get csr
The connection to the server 192.168.32.29:6443 was refused - did you specify the right host or port?
Now even kubectl can no longer reach the apiserver; the connection to 192.168.32.29:6443 is refused.
Run kubeadm reset on the master to wipe all of the cluster configuration:
[root@k8s-master1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "k8s-master1" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0524 13:31:14.677131  127888 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
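The etcd-member warning only really matters for multi-master clusters with stacked etcd; on this single-master setup the etcd data directory is wiped by reset anyway. For completeness, a rough sketch of the manual removal it asks for, assuming the kubeadm default certificate paths and run from a surviving control-plane node (not something I needed here):

# list etcd members, then remove the stale one by its ID (etcdctl v3 API)
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member remove <MEMBER_ID>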
It also warns that kubeconfig files are not cleaned up and have to be removed manually.
So remove the kubeconfig by hand:
rm -rf $HOME/.kube/config
Running kubeadm reset again still prints the following:
[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 13:41:55.276661  128746 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Flushing iptables and then running kubeadm reset again still prints the same notices:
[root@k8s-master1 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 13:45:44.825195  128954 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Following the hint, I cleared the system's IPVS tables first and then reset the cluster configuration again, but it still ends the same way:
[root@k8s-master1 ~]# ipvsadm --clear
[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 13:50:29.415380  129229 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
In fact those trailing lines are just the standard notice kubeadm reset prints on every run, not an error, so there was nothing more for reset to fix. Time to bring out the big gun: re-initialize the control plane from scratch!
First, edit kubeadm-config.yaml and change nodeRegistration.name to the master's new hostname:
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
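If the original kubeadm-config.yaml is no longer around, one way to get a starting point (an assumption on my part, I still had the file) is to dump kubeadm's defaults and then edit nodeRegistration.name, the advertise address, the pod subnet for flannel, and so on:

# kubeadm v1.15: print a default init configuration that can be edited
kubeadm config print init-defaults > kubeadm-config.yaml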
Then run the following command to re-initialize the control plane and save the output (note: on newer kubeadm releases the certificate-upload flag is spelled --upload-certs):
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Then set up kubectl access again and check: the master node is back, now with its new hostname:
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   master   53s   v1.15.1
Redeploy the flannel network plugin (this only needs to be run on the master); once flannel is up, the master becomes Ready:
[root@k8s-master1 ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   8m15s   v1.15.1
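The kube-flannel.yml used above is the stock flannel manifest, which I already had on disk. If you need to fetch it, the usual source at the time was the coreos/flannel repository; the URL below is recalled from memory and the project has since moved to flannel-io, so verify it before relying on it:

# download the flannel DaemonSet manifest (2019-era URL, check before use)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml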
With flannel deployed, check the pods; everything is running fine:
[root@k8s-master1 ~]# kubectl get pod -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-jbhww              1/1     Running   0          11m
coredns-5c98db65d4-jdqtn              1/1     Running   0          11m
etcd-k8s-master1                      1/1     Running   0          10m
kube-apiserver-k8s-master1            1/1     Running   0          10m
kube-controller-manager-k8s-master1   1/1     Running   0          10m
kube-flannel-ds-amd64-j4bp2           1/1     Running   0          4m4s
kube-proxy-svb9k                      1/1     Running   0          11m
kube-scheduler-k8s-master1            1/1     Running   0          10m
Now have the worker nodes join the new master.
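The token and CA cert hash come from the kubeadm-init.log saved earlier; if they have been lost, a common way to regenerate a ready-to-paste join command on the master is:

# prints a full "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command

The first attempt on k8s-node2: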
[root@k8s-node2 ~]# kubeadm join 192.168.32.29:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:799a1d11efdd0c092b8e3226e8d1c58f0ecaf3830e7cf587a13b0c4251fa7343
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.9. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
It fails: this machine had already joined the old cluster, which left config files behind. Delete the old config files and kill the process occupying port 10250:
[root@k8s-node2 ~]# netstat -lnp | grep 10250
tcp6       0      0 :::10250       :::*       LISTEN      8674/kubelet
[root@k8s-node2 ~]# kill -9 8674
[root@k8s-node2 ~]# rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/pki/ca.crt
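A gentler alternative to kill -9 (an assumption, not what I actually ran) is to stop the kubelet through systemd, which releases port 10250 cleanly:

# kubeadm-installed kubelets run as a systemd service
systemctl stop kubelet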
Run kubeadm reset to clear all of this node's configuration as well:
[root@k8s-node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 15:40:46.463906   97562 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
It ends with the same notice as on the master, so remove $HOME/.kube/config here too:
rm -rf $HOME/.kube/config
Then join the master again:
[root@k8s-node2 ~]# kubeadm join 192.168.32.29:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:799a1d11efdd0c092b8e3226e8d1c58f0ecaf3830e7cf587a13b0c4251fa7343
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.9. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
At this point the node has rejoined the master; the other nodes are handled the same way.
Once all nodes are done, go back to the master and check the cluster state:
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   114m    v1.15.1
k8s-node1     Ready    <none>   10m     v1.15.1
k8s-node2     Ready    <none>   6m54s   v1.15.1
And check the CSRs:
[root@k8s-master1 ~]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-l7dnl   39m   system:bootstrap:abcdef   Approved,Issued
csr-m5bwx   42m   system:bootstrap:abcdef   Approved,Issued
And with that, the adventure of renaming every hostname in the cluster is complete.