I'm 结实金鱼, a blogger at 靠谱客. This post walks through installing a replicated MySQL cluster on k8s and the pitfalls I ran into along the way; I'm sharing it in the hope that it makes a useful reference.

References:
https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
https://github.com/kubernetes-retired/external-storage/tree/master/nfs
https://www.jianshu.com/p/65ed4bdf0e89
https://www.cnblogs.com/panwenbin-logs/p/12196286.html

Errors encountered

1. 3 pod has unbound immediate PersistentVolumeClaims

When installing MySQL following the official guide, you need working PersistentVolumeClaims; otherwise scheduling fails with "3 pod has unbound immediate PersistentVolumeClaims".
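
If you hit this, inspecting the claim usually reveals why it never bound (no matching StorageClass, no provisioner running, and so on). A quick triage, assuming the default namespace used throughout this post:

```
# List claims and check whether they are Pending or Bound
kubectl get pvc

# The Events section at the bottom explains why a claim stays Pending
kubectl describe pvc <claim-name>
```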

2. mysql Back-off restarting failed container

This one has many possible causes: for example, gcr.io being blocked, which shows up as ImagePullBackOff, or the node running out of memory, which surfaces as errors like process_linux.go:101: executing setns process caused "exit status 1", and so on.
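
Before guessing, it is worth asking the kubelet what actually happened. A minimal triage sequence (the pod and container names below match the StatefulSet defined later in this post; adjust them for your cluster):

```
# The Events section shows ImagePullBackOff, OOMKilled, failed probes, etc.
kubectl describe pod mysql-0

# Logs of the current and the previous (crashed) container instance
kubectl logs mysql-0 -c mysql
kubectl logs mysql-0 -c mysql --previous
```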

If the cause is that gcr.io is blocked and the pull ends in ImagePullBackOff, you can pull a mirror of the image and retag it with the following commands.

```
docker pull ist0ne/xtrabackup
docker tag ist0ne/xtrabackup:latest gcr.io/google-samples/xtrabackup:1.0
```

Note that on a cloud (multi-node) cluster, the commands above must be run on every node so that each of them has the image.
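
As an optional sanity check, you can confirm on each node that the retagged image is present before any pod schedules onto it:

```
docker images | grep xtrabackup
```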

3. chown: changing ownership of '/var/lib/mysql/': Operation not permitted

This is caused by the NFS export permissions. The fix is in the NFS installation steps below, under "Run on the master node" (the no_root_squash export option).

Installation steps

  • Part 1: Environment preparation (NFS)

My k8s cluster has three nodes, one master and two workers:

```
192.168.0.11 k8s-master
192.168.0.22 k8s-node1
192.168.0.33 k8s-node2
```

1. Run on the master node

```
yum -y install nfs-utils rpcbind

mkdir -p /home/nfs

vi /etc/exports
# Add the line below. Note the no_root_squash option, which keeps root's
# privileges on the export; without it you get the error
# chown: changing ownership of '/var/lib/mysql/': Operation not permitted.
/home/nfs *(insecure,rw,async,no_root_squash)

# Start the services and enable them at boot
systemctl start rpcbind.service
systemctl status rpcbind.service
systemctl enable rpcbind.service
systemctl start nfs.service
systemctl enable nfs.service
systemctl status nfs.service
```
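
After editing /etc/exports you can re-export and inspect the result without restarting anything; both are standard nfs-utils commands:

```
# Re-read /etc/exports and apply any changes
exportfs -ra

# Show what this server is exporting
showmount -e localhost
```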

2. Verify NFS is reachable from the worker nodes

```
yum -y install nfs-utils

showmount -e 192.168.0.11

# Mount the export at the local /mnt directory
mount -t nfs 192.168.0.11:/home/nfs /mnt
df -h
umount /mnt
```

  • Part 2: Configuring PersistentVolumeClaims

1. Create the service account and grant it the required permissions (rbac.yaml), then apply it with kubectl apply -f rbac.yaml. This file defines the roles and permissions the NFS provisioner needs.

```
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default  # set the namespace to match your environment; same below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
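
A quick sanity check that all five objects exist, assuming the default namespace:

```
kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner
kubectl get role,rolebinding leader-locking-nfs-client-provisioner
```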

2. Create the StorageClass and apply it with kubectl apply -f storage-class.yaml. The class name is referenced again by the PVC below and by the StatefulSet's volumeClaimTemplates, so it must match everywhere character for character; a one-letter mismatch is enough to leave claims stuck in Pending.

```
# storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs  # must match the PROVISIONER_NAME env var in the provisioner deployment
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
```
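
Verify that the class exists and that its PROVISIONER column reads nfs:

```
kubectl get storageclass managed-nfs-storage
```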

3. Create the provisioner and apply it with kubectl apply -f nfs-provisioner.yaml.

```
# nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default  # must match the namespace used in the RBAC file
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs  # provisioner name; must match the provisioner field in storage-class.yaml
            - name: NFS_SERVER
              value: 192.168.0.11  # NFS server IP address
            - name: NFS_PATH
              value: /home/nfs  # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.11  # NFS server IP address
            path: /home/nfs  # NFS export path
```
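
The provisioner pod should reach Running before any claims are created; if it crash-loops, its logs usually point at an unreachable NFS server or a wrong export path:

```
kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner
```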

4. Create the PersistentVolumeClaim and apply it with kubectl apply -f nfs-pvc.yaml.

```
# nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: managed-nfs-storage
```
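
If the whole chain is working, the claim binds within seconds and a backing directory appears on the NFS server:

```
# STATUS should be Bound, with a dynamically provisioned volume name
kubectl get pvc data

# The provisioner creates a ${namespace}-${pvcName}-${pvName} directory per volume
ls /home/nfs
```
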
  • Part 3: Creating the MySQL cluster

1. Create mysql-configmap.yaml and apply it with kubectl apply -f mysql-configmap.yaml. This holds the MySQL configuration and sets the read/write policy for the master and the replicas: the master enables binary logging for replication, while the replicas are made read-only.

```
# mysql-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: mysql
  namespace: default
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
```
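
You can confirm that both config files made it into the ConfigMap before wiring it into the StatefulSet:

```
kubectl describe configmap mysql
```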

2. Create mysql-services.yaml and apply it with kubectl apply -f mysql-services.yaml. The headless service mysql (clusterIP: None) gives every pod a stable DNS name such as mysql-0.mysql, which replication depends on; mysql-read is an ordinary service that load-balances read connections across all pods.

```
# mysql-services.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  selector:
    app: mysql
```
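
After applying, a quick look at both services:

```
# mysql should show CLUSTER-IP None (headless); mysql-read gets a normal cluster IP
kubectl get svc -l app=mysql
```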

3. Create mysql-statefulset.yaml and apply it with kubectl apply -f mysql-statefulset.yaml.

```
# mysql-statefulset.yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
        - name: conf
          emptyDir: {}
        - name: config-map
          configMap:
            name: mysql
            defaultMode: 420
      initContainers:
        - name: init-mysql
          image: 'mysql:5.7'
          command:
            - bash
            - '-c'
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d/
              fi
          resources: {}
          volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          imagePullPolicy: IfNotPresent
        - name: clone-mysql
          image: 'gcr.io/google-samples/xtrabackup:1.0'
          command:
            - bash
            - '-c'
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on master (ordinal index 0).
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql
          resources: {}
          volumeMounts:
            - name: nfs-pvc
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          imagePullPolicy: IfNotPresent
      containers:
        - name: mysql
          image: 'mysql:5.7'
          ports:
            - name: mysql
              containerPort: 3306
              protocol: TCP
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: '1'
          resources:
            requests:
              cpu: 50m
              memory: 50Mi
          volumeMounts:
            - name: nfs-pvc
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          livenessProbe:
            exec:
              command:
                - mysqladmin
                - ping
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
                - mysql
                - '-h'
                - 127.0.0.1
                - '-e'
                - SELECT 1
            initialDelaySeconds: 5
            timeoutSeconds: 1
            periodSeconds: 2
            successThreshold: 1
            failureThreshold: 3
          imagePullPolicy: IfNotPresent
        - name: xtrabackup
          image: 'gcr.io/google-samples/xtrabackup:1.0'
          command:
            - bash
            - '-c'
            - |
              set -ex
              cd /var/lib/mysql

              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
                cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_slave_info xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from master. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm -f xtrabackup_binlog_info xtrabackup_slave_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}', MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi

              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

                echo "Initializing replication from clone position"
                mysql -h 127.0.0.1 -e "$(<change_master_to.sql.in), MASTER_HOST='mysql-0.mysql', MASTER_USER='root', MASTER_PASSWORD='', MASTER_CONNECT_RETRY=10; START SLAVE;" || exit 1
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
              fi

              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
          ports:
            - name: xtrabackup
              containerPort: 3307
              protocol: TCP
          resources:
            requests:
              cpu: 50m
              memory: 50Mi
          volumeMounts:
            - name: nfs-pvc
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
  volumeClaimTemplates:
    - metadata:
        name: nfs-pvc
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Mi
        storageClassName: managed-nfs-storage
        volumeMode: Filesystem
```
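
Once all three pods are Running and ready, the smoke test from the official tutorial (linked in the references) confirms replication end to end: write through the master's stable DNS name, then read back through mysql-read:

```
# mysql-0, mysql-1, mysql-2 should come up one at a time
kubectl get pods -l app=mysql --watch

# Write to the master (ordinal 0); the root password is empty in this demo
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE test; CREATE TABLE test.messages (message VARCHAR(250)); INSERT INTO test.messages VALUES ('hello');"

# Read back through mysql-read, which balances across all pods
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM test.messages"
```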

Final words

That is everything 结实金鱼 has collected and organized about installing a replicated MySQL cluster on k8s and the pitfalls along the way. For more on the topic, search 靠谱客 for related articles.
