When running Kubernetes/Docker, persisting data is one of the most important concerns. Ceph is an open-source distributed storage system that provides object storage, block storage (RBD), and a file system (CephFS), and it is highly reliable and easy to scale. Kubernetes + Ceph has already been put into production in many enterprises and is currently one of the most mature and reliable options. This post walks through the full lifecycle of dynamically managing storage with a StorageClass.
1. Create the external provisioner (rbd-provisioner) that Kubernetes needs to talk to Ceph
Note: if your Kubernetes cluster was initialized with kubeadm, kube-controller-manager does not ship with the rbd command, so you need to deploy an external provisioner, quay.io/external_storage/rbd-provisioner:latest. It is added as follows:
cat rbd-provisioner.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: quay.io/external_storage/rbd-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
kubectl apply -f rbd-provisioner.yaml
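Before moving on, it is worth checking that the provisioner pod is actually up. A minimal check, assuming the manifest above was applied into the default namespace:

kubectl get pods -n default | grep rbd-provisioner
kubectl logs deployment/rbd-provisioner -n default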
2. Create the ceph-secret Secret object; the Kubernetes volume plugin uses this secret to access the Ceph cluster
(1) On the Ceph cluster's admin node, get the client.admin keyring value and base64-encode it:
ceph auth get-key client.admin | base64
The output looks like this:
QVFBczlGOWRCVTkrSXhBQThLa1k4VERQQjhVT29wd0FnZkNDQmc9PQ==
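The value above is only an example from this environment. To sanity-check your own encoded key, decoding it should return exactly what ceph auth get-key printed:

echo 'QVFBczlGOWRCVTkrSXhBQThLa1k4VERQQjhVT29wd0FnZkNDQmc9PQ==' | base64 -d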
(2) Create the secret
cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "ceph.com/rbd"
data:
  key: QVFBczlGOWRCVTkrSXhBQThLa1k4VERQQjhVT29wd0FnZkNDQmc9PQ==
kubectl apply -f ceph-secret.yaml
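To confirm the secret landed where the provisioner and the StorageClass expect it (the default namespace here, since no namespace is specified in the manifest), you can run:

kubectl get secret ceph-secret -o yaml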
3. Create the StorageClass
(1) Create a pool on the Ceph cluster's admin node
ceph osd pool create k8stest 256
rbd create rbda -s 1024 -p k8stest
rbd feature disable k8stest/rbda object-map fast-diff deep-flatten
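To confirm the pool and the test image exist, check from the same Ceph admin node. On Ceph Luminous and later you may also need to tag the pool with the rbd application; the last command below is only needed in that case:

ceph osd pool ls
rbd ls -p k8stest
rbd info k8stest/rbda
ceph osd pool application enable k8stest rbd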
(2) Create the StorageClass on the Kubernetes master node
cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.199.201:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: k8stest
  userId: admin
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
Note: StorageClass parameter descriptions

monitors: Ceph monitor address(es), comma-separated (here 192.168.199.201:6789)
adminId: Ceph client ID that is able to create images in the pool (admin here)
adminSecretName: name of the secret holding adminId's key (the ceph-secret created above)
pool: the Ceph RBD pool in which images are created (k8stest)
userId: Ceph client ID used to map the RBD image (admin here)
userSecretName: name of the secret holding userId's key; it must exist in the same namespace as the PVCs that use this StorageClass
fsType: filesystem the volume is formatted with (xfs)
imageFormat: RBD image format, "1" or "2"
imageFeatures: extra RBD image features, only valid when imageFormat is "2"; only "layering" is supported
kubectl apply -f storageclass.yaml
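Verify that the StorageClass was registered and points at the ceph.com/rbd provisioner:

kubectl get storageclass k8s-rbd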
4. StatefulSet + Ceph best-practice test
cat stat.yaml
apiVersion: v1
kind: Service
metadata:
  name: storage
  labels:
    app: storage
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: storage
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: storage
spec:
  serviceName: "storage"
  replicas: 2
  template:
    metadata:
      labels:
        app: storage
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        volumeMode: Filesystem
        storageClassName: k8s-rbd
        resources:
          requests:
            storage: 1Gi
kubectl apply -f stat.yaml
After the command above runs, PVCs and PVs are generated automatically and bound to each other.
The PVs here are provisioned dynamically through the k8s-rbd StorageClass, not created by hand.
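A quick way to verify the whole chain end to end (the pod names storage-0 and storage-1 come from the StatefulSet above):

kubectl get pvc
kubectl get pv
kubectl get pods -l app=storage
kubectl exec storage-0 -- df -h /usr/share/nginx/html

Each PVC should show STATUS Bound, and the df output inside the pod should show a /dev/rbdX device mounted at /usr/share/nginx/html.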