Kubernetes Storage


1. Local Storage

1.1. emptyDir

With emptyDir, a random directory is generated on the node where the pod runs, and the pod's containers mount that directory. When the pod is deleted, the directory is deleted along with it. This is suited to sharing temporary data between the containers of a single pod (a two-container sketch is shown at the end of this subsection).

YAML file

apiVersion: v1
kind: Pod
metadata:
  name: emptydirpod
  labels:
    name: enptydirpod10
spec:
  volumes: 
    - name: v1
      emptyDir: {}
  containers:
    - name: enptydirpod10
      image: nginx
      volumeMounts: 
        - name: v1
          mountPath: /abc
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"

kubectl get pod -o wide 
NAME          READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
emptydirpod   1/1     Running   0          24h   10.244.69.226   knode2   <none>           <none>
# On knode2, check which temporary directory the volume is actually mounted on
crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
2f18c27e13ffa       c20060033e06f       7 hours ago         Running             nfspod                      0                   9db51a9ea7ccb       nfspod
b0a0211723a6a       c20060033e06f       25 hours ago        Running             enptydirpod10               0                   6ef3e4bce7ea5       emptydirpod
ad94cf343599b       7b60c7012b1c9       35 hours ago        Running             calico-typha                0                   2f01766ac16fe       calico-typha-85568b8955-js254
1153efb242afb       c14671fdda128       5 days ago          Running             csi-node-driver-registrar   2                   139ccac6c83dc       csi-node-driver-l2hmk
843067cc582ac       f37eacbb9a295       5 days ago          Running             calico-csi                  2                   139ccac6c83dc       csi-node-driver-l2hmk
e25fd796f2735       08616d26b8e74       5 days ago          Running             calico-node                 2                   e039f3f2d6053       calico-node-qwb8n
b056bbe270c17       556768f31eb1d       5 days ago          Running             kube-proxy                  2                   255cf55426cc0       kube-proxy-hp5ts
crictl inspect b0a0211723a6a 
    "mounts": [
      {
        "containerPath": "/abc",
        "hostPath": "/var/lib/kubelet/pods/b08bb846-9199-41bf-8423-4dbd0db88ee0/volumes/kubernetes.io~empty-dir/v1",
        "propagation": "PROPAGATION_PRIVATE",
        "readonly": false,
        "selinuxRelabel": false

1.2. hostPath

The volume is mounted the same way as with emptyDir, except the directory is not random: we define a directory of our own and mount it into the container.

YAML file

apiVersion: v1
kind: Pod
metadata:
  name: hostpathpod
  labels:
    name: hostpathpod
spec:
  volumes: 
    - name: v1
      hostPath:
        path: /data
  containers:
    - name: hostpathpod
      image: nginx
      volumeMounts: 
        - name: v1
          mountPath: /abc
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"

A directory defined this way is kept permanently. If the pod above runs on knode2, the directory is a permanent directory on knode2; but if the pod is later scheduled onto knode1, the data will not be found there and the saved records are gone. There is no way to synchronize the directory between nodes.
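
If hostPath data must remain reachable, a common workaround is to pin the pod to the node that holds the directory. A minimal sketch, assuming the data lives under /data on knode2 (nodeName is just one way to pin; nodeSelector or affinity also work):

apiVersion: v1
kind: Pod
metadata:
  name: hostpathpod-pinned
spec:
  nodeName: knode2          # bypass the scheduler and always run on knode2
  volumes:
    - name: v1
      hostPath:
        path: /data
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: v1
          mountPath: /abc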

2. Network Storage

Network storage supports many types: NFS, Ceph, iSCSI, and others can all be used as backend storage.

2.1. Mounting NFS Network Storage

Configuration on the NFS server

yum install -y yum-utils vim bash-completion net-tools wget nfs-utils
mkdir /nfsdate
systemctl start nfs-server.service
systemctl enable nfs-server.service 
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.

systemctl stop firewalld.service 
systemctl disable firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
setenforce 0
vim /etc/selinux/config   # set SELINUX=disabled so the change persists across reboots

vim /etc/exports
/nfsdate *(rw,async,no_root_squash)
exportfs -arv  # the config file takes effect without restarting the NFS service
# no_root_squash: if the user accessing the shared directory on the NFS host is root, they keep root privileges on the share. This option is highly insecure and not recommended; writes are performed as root.
# exportfs options
# -a export (or unexport) all directories
# -r re-export all directories
# -u unexport a directory
# -v verbose output, showing the shared directories
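
Before involving Kubernetes, it is worth verifying the export from any client; showmount ships with nfs-utils:

showmount -e 172.16.100.183   # should list /nfsdate in the export list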

Configuration on the Kubernetes cluster's knode (worker) nodes

yum install -y nfs-utils   
#  Although it is the pod that will use NFS, the actual NFS connection is made by the node hosting the pod, so the node itself (as the client) must also have the NFS client installed.
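
A quick manual mount is an easy way to confirm the node itself can reach the share before any pod is created (/mnt here is just an example mount point):

mount -t nfs 172.16.100.183:/nfsdate /mnt   # temporary test mount
touch /mnt/testfile                         # confirm write access
umount /mnt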

YAML file

apiVersion: v1
kind: Pod
metadata:
  name: nfspod
  labels:
    name: nfspod
spec:
  volumes:
    - name: nfsv1
      nfs:
        server: 172.16.100.183
        path: /nfsdate
  containers:
  - name: nfspod
    image: nginx
    volumeMounts:
      - name: nfsv1
        mountPath: /nfsvolumes
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m

Configuration on the Kubernetes cluster's kmaster node

kubectl apply -f nfspod.yaml 
kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
nfspod        1/1     Running   0          7h    10.244.69.227   knode2   <none>           <none>
kubectl exec -ti nfspod -- bash
root@nfspod:/# df -h
Filesystem               Size  Used Avail Use% Mounted on
overlay                   64G  4.8G   60G   8% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
172.16.100.183:/nfsdate   64G  2.3G   62G   4% /nfsvolumes
/dev/mapper/cs-root       64G  4.8G   60G   8% /etc/hosts
shm                       64M     0   64M   0% /dev/shm
tmpfs                    128M   12K  128M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    1.9G     0  1.9G   0% /proc/acpi
tmpfs                    1.9G     0  1.9G   0% /proc/scsi
tmpfs                    1.9G     0  1.9G   0% /sys/firmware

Check on knode2 whether the mount exists on the physical node

df -h | grep nfs
172.16.100.183:/nfsdate   64G  2.3G   62G   4% /var/lib/kubelet/pods/e227f191-21d8-455f-a6e9-d6ce0024f0f3/volumes/kubernetes.io~nfs/nfsv1
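
To confirm the volume really is shared end to end, write a file from inside the pod and look for it on the NFS server (the file name is illustrative):

kubectl exec nfspod -- sh -c 'echo hello-from-pod > /nfsvolumes/hello.txt'
# then on the NFS server (172.16.100.183):
cat /nfsdate/hello.txt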

3. Persistent Storage

Persistent storage involves two resource types: PersistentVolume (PV) and PersistentVolumeClaim (PVC).

The master has many namespaces, say N1 and N2, and different users connect to different namespaces. Users manage pods, while a storage specialist manages storage; the two responsibilities are separated.
Underneath is a storage server sharing multiple directories, managed by a dedicated administrator. The administrator creates PersistentVolumes (PVs) in the cluster; a PV is visible cluster-wide, and each PV is associated with a particular directory on the storage server.
All the user has to do is create their own PVC. PVCs are isolated per namespace, whereas PVs are global. The PVC is then bound to a PV.

YAML file to create the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: test
  nfs:
    path: /nfsdate
    server: 172.16.100.183

kubectl apply -f  pv.yml
kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   5Gi        RWO            Retain           Available           test                    12s


YAML file to create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc01
spec:
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: test

kubectl apply -f  pvc.yml
kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc01   Bound    pv01     5Gi        RWO            test           7m11s

Note: the storageClassName field must either be omitted from both the PV and PVC YAML files (for automatic binding) or be set in both and match exactly; otherwise the PV and PVC will fail to bind and the PVC will stay in the Pending state.
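
If a PVC does stay Pending, describing it is the quickest diagnosis; the Events section states why no PV matched:

kubectl describe pvc pvc01   # check the Events section for the binding failure reason
kubectl get pv               # confirm a PV with matching storageClassName, capacity, and access modes exists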

Pod YAML file that mounts the PVC

apiVersion: v1
kind: Pod
metadata:
  name: pvpod
  labels:
    name: pvpod
spec:
  containers:
  - name: pvpod
    image: nginx
    volumeMounts:
      - mountPath: "/var/www/html"
        name: mypv01
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
  volumes:
    - name: mypv01
      persistentVolumeClaim:
        claimName: pvc01

kubectl apply -f pvpod.yml 
kubectl get pod -o wide 
NAME          READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
pvpod         1/1     Running   0          6s      10.244.195.166   knode1   <none>           <none>
kubectl exec -it pvpod -- bash  # enter the pod to look around
df -h
Filesystem               Size  Used Avail Use% Mounted on
overlay                   64G  4.8G   60G   8% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/cs-root       64G  4.8G   60G   8% /etc/hosts
shm                       64M     0   64M   0% /dev/shm
172.16.100.183:/nfsdate   64G  2.3G   62G   4% /var/www/html
tmpfs                    128M   12K  128M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    1.9G     0  1.9G   0% /proc/acpi
tmpfs                    1.9G     0  1.9G   0% /proc/scsi
tmpfs                    1.9G     0  1.9G   0% /sys/firmware

Reclaim Policy

Reclaim policy: the default is Retain (NFS supports Recycle; in theory, with Recycle, deleting the PVC also deletes the NFS data, and the PV returns to the Available state).

The two commonly used reclaimPolicy values are Delete and Retain.
Delete: when the PVC is deleted, the PV is deleted with it, together with the actual storage the PV points to.
Retain: when the PVC is deleted, the PV is not deleted but becomes Released, waiting for the administrator to clean it up manually. Retain does not reclaim the data; after the PVC is deleted, the PV is unusable and stays in the Released state indefinitely.
Pros and cons of Delete:
Pro: full lifecycle management of the data volume; deleting the PVC automatically deletes the backing cloud disk, which effectively avoids piles of idle, undeleted disks.
Con: deleting the PVC deletes the backing disk with it, so accidentally deleting a PVC loses the backend data.
Pros and cons of Retain:
Pro: the backing disk has to be cleaned up manually, so accidental deletion is much less likely.
Con: no full lifecycle management; after PVCs and PVs are deleted, backing disks are often left idle and uncleaned, which over time wastes a lot of disk.
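
The reclaim policy of an existing PV can also be changed in place with a patch, for example switching pv01 to Retain before deleting its PVC:

kubectl patch pv pv01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv pv01   # the RECLAIM POLICY column should now show Retain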

4. Dynamic Volume Provisioning with NFS

Workflow for configuring dynamic volume provisioning with NFS (no PV is created manually anywhere in this process):

  1. Configure the NFS server
  2. Get the NFS Subdir External Provisioner files from: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy
  3. If the cluster has RBAC enabled, set up authorization.
  4. Configure the NFS subdir external provisioner
  5. Create the Storage Class
  6. Create a PVC and a Pod to test
  • Configure the NFS server

mkdir /vdisk
vim /etc/exports
exportfs -arv
exporting *:/vdisk
cat /etc/exports
/vdisk *(rw,async,no_root_squash)
#  Get the NFS Subdir External Provisioner files
yum install -y git
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git

  • If the cluster has RBAC enabled, set up authorization.

# Upload to the master and extract
ls
class.yaml  deployment.yaml  kustomization.yaml  objects  rbac.yaml  test-claim.yaml  test-pod.yaml
vim rbac.yaml 
sed -i 's/namespace: default/namespace: vol/g' rbac.yaml  # switch the namespace
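kubectl create namespace vol   # assumed prerequisite: the vol namespace must exist before rbac.yaml is applied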
kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
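
A quick check that the RBAC objects landed in the intended namespace (a verification sketch):

kubectl -n vol get serviceaccount nfs-client-provisioner
kubectl -n vol get role,rolebinding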

  • Configure the NFS subdir external provisioner

cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: vol
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/cloudcs/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.100.143
            - name: NFS_PATH
              value: /vdisk
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.143
            path: /vdisk
# Replace the image source, and the NFS server IP and path
# There is one more image, also downloaded via a Huawei Cloud Hong Kong host and pushed straight to Aliyun
# image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

# Pull this image on every node in advance
crictl pull registry.cn-hangzhou.aliyuncs.com/cloudcs/nfs-subdir-external-provisioner:v4.0.2

# Note: change the image path in deployment.yaml to registry.cn-hangzhou.aliyuncs.com/cloudcs/nfs-subdir-external-provisioner:v4.0.2

kubectl apply -f deployment.yaml 
deployment.apps/nfs-client-provisioner created
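
Before creating the StorageClass, confirm the provisioner pod is actually running, and check its logs if it is not (a verification sketch):

kubectl -n vol get pods -l app=nfs-client-provisioner
kubectl -n vol logs deploy/nfs-client-provisioner   # watch for NFS mount or permission errors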

  • Create the Storage Class

class YAML file

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

kubectl apply -f class.yaml
kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  30s

YAML file to create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvctest
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi

kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvctest   Bound    pvc-58dfcbc5-acc9-4c52-8b8b-8d917ae37403   5Gi        RWO            nfs-client     4s
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pvc-58dfcbc5-acc9-4c52-8b8b-8d917ae37403   5Gi        RWO            Delete           Bound    volumes/pvctest   nfs-client              8s

Here only the PVC needs to be created; the PV is created by the StorageClass, so storageClassName must match metadata.name in the class YAML file.
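
To round-trip the whole setup, mount the dynamically provisioned PVC in a pod, along the lines of the repo's test-pod.yaml (the pod name, image, and mount path below are illustrative). After the pod starts, a subdirectory created by the provisioner for this claim, containing the SUCCESS file, should appear under /vdisk on the NFS server:

apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
    - name: testpod
      image: busybox
      command: ["sh", "-c", "echo ok > /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvctest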
