Pod Management

Essay · posted 2 months ago by 凤羽轻扬

In a Kubernetes cluster, the Pod is the smallest unit that Kubernetes schedules; a Pod runs one or more containers (via a container runtime such as containerd).

1. Creating a Pod

kubectl create ns pod323  # create a namespace
kubens pod323  # switch to the namespace
kubectl run pod1 --image nginx  # create a pod from the nginx image
kubectl get pod  # list the pods that have been created
kubectl get pod -o wide  # list pods with extra details (IP, node, etc.)
kubectl describe pod pod1  # show detailed information and events for the specified pod

2. Image Pull Policy

Always: always contacts the registry to check for the latest image, regardless of whether a local copy exists (there is always a network check).
Never: only uses local images and never pulls.
IfNotPresent: pulls from the registry only when the image is not present locally.
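In a manifest, the policy is set per container via the imagePullPolicy field. A minimal sketch (the pod and container names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: policy-demo        # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    imagePullPolicy: IfNotPresent   # pull only when the image is missing locally
```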

kubectl run pod1 --image nginx --image-pull-policy IfNotPresent --dry-run=client -o yaml -- sleep 3600 > pod2.yaml 
# --image-pull-policy sets the image pull policy to IfNotPresent
# --dry-run=client simulates the request on the client side; nothing is actually created
# -o sets the output format (yaml, json, etc.); typically redirected to a file to generate a pod manifest
kubectl exec -ti pod1 -- bash  # open a shell inside the pod; when a pod has multiple containers, pick one with -c

3. Pod Lifecycle and Restart Policy

A container runs a process defined by its image. If the command is defined incorrectly, the pod will keep trying to restart the container; this behavior is controlled by the restartPolicy field. There are three restart policies:

Always: always restart the container, whether it exited normally or with an error.
Never: never restart, regardless of how the container exited.
OnFailure: restart only when the container exits with an error; a normal exit is not restarted.

In the manifest this is written as restartPolicy: Always.
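For example, a one-shot pod that should only be restarted when its command fails could look like this sketch (the pod name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: job-like-pod       # illustrative name
spec:
  restartPolicy: OnFailure # restart only on a non-zero exit code
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "echo done"]   # exits 0, so the container is not restarted
```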

4. Init Containers

Kubernetes 1.3 introduced the init container feature. Init containers run before the application containers (app containers) start and lay the groundwork for them; only after all init containers have run to completion do the application containers start.

A YAML file for a pod with an init container:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: initpod1
  name: initpod1
spec:
  initContainers:
  - name: initpod
    image: alpine
    imagePullPolicy: IfNotPresent
    command: ["/sbin/sysctl","-w","vm.swappiness=35"]
    securityContext:
      privileged: true
  containers:
  - name: pod1
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
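Init containers run one at a time, in the order they are listed; each must exit successfully before the next one starts. A sketch with two init containers (the names and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: initdemo
spec:
  initContainers:
  - name: step1              # runs first
    image: busybox
    command: ["sh", "-c", "echo preparing > /dev/null"]
  - name: step2              # runs second, only after step1 succeeds
    image: busybox
    command: ["sh", "-c", "echo ready > /dev/null"]
  containers:
  - name: app                # starts only after all init containers complete
    image: nginx
    imagePullPolicy: IfNotPresent
```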

5. Static Pods

A static pod is a pod defined by a YAML file on the node: if the file exists, the pod exists; remove the file and the pod goes away. Do not experiment with static pods on the master, because the cluster's core components run there as static pods; do this on knode1 or knode2 instead.

Creating a static pod

  • Create a directory

mkdir /etc/kubernetes/test

  • Edit the kubelet drop-in configuration

cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf 
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --pod-manifest-path=/etc/kubernetes/test"

Add --pod-manifest-path=/etc/kubernetes/test to the environment arguments so it points at the directory created above, then restart the kubelet for the change to take effect.

  • Write a YAML file in the test directory

kubectl run pod1 --image nginx --image-pull-policy IfNotPresent --dry-run=client -o yaml > pod1.yaml

cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

  • Query the static pod

kubectl get pod
NAME          READY   STATUS              RESTARTS   AGE
pod1-knode1   0/1     ContainerCreating   0          7s
# the static pod is now visible from the master node; its name is suffixed with the node name

6. Pod Labels

Labels are key-value pairs; among other things, they can be used to control which node a pod runs on.

Label commands

kubectl label nodes knode1 aaa=knode1  # add a label to a node
kubectl get nodes knode1 --show-labels  # show the node's labels
NAME     STATUS   ROLES    AGE     VERSION   LABELS
knode1   Ready    <none>   5d21h   v1.26.0   aaa=knode1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=knode1,kubernetes.io/os=linux
kubectl get pods/pod1 --show-labels  # show a pod's labels
kubectl label pods/pod1 aaa-  # remove a label from a pod
kubectl label nodes knode1 aaa-  # remove a label from a node

Using a label to pin a pod to a specific node

kubectl label nodes knode2 disktype=ssdnvme
kubectl get nodes knode2 --show-labels
NAME     STATUS   ROLES   AGE     VERSION   LABELS
knode2   Ready    node2   5d21h   v1.26.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssdnvme,kubernetes.io/arch=amd64,kubernetes.io/hostname=knode2,kubernetes.io/os=linux,node-role.kubernetes.io/node2=
vim pod2.yaml
kubectl apply -f pod2.yaml 
kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS      AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod1          1/1     Running   0             30m   10.244.69.197    knode2   <none>           <none>
pod1-knode1   1/1     Running   1 (38m ago)   38m   10.244.195.134   knode1   <none>           <none>
pod2          1/1     Running   0             5s    10.244.69.198    knode2   <none>           <none>

The pod2.yaml file

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  nodeSelector:
    disktype: ssdnvme
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

A manually specified nodeSelector must either match an existing node label or be omitted entirely; if it matches nothing, the pod stays stuck in Pending.

7. Roles

Commands

kubectl get nodes 
NAME      STATUS   ROLES           AGE     VERSION
kmaster   Ready    control-plane   5d21h   v1.26.0
knode1    Ready    <none>          5d21h   v1.26.0
knode2    Ready    <none>          5d21h   v1.26.0
kubectl label nodes knode1 node-role.kubernetes.io/node1=  # add a role
node/knode1 labeled
kubectl get nodes 
NAME      STATUS   ROLES           AGE     VERSION
kmaster   Ready    control-plane   5d21h   v1.26.0
knode1    Ready    node1           5d21h   v1.26.0
knode2    Ready    <none>          5d21h   v1.26.0
kubectl label nodes knode1 node-role.kubernetes.io/node1-  # remove the role
node/knode1 unlabeled
kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
kmaster   Ready    control-plane   3d21h   v1.26.0
knode1    Ready    <none>          3d21h   v1.26.0
knode2    Ready    <none>          3d21h   v1.26.0

8. cordon

cordon marks a node as unschedulable (think of it as a warning line): once a node is cordoned, no new pods will be scheduled onto it; pods already running there are unaffected.

Commands

kubectl cordon knode1  # cordon the node
kubectl get nodes 
NAME      STATUS                     ROLES           AGE     VERSION
kmaster   Ready                      control-plane   5d21h   v1.26.0
knode1    Ready,SchedulingDisabled   node1           5d21h   v1.26.0
knode2    Ready                      node2           5d21h   v1.26.0
kubectl uncordon knode1  # uncordon the node

kubectl apply -f pod1.yml 
pod/pod1 created
kubectl apply -f pod2.yml 
pod/pod2 created
 kubectl get pod -o wide 
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
initpod1    1/1     Running   0          34h   10.244.69.211    knode2   <none>           <none>
initpod2    1/1     Running   0          23h   10.244.195.152   knode1   <none>           <none>
lablepod1   1/1     Running   0          21h   10.244.69.212    knode2   <none>           <none>
lablepod2   1/1     Running   0          21h   10.244.69.213    knode2   <none>           <none>
pod1        1/1     Running   0          11s   10.244.69.214    knode2   <none>           <none>
pod2        2/2     Running   0          7s    10.244.69.215    knode2   <none>           <none>
kubectl delete -f pod1.yml 

9. drain

drain goes one step further than cordon: besides cordoning the node, it also evicts the pods already running on it (the intent is to remove the pods from that node so their controllers recreate them on other nodes).

Commands

kubectl apply -f deployment1.yaml 
deployment.apps/web1 created
kubectl  get pod -o wide 
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
web1-6cc467757-4b9hz   1/1     Running   0          29s   10.244.195.160   knode1   <none>           <none>
web1-6cc467757-587jl   1/1     Running   0          29s   10.244.69.220    knode2   <none>           <none>
web1-6cc467757-l9xfp   1/1     Running   0          29s   10.244.195.162   knode1   <none>           <none>
web1-6cc467757-nph9f   1/1     Running   0          29s   10.244.195.161   knode1   <none>           <none>
web1-6cc467757-tk6r8   1/1     Running   0          29s   10.244.69.221    knode2   <none>           <none>
web1-6cc467757-zl77j   1/1     Running   0          29s   10.244.195.159   knode1   <none>           <none>
kubectl drain knode2 --ignore-daemonsets --force
node/knode2 cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-qwb8n, calico-system/csi-node-driver-l2hmk, kube-system/kube-proxy-hp5ts
evicting pod tigera-operator/tigera-operator-54b47459dd-gdgrg
evicting pod calico-apiserver/calico-apiserver-76b5b7d597-ffjd6
evicting pod default/web1-6cc467757-587jl
evicting pod calico-system/calico-typha-85568b8955-mxld6
evicting pod default/web1-6cc467757-tk6r8
pod/tigera-operator-54b47459dd-gdgrg evicted
pod/calico-apiserver-76b5b7d597-ffjd6 evicted
pod/web1-6cc467757-tk6r8 evicted
pod/web1-6cc467757-587jl evicted
pod/calico-typha-85568b8955-mxld6 evicted
node/knode2 drained
kubectl get pod -o wide 
NAME                   READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
web1-6cc467757-2wk7f   1/1     Running   0          4m30s   10.244.195.163   knode1   <none>           <none>
web1-6cc467757-4b9hz   1/1     Running   0          13m     10.244.195.160   knode1   <none>           <none>
web1-6cc467757-l9xfp   1/1     Running   0          13m     10.244.195.162   knode1   <none>           <none>
web1-6cc467757-nph9f   1/1     Running   0          13m     10.244.195.161   knode1   <none>           <none>
web1-6cc467757-p95kf   1/1     Running   0          4m30s   10.244.195.164   knode1   <none>           <none>
web1-6cc467757-zl77j   1/1     Running   0          13m     10.244.195.159   knode1   <none>           <none>

The deployment1.yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web1
  template:
    metadata:
      labels:
        app: web1
    spec:
      containers:
      - name: web1
        image: nginx
        imagePullPolicy: IfNotPresent
# creates a Deployment with 6 replicas

10. taint

A taint marks a node as repelling pods: once a node is tainted, the default scheduler filters it out and will not place pods there, and if a pod is forced onto it with a node label selector, the pod stays Pending. A pod can still run on a tainted node if it declares a matching toleration.

Commands

kubectl taint node knode1 wudian=app:NoSchedule  # set a taint; prefer the key=value:NoSchedule form so a toleration can reference it
node/knode1 tainted
kubectl describe nodes knode1 | grep -i taint
Taints:             wudian=app:NoSchedule
kubectl apply -f pod1.yml
kubectl get pod -o wide 
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          13s   10.244.195.165   knode1   <none>           <none>
pod2   2/2     Running   0          18m   10.244.69.222    knode2   <none>           <none>

The pod1.yaml file

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/hostname: knode1
  tolerations:
    - key: "wudian"
      operator: "Equal"  # 如果当污点没有设置value时,可以是使用"Exists",删除value参数即可
      value: "app"
      effect: "NoSchedule"
  containers:
  - name: pod1
    image: nginx
    imagePullPolicy: IfNotPresent
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
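As noted in the comment above, when a taint is created with a key but no value, the toleration uses the Exists operator and omits the value field. A sketch of that variant of the tolerations block, assuming the same taint key:

```yaml
  tolerations:
  - key: "wudian"
    operator: "Exists"   # matches the key regardless of its value (or absence of one)
    effect: "NoSchedule"
```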
