Fixing kube-flannel pods stuck in CrashLoopBackOff status

Posted on 2025-1-2 17:00:00
kubectl describe pod -n kube-system kube-flannel-ds-amd64-42rl7

Name:               kube-flannel-ds-amd64-42rl7
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               node5/10.168.209.17
Start Time:         Wed, 22 Aug 2018 16:47:10 +0300
Labels:             app=flannel
                    controller-revision-hash=911701653
                    pod-template-generation=1
                    tier=node
Annotations:        <none>
Status:             Running
IP:                 10.168.209.17
Controlled By:      DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  docker://eb7ee47459a54d401969b1770ff45b39dc5768b0627eec79e189249790270169
    Image:         quay.io/coreos/flannel:v0.10.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 22 Aug 2018 16:47:24 +0300
      Finished:     Wed, 22 Aug 2018 16:47:24 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-9wmch (ro)
Containers:
  kube-flannel:
    Container ID:  docker://521b457c648baf10f01e26dd867b8628c0f0a0cc0ea416731de658e67628d54e
    Image:         quay.io/coreos/flannel:v0.10.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 30 Aug 2018 10:15:04 +0300
      Finished:     Thu, 30 Aug 2018 10:15:08 +0300
    Ready:          False
    Restart Count:  2136
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-42rl7 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-9wmch (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-9wmch:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-9wmch
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                  From            Message
  ----     ------   ----                 ----            -------
  Normal   Pulled   51m (x2128 over 7d)  kubelet, node5  Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
  Warning  BackOff  1m (x48936 over 7d)  kubelet, node5  Back-off restarting failed container
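Before digging into a single pod, it helps to see which flannel pods are failing across the whole cluster. A minimal sketch, assuming the `app=flannel` label from the stock flannel manifest; `not_ready_pods` is a helper name of my own, not a kubectl feature:

```shell
#!/bin/sh
# Print pods whose READY column is x/y with x != y (e.g. 0/1 during
# CrashLoopBackOff), together with their STATUS. Reads the tabular
# output of `kubectl get pods` on stdin.
not_ready_pods() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2]) print $1, $3 }'
}

# Only query a real cluster when kubectl is actually available.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n kube-system -l app=flannel | not_ready_pods
fi
```

The helper reads stdin, so it can also be pointed at saved `kubectl get pods` output when triaging after the fact.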
Check kube-controller-manager.yaml:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --allocate-node-cidrs=true
    - --cluster-cidr=192.168.0.0/24
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    image: k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
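The two flags in this manifest that drive podCIDR allocation are --allocate-node-cidrs=true and --cluster-cidr. A quick offline check can be sketched as follows; `check_cidr_flags` is a helper name of my own:

```shell
#!/bin/sh
# Print "ok" if the controller manager manifest enables node CIDR
# allocation, otherwise "missing flags". Reads the manifest on stdin
# so it can also be tested without a cluster.
check_cidr_flags() {
  awk '
    /--allocate-node-cidrs=true/ { alloc = 1 }
    /--cluster-cidr=/            { cidr = 1 }
    END { s = (alloc && cidr) ? "ok" : "missing flags"; print s }
  '
}

# Only check the file where a control-plane manifest actually exists.
if [ -f /etc/kubernetes/manifests/kube-controller-manager.yaml ]; then
  check_cidr_flags < /etc/kubernetes/manifests/kube-controller-manager.yaml
fi
```

If either flag is missing, the controller manager will never assign a podCIDR to any node, and every flannel pod will crash-loop with the "pod cidr not assigned" error shown below.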

kubectl logs --namespace kube-system kube-flannel-ds-amd64-5fx2

main.go:475] Determining IP address of default interface
main.go:488] Using interface with name eth0 and address 10.168.209.14
main.go:505] Defaulting external address to interface address (10.168.209.14)
kube.go:131] Waiting 10m0s for node controller to sync
kube.go:294] Starting kube subnet manager
kube.go:138] Node controller sync successful
main.go:235] Created subnet manager: Kubernetes Subnet Manager - node2
main.go:238] Installing signal handlers
main.go:353] Found network config - Backend type: vxlan
vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
main.go:280] Error registering network: failed to acquire lease: node "node2" pod cidr not assigned
main.go:333] Stopping shutdownHandler...
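The decisive line is `failed to acquire lease: node "node2" pod cidr not assigned`: the controller manager never allocated a pod CIDR to that node. To list every node missing a podCIDR, one option is a small sketch like this (assumes python3 is available on the admin host; `nodes_without_podcidr` is my helper name):

```shell
#!/bin/sh
# Print the name of every node whose spec has no podCIDR, parsing the
# JSON produced by `kubectl get nodes -o json` (read on stdin).
nodes_without_podcidr() {
  python3 -c '
import json, sys
for item in json.load(sys.stdin)["items"]:
    if not item.get("spec", {}).get("podCIDR"):
        print(item["metadata"]["name"])
'
}

# Only query a real cluster when kubectl is actually available.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o json | nodes_without_podcidr
fi
```

Any node this prints will keep crash-looping its flannel pod until it is given a podCIDR, either by the controller manager or by the manual patch below.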
Check the cluster CIDR configured for the controller manager:

cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep -i cluster-cidr
- --cluster-cidr=172.168.10.0/24

If a node still has no podCIDR assigned, you can set one manually:
kubectl patch node podname -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'
For example: kubectl patch node slave-node-1 -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'
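If several nodes are missing a podCIDR, patching them one by one is tedious. The loop below is a sketch, not part of the original fix: the 172.168.x.0/24 numbering scheme and the function name are my examples, the chosen CIDRs must fall inside the controller manager's --cluster-cidr, and a manually patched podCIDR does not survive the node being deleted and re-registered.

```shell
#!/bin/sh
# Assign a distinct /24 to every node that has no podCIDR yet.
# Review the CIDR scheme before running this against a real cluster.
patch_missing_podcidrs() {
  i=0
  for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
    cidr=$(kubectl get node "$node" -o jsonpath='{.spec.podCIDR}')
    if [ -z "$cidr" ]; then
      # Example scheme: node 0 gets 172.168.10.0/24, node 1 gets .11, ...
      kubectl patch node "$node" -p "{\"spec\":{\"podCIDR\":\"172.168.$((10 + i)).0/24\"}}"
    fi
    i=$((i + 1))
  done
}
```

Usage: source the file, check the node list first, then call `patch_missing_podcidrs`.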
sudo ifconfig cni0 down;
sudo ifconfig flannel.1 down;
sudo ip link delete cni0;
sudo ip link delete flannel.1;
To fix this, follow the steps below:
  • Step 0: Reset every node in the cluster. On all nodes run:
kubeadm reset --force;
  • Step 1: Bring down the cni0 and flannel.1 interfaces.
sudo ifconfig cni0 down;
sudo ifconfig flannel.1 down;
  • Step 2: Delete the cni0 and flannel.1 interfaces.
sudo ip link delete cni0;
sudo ip link delete flannel.1;
  • Step 3: Remove everything under /etc/cni/net.d/.
sudo rm -rf /etc/cni/net.d/;
  • Step 4: Bootstrap the Kubernetes cluster again.
kubeadm init --control-plane-endpoint="..." --pod-network-cidr=10.244.0.0/16;
  • Step 5: Re-deploy the CNI plugin (flannel).
  • Step 6: Restart the container runtime; here that is containerd.
systemctl restart containerd;
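The per-node steps above can be collected into one script. This is a sketch under my own assumptions (containerd runtime, the pod CIDR from Step 4; the control-plane endpoint stays elided as in the post), and it is destructive: it wipes the node's cluster state. Set DRY_RUN=echo to preview the commands instead of executing them.

```shell
#!/bin/sh
# Destructive node reset. Review before running as root on each node.
# run() prefixes every command with $DRY_RUN, so DRY_RUN=echo previews.
run() { ${DRY_RUN:-} "$@"; }

reset_node() {
  run kubeadm reset --force          # Step 0
  run ifconfig cni0 down             # Step 1
  run ifconfig flannel.1 down
  run ip link delete cni0            # Step 2
  run ip link delete flannel.1
  run rm -rf /etc/cni/net.d/         # Step 3
  run systemctl restart containerd   # Step 6
}

# Step 4, on the first control-plane node only (endpoint intentionally
# left elided, as in the original post):
#   kubeadm init --control-plane-endpoint="..." --pod-network-cidr=10.244.0.0/16
```

Run `DRY_RUN=echo reset_node` first to review, then `reset_node` as root on each node, and re-deploy flannel afterwards (Step 5).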
