易陆发现互联网技术论坛

Kubernetes cluster deployment steps (k8s implementation steps)

Posted 2024-9-17 10:35:55


1. Prepare the environment

Server plan:

Name             IP address
k8s-master       172.24.118.182
k8s-node1        172.24.118.183
k8s-node2        172.24.118.184

Server requirements:
  Minimum recommended hardware: 4 CPU cores, 8 GB RAM, 50 GB disk
  The servers can reach the Internet, so Docker images can be pulled online

Software environment:

Software        Version
OS              CentOS 7.9 x86_64
Docker          22 or later (CE)
Kubernetes      1.28

2. Initial configuration

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent; takes effect after a reboot
setenforce 0                                         # temporary; effective immediately
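Note that the sed above does a blind substitution, so it would also rewrite any comment line containing "enforcing"; a stricter pattern such as `s/^SELINUX=enforcing/SELINUX=disabled/` avoids that. The edit can be rehearsed safely on a copy first. A minimal sketch (the file contents below are illustrative; the real file is /etc/selinux/config):

```shell
# Dry-run the SELinux edit on a throwaway copy of the config file
cat > /tmp/selinux-config.demo <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/enforcing/disabled/' /tmp/selinux-config.demo
grep '^SELINUX=' /tmp/selinux-config.demo   # -> SELINUX=disabled
```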
# Disable swap (the kubelet refuses to start while swap is enabled)
swapoff -a                            # temporarily disable the swap partition
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent; note that some installs allocate no swap partition by default
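The fstab edit can likewise be rehearsed on a copy before touching the real file. A minimal sketch with hypothetical sample entries (your /etc/fstab will differ):

```shell
# Dry-run the swap-commenting edit on a sample fstab
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo   # comment out every line mentioning swap
grep '^#' /tmp/fstab.demo                  # -> #/dev/mapper/centos-swap swap ...
```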
Set the hostnames according to the plan.

Master node:
[root@test111 ~]# hostnamectl set-hostname kubernetes-master

Worker nodes:
# hostnamectl set-hostname kubernetes-node1
# hostnamectl set-hostname kubernetes-node2
Enable the kernel parameters that make bridged packets traverse iptables:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Alternatively, put the settings in /etc/sysctl.conf, then apply them:

sysctl -p                          # reload /etc/sysctl.conf only
sysctl --system                    # reload every kernel parameter file, including /etc/sysctl.d/*.conf
sysctl -p /etc/sysctl.d/k8s.conf   # reload just this one file
3. Configure the time synchronization service

yum install -y chrony

(Only the key settings are shown here.)
Master node:

vim /etc/chrony.conf
server xxxx  iburst
allow x.x.x.x/24

Then restart the chronyd service.
Worker nodes:

vim /etc/chrony.conf
server xxxx  iburst

Restart chronyd and confirm that the clocks are in sync.
4. Configure /etc/hosts name resolution

Add the following entries on every node:

172.24.110.182 kubernetes-master
172.24.110.183 kubernetes-node1
172.24.110.184 kubernetes-node2
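A quick way to sanity-check the name-to-IP mapping is to query the file directly. Sketched here on a throwaway copy (the real target is /etc/hosts on every node):

```shell
# Look up one hostname in a sample hosts file
cat > /tmp/hosts.demo <<'EOF'
172.24.110.182 kubernetes-master
172.24.110.183 kubernetes-node1
172.24.110.184 kubernetes-node2
EOF
awk '$2 == "kubernetes-node1" { print $1 }' /tmp/hosts.demo   # -> 172.24.110.183
```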
5. Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2024-09-17 14:44:40--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 124.95.172.94, 124.95.172.91, 124.95.153.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|124.95.172.94|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

100%[=========================================>] 2,081       --.-K/s   in 0s

2024-09-17 14:44:40 (122 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2081/2081]
[root@kubernetes-master ~]# yum install -y docker-ce
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
docker-ce-stable                                              | 3.5 kB  00:00:00
(1/2): docker-ce-stable/7/x86_64/updateinfo                   |   55 B  00:00:00
(2/2): docker-ce-stable/7/x86_64/primary_db                   | 152 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:26.1.4-1.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: containerd.io >= 1.6.24 for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
---> Package containerd.io.x86_64 0:1.6.33-3.1.el7 will be installed
---> Package docker-ce-cli.x86_64 1:26.1.4-1.el7 will be installed
---> Package docker-ce-rootless-extras.x86_64 0:26.1.4-1.el7 will be installed
---> Package docker-buildx-plugin.x86_64 0:0.14.1-1.el7 will be installed
---> Package docker-compose-plugin.x86_64 0:2.27.1-1.el7 will be installed
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                     Arch     Version                    Repository         Size
================================================================================
Installing:
 docker-ce                   x86_64   3:26.1.4-1.el7             docker-ce-stable   27 M
Installing for dependencies:
 container-selinux           noarch   2:2.119.2-1.911c772.el7_8  extras             40 k
 containerd.io               x86_64   1.6.33-3.1.el7             docker-ce-stable   35 M
 docker-buildx-plugin        x86_64   0.14.1-1.el7               docker-ce-stable   14 M
 docker-ce-cli               x86_64   1:26.1.4-1.el7             docker-ce-stable   15 M
 docker-ce-rootless-extras   x86_64   26.1.4-1.el7               docker-ce-stable  9.4 M
 docker-compose-plugin       x86_64   2.27.1-1.el7               docker-ce-stable   13 M
 fuse-overlayfs              x86_64   0.7.2-6.el7_8              extras             54 k
 fuse3-libs                  x86_64   3.6.1-4.el7                extras             82 k
 slirp4netns                 x86_64   0.4.3-4.el7_8              extras             81 k

Transaction Summary
================================================================================
Install  1 Package (+9 Dependent packages)

Total download size: 114 M
Installed size: 401 M
Downloading packages:
...
Total                                              3.9 MB/s | 114 MB  00:00:29
Retrieving key from https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch      1/10
  ...
  Installing : 3:docker-ce-26.1.4-1.el7.x86_64                        10/10

Installed:
  docker-ce.x86_64 3:26.1.4-1.el7

Dependency Installed:
  container-selinux.noarch 2:2.119.2-1.911c772.el7_8   containerd.io.x86_64 0:1.6.33-3.1.el7
  docker-buildx-plugin.x86_64 0:0.14.1-1.el7           docker-ce-cli.x86_64 1:26.1.4-1.el7
  docker-ce-rootless-extras.x86_64 0:26.1.4-1.el7      docker-compose-plugin.x86_64 0:2.27.1-1.el7
  fuse-overlayfs.x86_64 0:0.7.2-6.el7_8                fuse3-libs.x86_64 0:3.6.1-4.el7
  slirp4netns.x86_64 0:0.4.3-4.el7_8

Complete!
[root@kubernetes-master ~]# systemctl enable docker.service; systemctl start docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@kubernetes-master ~]# cat > /etc/docker/daemon.json <<EOF
{
   "registry-mirrors": ["https://q9n10oke.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com","https://docker.m.daocloud.io"],
   "insecure-registries": ["8.141.94.237:5000"]
}
EOF
[root@kubernetes-master ~]# systemctl restart docker.service
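A malformed daemon.json prevents dockerd from starting, so it is worth validating the JSON before the restart. A small sketch on a throwaway copy of the file (mirror list abbreviated):

```shell
# Validate daemon.json syntax before restarting Docker
cat > /tmp/daemon.json <<'EOF'
{
   "registry-mirrors": ["https://docker.m.daocloud.io"],
   "insecure-registries": ["8.141.94.237:5000"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```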
[root@kubernetes-master ~]# docker info
Client: Docker Engine - Community
 Version:    26.1.4
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 26.1.4
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 3.10.0-1160.24.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.7GiB
 Name: kubernetes-master
 ID: 7a997224-186c-4ccb-a45b-e0f1ed3e65e3
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  8.141.94.237:5000
  127.0.0.0/8
 Registry Mirrors:
  https://q9n10oke.mirror.aliyuncs.com/
  https://registry.docker-cn.com/
  http://hub-mirror.c.163.com/
  https://docker.m.daocloud.io/
 Live Restore Enabled: false
Install Docker on the worker nodes the same way; the steps are omitted here.
6. Install cri-dockerd (the shim through which Kubernetes talks to Docker) on all nodes:

# wget https://github.com/Mirantis/cri- ... .2-3.el7.x86_64.rpm
--2024-09-17 15:04:04--  https://github.com/Mirantis/cri- ... .2-3.el7.x86_64.rpm
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubuserconten ... tion%2Foctet-stream [following]
--2024-09-17 15:04:05--  https://objects.githubuserconten ... tion%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9642368 (9.2M) [application/octet-stream]
Saving to: ‘cri-dockerd-0.3.2-3.el7.x86_64.rpm’

100%[=========================================>] 9,642,368   7.33MB/s   in 1.3s

2024-09-17 15:04:07 (7.33 MB/s) - ‘cri-dockerd-0.3.2-3.el7.x86_64.rpm’ saved [9642368/9642368]

Install it:

rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cri-dockerd-3:0.3.2-3.el7        ################################# [100%]

Configure it to pull the pause image from a domestic (Chinese) mirror:

vim /usr/lib/systemd/system/cri-docker.service

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
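The ExecStart edit can also be made non-interactively with sed, which is handy when configuring all nodes. Sketched here against a stub copy of the unit (the real file is /usr/lib/systemd/system/cri-docker.service):

```shell
# Rewrite the ExecStart line of a stub cri-docker unit file
cat > /tmp/cri-docker.service.demo <<'EOF'
[Service]
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
EOF
sed -i 's|^ExecStart=/usr/bin/cri-dockerd.*|ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9|' /tmp/cri-docker.service.demo
grep -q 'pause:3.9' /tmp/cri-docker.service.demo && echo "unit updated"
```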
Reload systemd so it picks up the unit file:

systemctl daemon-reload

Enable the service at boot and start it:

systemctl enable cri-docker.service && systemctl start cri-docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/cri-docker.service to /usr/lib/systemd/system/cri-docker.service.
7. Deploy Kubernetes

kubeadm

kubeadm is the official community tool for quickly standing up a Kubernetes cluster: kubeadm init bootstraps the control plane, and kubeadm join adds nodes to the cluster.

kubectl

kubectl is the command-line management tool for a Kubernetes cluster; the cluster can also be managed through kubernetes-dashboard.

kubelet

The kubelet is the Master's agent on each Node; Kubernetes manages the worker nodes through it. A kubelet service process runs on every Node: it carries out the tasks the Master assigns to that node, periodically reports the node's resource usage back to the Master, and manages the Pods and their containers on the node.
Configure the yum repository:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
Install kubeadm, kubelet, and kubectl (on all nodes):

yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.28.0-0 will be updated
---> Package kubeadm.x86_64 0:1.28.2-0 will be an update
---> Package kubectl.x86_64 0:1.28.0-0 will be updated
---> Package kubectl.x86_64 0:1.28.2-0 will be an update
---> Package kubelet.x86_64 0:1.28.0-0 will be updated
---> Package kubelet.x86_64 0:1.28.2-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch          Version           Repository          Size
================================================================================
Updating:
 kubeadm          x86_64        1.28.2-0          kubernetes          11 M
 kubectl          x86_64        1.28.2-0          kubernetes          11 M
 kubelet          x86_64        1.28.2-0          kubernetes          21 M

Transaction Summary
================================================================================
Upgrade  3 Packages

Total download size: 43 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/3): a24e42254b5a14b67b58c4633d29c27370c28ed6796a80c455a65acc813ff374-kubectl-1.28.2-0.x86_64.rpm  |  11 MB  00:00:05
(2/3): cee73f8035d734e86f722f77f1bf4e7d643e78d36646fd000148deb8af98b61c-kubeadm-1.28.2-0.x86_64.rpm  |  11 MB  00:00:05
(3/3): e1cae938e231bffa3618f5934a096bd85372ee9b1293081f5682a22fe873add8-kubelet-1.28.2-0.x86_64.rpm  |  21 MB  00:00:05
--------------------------------------------------------------------------------
Total                                                3.8 MB/s |  43 MB  00:00:11
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.28.2-0.x86_64    1/6
  Updating   : kubectl-1.28.2-0.x86_64    2/6
  Updating   : kubeadm-1.28.2-0.x86_64    3/6
  Cleanup    : kubeadm-1.28.0-0.x86_64    4/6
  Cleanup    : kubectl-1.28.0-0.x86_64    5/6
  Cleanup    : kubelet-1.28.0-0.x86_64    6/6
  Verifying  : kubectl-1.28.2-0.x86_64    1/6
  Verifying  : kubelet-1.28.2-0.x86_64    2/6
  Verifying  : kubeadm-1.28.2-0.x86_64    3/6
  Verifying  : kubectl-1.28.0-0.x86_64    4/6
  Verifying  : kubeadm-1.28.0-0.x86_64    5/6
  Verifying  : kubelet-1.28.0-0.x86_64    6/6

Updated:
  kubeadm.x86_64 0:1.28.2-0        kubectl.x86_64 0:1.28.2-0        kubelet.x86_64 0:1.28.2-0

Complete!

Enable kubelet at boot:

systemctl enable kubelet.service
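After the install and enable steps, a quick sanity check (a generic sketch, not part of the original procedure) confirms all three binaries landed on the PATH:

```shell
# Report which of the three Kubernetes binaries are installed
for bin in kubeadm kubectl kubelet; do
  command -v "$bin" >/dev/null 2>&1 && echo "$bin: installed" || echo "$bin: MISSING"
done
```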

List the images the cluster needs:

[root@kubernetes-master ~]# kubeadm config images list
I0918 14:09:39.041429   30436 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.14
registry.k8s.io/kube-controller-manager:v1.28.14
registry.k8s.io/kube-scheduler:v1.28.14
registry.k8s.io/kube-proxy:v1.28.14
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
8. Deploy the cluster: initialize Kubernetes

# Run the kubeadm init command

After initialization completes, follow the printed instructions to copy the kubectl credentials to the default (or a chosen) path.

The main flags:

--kubernetes-version=1.28
    The Kubernetes version to install.
--apiserver-advertise-address=x.x.x.x
    The IP address of the cluster's master node, i.e. the node where the apiserver runs; other components and nodes use this address to reach the apiserver.
--image-repository registry.aliyuncs.com/google_containers
    The container image repository for the Kubernetes component images.
--service-cidr=10.10.0.0/16
    The IP address range for Kubernetes Services.
--pod-network-cidr=10.122.0.0/16
    The IP address range for Kubernetes Pods.

In short, the command initializes a Kubernetes 1.28 cluster, with the advertised address as the master node and the given Service and Pod IP ranges.
Run the initialization on the master node:

kubeadm init --apiserver-advertise-address=172.24.110.182 --node-name=kubernetes-master --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.2 --service-cidr=100.177.100.0/12 --pod-network-cidr=100.233.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
Example:

[root@kubernetes-master ~]# kubeadm init --apiserver-advertise-address=172.24.110.182 --node-name=kubernetes-master --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.2 --service-cidr=100.177.100.0/12 --pod-network-cidr=100.233.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [100.176.0.1 172.24.110.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [172.24.110.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [172.24.110.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.505264 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 0fqjub.taqnhr1lskcovh7d
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.24.110.182:6443 --token 0fqjub.taqnhr1lskcovh7d \
        --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd

Initialization complete.

Related images:

# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.0   bb5e0dde9054   13 months ago   126MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.0   4be79c38a4ba   13 months ago   122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.0   f6f496300a2a   13 months ago   60.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.0   ea1030da44aa   13 months ago   73.1MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0   73deb9a3f702   16 months ago   294MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1   ead0a4a53df8   19 months ago   53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9       e6f181688397   23 months ago   744kB
Master node configuration. Check the kubectl client version:

# kubectl version --client
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

# After initialization succeeds, run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Check the node status with kubectl:

[root@kubernetes-master ~]# kubectl get node
NAME                STATUS     ROLES           AGE   VERSION
kubernetes-master   NotReady   control-plane   19m   v1.28.2

Note: the node stays in the "NotReady" state because no network plugin has been deployed yet.
List the images Kubernetes depends on:

[root@kubernetes-master ~]# kubeadm config images list
I0917 15:50:35.949562   30410 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.14
registry.k8s.io/kube-controller-manager:v1.28.14
registry.k8s.io/kube-scheduler:v1.28.14
registry.k8s.io/kube-proxy:v1.28.14
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
# On the master node, copy the kubeconfig to the worker nodes (so kubectl can be used there):
        scp /etc/kubernetes/admin.conf 172.24.110.183:/etc/kubernetes/
        scp /etc/kubernetes/admin.conf 172.24.110.184:/etc/kubernetes/

To keep the file permissions intact, you can sync with rsync instead:
[root@kubernetes-master ~]# rsync -avzP -e 'ssh -p 22' /etc/kubernetes/admin.conf root@172.24.110.183:/etc/kubernetes/
ssh: connect to host 172.24.110.183 port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.2]
[root@kubernetes-master ~]# rsync -avzP -e 'ssh -p 60028' /etc/kubernetes/admin.conf root@172.24.110.183:/etc/kubernetes/
The authenticity of host '[172.24.110.183]:60028 ([172.24.110.183]:60028)' can't be established.
ECDSA key fingerprint is SHA256:Tvzi0ICzurMYEPySzerkOmwk/o7XHxmABVKRigofHzg.
ECDSA key fingerprint is MD5:f0:92:26:fd:da:d3:e4:db:be:36:b1:fe:d6:2b:65:25.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[172.24.110.183]:60028' (ECDSA) to the list of known hosts.
root@172.24.110.183's password:
sending incremental file list
admin.conf
          5,646 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)

sent 3,920 bytes  received 35 bytes  168.30 bytes/sec
total size is 5,646  speedup is 1.43
[root@kubernetes-master ~]# rsync -avzP -e 'ssh -p 60028' /etc/kubernetes/admin.conf root@172.24.110.184:/etc/kubernetes/
The authenticity of host '[172.24.110.184]:60028 ([172.24.110.184]:60028)' can't be established.
ECDSA key fingerprint is SHA256:Tvzi0ICzurMYEPySzerkOmwk/o7XHxmABVKRigofHzg.
ECDSA key fingerprint is MD5:f0:92:26:fd:da:d3:e4:db:be:36:b1:fe:d6:2b:65:25.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[172.24.110.184]:60028' (ECDSA) to the list of known hosts.
root@172.24.110.184's password:
sending incremental file list
admin.conf
          5,646 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)

sent 3,920 bytes  received 35 bytes  878.89 bytes/sec
total size is 5,646  speedup is 1.43

(The first attempt fails because sshd on these nodes listens on port 60028 rather than the default 22.)

Join the worker nodes to the cluster (run on each node). Use the kubeadm join command printed by the init output above:

kubeadm join 172.24.110.182:6443 --token 0fqjub.taqnhr1lskcovh7d --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd --cri-socket=unix:///var/run/cri-dockerd.sock
Regenerating tokens:

[root@kubernetes-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
0fqjub.taqnhr1lskcovh7d   23h         2024-09-18T07:21:06Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@kubernetes-master ~]# kubeadm token delete 0fqjub.taqnhr1lskcovh7d
bootstrap token "0fqjub" deleted

[root@kubernetes-master ~]# kubeadm token list

Create a long-lived token (36500 hours; --ttl takes hours here, so this is about 4 years, not 10):
[root@kubernetes-master ~]# kubeadm token create --ttl 36500h
pllb0d.eyjtekjjc542k16c

Create a token valid for 18250 hours (about 2 years):
[root@kubernetes-master ~]# kubeadm token create --ttl 18250h
gpz9o9.terifm9742ermj6e

Create a token that never expires:

[root@kubernetes-master ~]# kubeadm token create --ttl 0
nt8qzn.bb4tm414rnww2mt2

Delete a token:

[root@kubernetes-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION   EXTRA GROUPS
gpz9o9.terifm9742ermj6e   2y          2026-10-17T18:11:13Z   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
nt8qzn.bb4tm414rnww2mt2   <forever>   <never>                authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
pllb0d.eyjtekjjc542k16c   4y          2028-11-16T04:10:37Z   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
[root@kubernetes-master ~]# kubeadm token delete nt8qzn.bb4tm414rnww2mt2
bootstrap token "nt8qzn" deleted

Get the CA certificate hash:

[root@kubernetes-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd
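The hash pipeline can be exercised end-to-end without a cluster by pointing it at a throwaway self-signed certificate (a sketch; the resulting hash will of course differ from the real CA's):

```shell
# Generate a scratch self-signed cert, then hash its public key the same way
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'   # 64 hex characters
rm -rf "$tmp"
```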

Alternatively, generate the complete join command in one step:

[root@kubernetes-master ~]# kubeadm token create --ttl 18250h --print-join-command
kubeadm join 172.24.110.182:6443 --token 1kis96.cklh7okui7j4fcr0 --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd
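Note that --ttl with an h suffix is counted in hours, so 18250h is roughly two years and 36500h roughly four. A small sketch (not from the original post) to compute the hour value for a target lifetime in years:

```shell
# kubeadm's --ttl unit here is hours; compute hours for a target lifetime in years
years=10
hours=$(( years * 365 * 24 ))
echo "--ttl ${hours}h"   # prints: --ttl 87600h
```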

To join the other nodes, run the printed command with the --cri-socket=unix:///var/run/cri-dockerd.sock parameter appended:

kubeadm join 172.24.110.182:6443 --token 1kis96.cklh7okui7j4fcr0 --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd --cri-socket=unix:///var/run/cri-dockerd.sock

Example:

[root@kubernetes-node1 ~]# kubeadm join 172.24.110.182:6443 --token 1kis96.cklh7okui7j4fcr0 --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd  --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The other nodes join the same way (output omitted).

Check that the nodes have registered:
[root@kubernetes-master ~]# kubectl get nodes
NAME                STATUS     ROLES           AGE   VERSION
kubernetes-master   NotReady   control-plane   59m   v1.28.2
kubernetes-node1    NotReady   <none>          78s   v1.28.2
kubernetes-node2    NotReady   <none>          69s   v1.28.2

Note: the nodes are "NotReady" because no network plugin has been deployed yet.
Check the pod status:

[root@kubernetes-master ~]# kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-66f779496c-cqf5k                    0/1     Pending   0          60m
kube-system   coredns-66f779496c-lnxt4                    0/1     Pending   0          60m
kube-system   etcd-kubernetes-master                      1/1     Running   0          60m
kube-system   kube-apiserver-kubernetes-master            1/1     Running   0          60m
kube-system   kube-controller-manager-kubernetes-master   1/1     Running   0          60m
kube-system   kube-proxy-676dx                            1/1     Running   0          2m37s
kube-system   kube-proxy-kkt8g                            1/1     Running   0          60m
kube-system   kube-proxy-qgpbt                            1/1     Running   0          2m46s
kube-system   kube-scheduler-kubernetes-master            1/1     Running   0          60m
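To surface only the pods that are not yet Running from such a listing, the STATUS column can be filtered with awk (demonstrated here on a captured snippet; pipe live `kubectl get pods -A` output through the same filter):

```shell
# Print namespace/name and status for pods whose STATUS column is not "Running"
awk 'NR > 1 && $4 != "Running" {print $1 "/" $2 " -> " $4}' <<'EOF'
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66f779496c-cqf5k   0/1     Pending   0          60m
kube-system   etcd-kubernetes-master     1/1     Running   0          60m
EOF
```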

8 F, O" p0 j1 ^: y+ P( d4 {- W0 q8 P7 k% j+ }
安装网络组件:, \3 J3 h: p8 V
9 t* E7 {% `. X6 I
不建议安装kube-flannel的,安装calico.yaml
7 j7 x- |4 {: O( X9 M) C$ ^# c; O+ n1 P; t+ f$ S9 h3 H* d
[root@kubernetes-master ~]# wget https://github.com/flannel-io/fl ... .2/kube-flannel.yml
--2024-09-17 16:27:41--  https://github.com/flannel-io/fl ... .2/kube-flannel.yml
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubuserconten ... tion%2Foctet-stream [following]
--2024-09-17 16:27:42--  https://objects.githubuserconten ... tion%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4459 (4.4K) [application/octet-stream]
Saving to: ‘kube-flannel.yml’

100%[==========================================================>] 4,459       --.-K/s   in 0s

2024-09-17 16:27:42 (17.4 MB/s) - ‘kube-flannel.yml’ saved [4459/4459]

+ C' n" g5 _0 }3 n6 w8 b- q1 u$ J+ j; @1 p7 |6 T. R) K* Y6 B

- V1 @7 b9 D5 F1 @' ~) v执行安装(master节点)$ P: j% L9 X9 h; R' e. H& c
/ @6 b6 ?- A/ A/ o1 Z

0 u; F8 e% O' L[root@kubernetes-master ~]#  kubectl apply -f calico.yaml. v6 _& @1 s( R, y2 M. Z' T8 \0 e" y
namespace/kube-flannel created1 V: O$ A' o/ j5 s% W3 N
serviceaccount/flannel created
  M3 u  {0 V! Vclusterrole.rbac.authorization.k8s.io/flannel created8 g" V0 T& F. Y" o$ t
clusterrolebinding.rbac.authorization.k8s.io/flannel created3 B2 w% O8 y$ t7 _# ^& e
configmap/kube-flannel-cfg created
: |4 y2 w0 E* z4 v( a( mdaemonset.apps/kube-flannel-ds created
9 r& j# Z8 T7 _2 I, J0 h* u9 D: ][root@kubernetes-master ~]# ) t. p2 @( i9 C/ u


Check pod status again:

[root@kubernetes-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   USAGES/DESCRIPTION
1kis96.cklh7okui7j4fcr0   2y          2026-10-17T18:16:02Z   authentication,signing   system:bootstrappers:kubeadm:default-node-token
[root@kubernetes-master ~]# kubectl get node
NAME                STATUS   ROLES           AGE   VERSION
kubernetes-master   Ready    control-plane   72m   v1.28.2
kubernetes-node1    Ready    <none>          14m   v1.28.2
kubernetes-node2    Ready    <none>          14m   v1.28.2
[root@kubernetes-master ~]# kubectl get pod -A
NAMESPACE      NAME                                        READY   STATUS              RESTARTS   AGE
kube-flannel   kube-flannel-ds-k6mpb                       0/1     Init:0/2            0          45s
kube-flannel   kube-flannel-ds-l68ft                       0/1     Init:1/2            0          45s
kube-flannel   kube-flannel-ds-th9kz                       0/1     Init:1/2            0          45s
kube-system    coredns-66f779496c-cqf5k                    0/1     ContainerCreating   0          72m
kube-system    coredns-66f779496c-lnxt4                    0/1     ContainerCreating   0          72m
kube-system    etcd-kubernetes-master                      1/1     Running             0          72m
kube-system    kube-apiserver-kubernetes-master            1/1     Running             0          72m
kube-system    kube-controller-manager-kubernetes-master   1/1     Running             0          72m
kube-system    kube-proxy-676dx                            1/1     Running             0          14m
kube-system    kube-proxy-kkt8g                            1/1     Running             0          72m
kube-system    kube-proxy-qgpbt                            1/1     Running             0          14m
kube-system    kube-scheduler-kubernetes-master            1/1     Running             0          72m

Note:

      #Worker nodes cannot run kubectl, because they do not have the admin.conf file.
      #To use kubectl on a worker node, copy admin.conf over to it and then run:
      scp root@master:/etc/kubernetes/admin.conf /etc/kubernetes/
      echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
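The two commands above can be driven from the master for both workers in one loop. A minimal sketch — it only prints the commands so you can review them before running; the node names come from the inventory at the top of the thread, and passwordless root SSH is an assumption:

```shell
# Print the copy/configure commands for each worker node.
# Pipe the output to `sh` (or run the lines by hand) to apply them.
for node in kubernetes-node1 kubernetes-node2; do
  echo "scp /etc/kubernetes/admin.conf root@${node}:/etc/kubernetes/admin.conf"
  echo "ssh root@${node} \"echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile\""
done
```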

Install kubernetes-dashboard (master):

[root@kubernetes-master ~]# wget  https://raw.githubusercontent.co ... oy/recommended.yaml
--2024-09-18 14:34:55--  https://raw.githubusercontent.co ... oy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7621 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

100%[==========================================================>] 7,621       --.-K/s   in 0.001s

2024-09-18 14:34:55 (11.2 MB/s) - ‘recommended.yaml’ saved [7621/7621]

[root@kubernetes-master ~]# ls
calico.yaml  cri-dockerd-0.3.2-3.el7.x86_64.rpm  kube-flannel.yml  qemu-guest-agent-1.5.3-ksyun.x86_64.rpm  recommended.yaml  sudo-1.9.5-3.el7.x86_64.rpm
3 ~$ H9 p, i/ w. D0 e8 c. P
, N; G9 g- f0 o+ f+ L( e- I/ _[url=]recommended.yaml[/url]
7 @5 e& ^% s0 ?2 ^+ J- W$ d: \& s, a. ^3 _" M$ O

#Edit recommended.yaml: find the Service section and modify it as follows.
#Adding a NodePort to the Service lets you reach the dashboard at <node-ip>:<port>.
vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  #added: set the Service type to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32333   #added: the node port to bind (default range 30000-32767); auto-assigned if omitted
  selector:
    k8s-app: kubernetes-dashboard
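Before applying, it is worth sanity-checking the two added lines. A runnable sketch — the heredoc stands in for the edited Service excerpt; against the real file, point the same grep/awk at recommended.yaml itself:

```shell
# Write the edited Service excerpt to a temp file, then verify both additions.
f=$(mktemp)
cat > "$f" <<'EOF'
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32333
EOF

# 1) the type change is present
grep -q 'type: NodePort' "$f" && echo "type: NodePort present"

# 2) the chosen nodePort is inside the default NodePort range
np=$(awk '/nodePort:/ {print $2}' "$f")
if [ "$np" -ge 30000 ] && [ "$np" -le 32767 ]; then
  echo "nodePort $np is inside the default 30000-32767 range"
fi
```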

#Create the dashboard
kubectl create -f recommended.yaml

[root@kubernetes-master ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
* v3 [" V! y) p
#查看所有pod
5 x0 B7 Y/ J/ v! D* W& C7 Ckubectl get pods --all-namespaces3 `9 x7 r8 S) n2 H1 a8 f# T
[root@kubernetes-master ~]# kubectl get pods --all-namespaces- R% Y6 x; t  P7 F9 e7 p
NAMESPACE              NAME                                         READY   STATUS              RESTARTS         AGE
6 y: v6 f* q0 z7 ykube-flannel           kube-flannel-ds-k6mpb                        0/1     CrashLoopBackOff    263 (118s ago)   22h. k% s2 N# W: |1 Q5 W8 J2 c' f7 A
kube-flannel           kube-flannel-ds-l68ft                        0/1     CrashLoopBackOff    263 (100s ago)   22h' W- p6 ~/ }8 e& F' n
kube-flannel           kube-flannel-ds-th9kz                        0/1     CrashLoopBackOff    262 (4m ago)     22h
4 N7 ^" {6 m% z7 _kube-system            calico-kube-controllers-7d64c8fdd5-c8klr     1/1     Running             0                24m7 L8 i: k; a) j( i2 _
kube-system            calico-node-574ht                            1/1     Running             0                24m% S, p1 W& X6 e* @
kube-system            calico-node-mgn28                            1/1     Running             0                24m+ J( c3 d3 Z; V% e5 [
kube-system            calico-node-nglnx                            1/1     Running             0                24m6 `: a7 _5 g% n
kube-system            coredns-66f779496c-cqf5k                     1/1     Running             0                23h
* H. m3 n" v7 C, I" |kube-system            coredns-66f779496c-lnxt4                     1/1     Running             0                23h6 w+ q, z8 m+ y4 _4 g& m
kube-system            etcd-kubernetes-master                       1/1     Running             0                23h
+ Y5 P4 ]9 ~: e3 G- Lkube-system            kube-apiserver-kubernetes-master             1/1     Running             1 (12h ago)      23h% ?6 \( F! P, d
kube-system            kube-controller-manager-kubernetes-master    1/1     Running             15               23h( d/ G* V- ]5 A! D" \% t& S
kube-system            kube-proxy-676dx                             1/1     Running             0                22h
2 U/ l% H8 ?3 ykube-system            kube-proxy-kkt8g                             1/1     Running             0                23h9 V  ^) n* W5 e; g9 A
kube-system            kube-proxy-qgpbt                             1/1     Running             0                22h: ?; ?$ a4 o& X. g) `. \( U
kube-system            kube-scheduler-kubernetes-master             1/1     Running             16               23h6 o, f! j- Z. M+ W! W( {+ |4 R
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-bggwp   0/1     ContainerCreating   0                23s
& L7 I. D! O0 {. ~kubernetes-dashboard   kubernetes-dashboard-746fbfd67c-8xbmk        0/1     ContainerCreating   0                23s

Check kubernetes-dashboard status:

[root@kubernetes-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-5657497c4c-bggwp   0/1     ImagePullBackOff    0          2m36s
kubernetes-dashboard-746fbfd67c-8xbmk        0/1     ContainerCreating   0          2m36s

Fixing ImagePullBackOff

#View the pod's details:
[root@kubernetes-master ~]# kubectl describe pod/kubernetes-dashboard-746fbfd67c-8xbmk --namespace=kubernetes-dashboard
Name:             kubernetes-dashboard-746fbfd67c-8xbmk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             kubernetes-node1/172.24.110.183
Start Time:       Wed, 18 Sep 2024 14:45:16 +0800
Labels:           k8s-app=kubernetes-dashboard
                  pod-template-hash=746fbfd67c
Annotations:      cni.projectcalico.org/containerID: 7651e89375fa07f03a7594f82dc3c5a14b4fb63afb6f85006dc7f1d5464ff625
                  cni.projectcalico.org/podIP: 100.233.129.65/32
                  cni.projectcalico.org/podIPs: 100.233.129.65/32
Status:           Pending
SeccompProfile:   RuntimeDefault
IP:               100.233.129.65
IPs:
  IP:           100.233.129.65
Controlled By:  ReplicaSet/kubernetes-dashboard-746fbfd67c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         kubernetesui/dashboard:v2.6.1
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9w2d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-r9w2d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  8m43s                default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-746fbfd67c-8xbmk to kubernetes-node1
  Warning  Failed     34s (x2 over 5m55s)  kubelet            Failed to pull image "kubernetesui/dashboard:v2.6.1": rpc error: code = Canceled desc = context canceled
  Warning  Failed     34s (x2 over 5m55s)  kubelet            Error: ErrImagePull
  Normal   BackOff    21s (x2 over 5m55s)  kubelet            Back-off pulling image "kubernetesui/dashboard:v2.6.1"
  Warning  Failed     21s (x2 over 5m55s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    6s (x3 over 8m42s)   kubelet            Pulling image "kubernetesui/dashboard:v2.6.1"


The describe output above gives the key piece of information:

the image that failed to pull is kubernetesui/dashboard:v2.6.1
Pull the image manually:

[root@kubernetes-master ~]# docker pull kubernetesui/dashboard:v2.6.1
v2.6.1: Pulling from kubernetesui/dashboard
596ae5b8318a: Pull complete
b721c920bca6: Pull complete
Digest: sha256:290bebc3cd96c22b6f89e7b21f5c2b16ce5c275a0ec2c2de10e0d8b9dd110289
Status: Downloaded newer image for kubernetesui/dashboard:v2.6.1
docker.io/kubernetesui/dashboard:v2.6.1

#Save the image to a tar archive
docker save -o k8s-dashboard.tar kubernetesui/dashboard:v2.6.1
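Pulling on the master alone does not fix the pod — the kubelet on kubernetes-node1 is the one that needs the image, which is why the tar archive is made. A sketch that prints the copy-and-load command for each worker (node names and passwordless root SSH are assumptions); review the lines, then run them or pipe to `sh`:

```shell
# Print the commands that ship k8s-dashboard.tar to each worker and load it
# into that node's local docker image cache.
for node in kubernetes-node1 kubernetes-node2; do
  echo "scp k8s-dashboard.tar root@${node}:/root/"
  echo "ssh root@${node} docker load -i /root/k8s-dashboard.tar"
done
```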
# F3 B6 n- K/ L6 ?
% m3 e/ Z; T" M7 I- }8 g3 L( q6 _

2 g3 k% w) P. j1 A8 W( u' B8 |: W' D& c6 r' x" S
不再使用create ,而是使用apply更新:; ^/ a7 F/ y1 Q5 v

( P6 B3 n3 k! D7 r5 J! A[root@kubernetes-master ~]# kubectl apply -f recommended.yaml. y: Q' ^6 _! I+ _, s/ k% D
  3 j# P4 ^' A' ^
Warning: resource namespaces/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.2 J  \  A2 ]5 h, h" V& ?/ l) @1 B
namespace/kubernetes-dashboard configured3 h2 H- r9 U  ]9 R2 V
Warning: resource serviceaccounts/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
3 ?) x4 ~; |+ F: Wserviceaccount/kubernetes-dashboard configured5 }# u4 s8 B: @7 {" O
Warning: resource services/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.8 O% Y* [5 Q% c1 r  }
service/kubernetes-dashboard configured
8 p2 V4 F0 a2 r8 ~  p& ~8 bWarning: resource secrets/kubernetes-dashboard-certs is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
3 N- a1 n, i. ]1 Tsecret/kubernetes-dashboard-certs configured
6 s. z& W% m8 [7 mWarning: resource secrets/kubernetes-dashboard-csrf is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
/ ^/ a, s; N  @! Dsecret/kubernetes-dashboard-csrf configured
. F1 }0 ]- S$ {Warning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.; b( h& A5 s3 n; S; w7 P/ Z
secret/kubernetes-dashboard-key-holder configured
7 ]6 O- J9 g. CWarning: resource configmaps/kubernetes-dashboard-settings is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.1 g! \* ?, y( s4 R
configmap/kubernetes-dashboard-settings configured
1 w9 ]' T9 x8 n1 D( u4 QWarning: resource roles/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.+ B' M( ~9 V, c: t! @$ ]0 D7 ?$ }
role.rbac.authorization.k8s.io/kubernetes-dashboard configured
) K% H+ n6 e) H; z8 ~4 L/ t$ [Warning: resource clusterroles/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
+ c# [% r2 E+ e5 q2 V$ nclusterrole.rbac.authorization.k8s.io/kubernetes-dashboard configured
: l) \2 z/ U& R4 ^$ O' r, S; zWarning: resource rolebindings/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically., b/ @5 y; q7 |- I
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured
! c! J" g/ K1 P" c' e9 w+ X1 b: dWarning: resource clusterrolebindings/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
$ Y7 Q! N8 {/ m: T: V9 D' L& K4 uclusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured' M2 x7 K7 G6 E6 x: P4 M
Warning: resource deployments/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
" r3 E, @* K; n# ^% _+ [+ `deployment.apps/kubernetes-dashboard configured  Z) H3 O5 E3 q; e* C6 E
Warning: resource services/dashboard-metrics-scraper is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.0 U' {6 Z! P/ J5 V
service/dashboard-metrics-scraper configured! I( I/ D9 {# i
Warning: resource deployments/dashboard-metrics-scraper is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.7 _/ e9 b& Y' _  D, p
deployment.apps/dashboard-metrics-scraper configured7 u/ O( ~0 V; J! g7 s. O" b
OP | Posted 2024-9-17 16:35:24
kubectl command completion
# add to ~/.bashrc
vim ~/.bashrc
# append the following line
source <(kubectl completion bash)

source ~/.bashrc
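On top of completion, a short alias is common; `complete -o default -F __start_kubectl k` is the hook the kubectl completion script documents for making tab completion follow an alias:

```shell
# Append an alias plus its completion hookup to ~/.bashrc.
# Takes effect on the next login shell (or after `source ~/.bashrc`).
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```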
OP | Posted 2024-9-18 14:22:13
Install the calico.yaml network component:
[root@kubernetes-master ~]# curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  184k  100  184k    0     0  98813      0  0:00:01  0:00:01 --:--:-- 98793
[root@kubernetes-master ~]# ls
calico.yaml  cri-dockerd-0.3.2-3.el7.x86_64.rpm  kube-flannel.yml  qemu-guest-agent-1.5.3-ksyun.x86_64.rpm  sudo-1.9.5-3.el7.x86_64.rpm
[root@kubernetes-master ~]# vim calico.yaml
[root@kubernetes-master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
error: resource mapping not found for name: "calico-kube-controllers" namespace: "kube-system" from "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first

Everything except the PodDisruptionBudget was applied; the error appears because the v3.18 manifest targets an older Kubernetes than 1.28.
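The error means the manifest requests an API version the server no longer serves: the v3.18 manifest declares its PodDisruptionBudget as policy/v1beta1, which Kubernetes stopped serving in 1.25. You can confirm what the cluster offers with `kubectl api-versions`; the runnable sketch below uses a heredoc as a stand-in for that command's output on a 1.28 cluster:

```shell
# Check which `policy` API group versions the API server offers.
# The function stands in for `kubectl api-versions` on Kubernetes 1.28;
# on a live cluster, run `kubectl api-versions | grep '^policy/'` instead.
kubectl_api_versions() {
cat <<'EOF'
apps/v1
batch/v1
policy/v1
v1
EOF
}
kubectl_api_versions | grep '^policy/'
# Only policy/v1 is served, so the manifest's policy/v1beta1
# PodDisruptionBudget cannot be applied — use a newer calico release.
```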


OP | Posted 2024-9-18 15:14:52
Delete the pod, kubernetes-dashboard
kubectl describe pod/kubernetes-dashboard-746fbfd67c-8xbmk --namespace=kubernetes-dashboard

kube-flannel           kube-flannel-ds-l68ft                        0/1     CrashLoopBackOff

How to fix:

Check the pod's events and logs: use kubectl describe pod <pod-name> to see the pod's events and logs and find the specific reason it fails to start.

kubectl describe pod kube-flannel-ds-l68ft --namespace=kube-flannel
Name:                 kube-flannel-ds-l68ft
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 kubernetes-node1/172.24.110.183
Start Time:           Tue, 17 Sep 2024 16:33:08 +0800
Labels:               app=flannel
                      controller-revision-hash=c46b99f7f
                      k8s-app=flannel
                      pod-template-generation=2
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   172.24.110.183

Fix the configuration or start command: correct the container's configuration file or startup command based on what the logs say.

Check resource limits: use kubectl top pod <pod-name> to confirm the pod has enough resources to run.

Adjust permissions: make sure the container runs as the correct user and has the file permissions it needs.

Confirm dependent services: make sure every service the pod depends on is started and healthy.

Re-pull the image: use kubectl get pod -o yaml to inspect the pod definition and confirm the image name and tag are correct, then use kubectl delete pod <pod-name> to force the pod to be recreated and the image re-pulled.

kubectl get pod -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

Adjust security policies: based on the log messages, relax the relevant security policy to allow the operations the pod needs.

Fixing this may take several attempts, and after each change the pod must be recreated for it to take effect.

OP | Posted 2024-9-18 16:18:16
The dashboard installed above had problems; reinstall it as follows:

[root@kubernetes-master ~]# wget https://raw.githubusercontent.co ... oy/recommended.yaml
--2024-09-18 16:04:03--  https://raw.githubusercontent.co ... oy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7621 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

100%[==========================================>] 7,621       --.-K/s   in 0.001s

2024-09-18 16:04:04 (7.53 MB/s) - ‘recommended.yaml’ saved [7621/7621]

[root@kubernetes-master ~]# ls
calico.yaml  cri-dockerd-0.3.2-3.el7.x86_64.rpm  kube-flannel.yml  qemu-guest-agent-1.5.3-ksyun.x86_64.rpm  recommended.yaml  recommended.yaml.bal  sudo-1.9.5-3.el7.x86_64.rpm
[root@kubernetes-master ~]# vim recommended.yaml
spec:
  type: NodePort      ## added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32333      ## added
  selector:
    k8s-app: kubernetes-dashboard

      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.1
          imagePullPolicy: Never    ## changed from Always to Never
          ports:
            - containerPort: 8443
              protocol: TCP
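The two manual edits above can also be scripted. A minimal sketch, demonstrated on a stripped-down copy of the Service spec (the real recommended.yaml contains several YAML documents, so a line-based sed must be scoped carefully; a YAML-aware tool such as yq is safer there):

```shell
# Build a scratch copy of the dashboard Service spec and apply the two edits:
# expose it as a NodePort and pin the nodePort to 32333.
cat > /tmp/dashboard-svc.yaml <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF

# Insert "type: NodePort" after the spec: line and a nodePort after targetPort
sed -i -e 's/^spec:/spec:\n  type: NodePort/' \
       -e 's/targetPort: 8443/targetPort: 8443\n      nodePort: 32333/' \
       /tmp/dashboard-svc.yaml

# The image pull policy change is a plain substitution in the Deployment part:
#   sed -i 's/imagePullPolicy: Always/imagePullPolicy: Never/' recommended.yaml
cat /tmp/dashboard-svc.yaml
```

Note the `\n` in the replacement is a GNU sed extension, which is fine on CentOS.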

# Create the dashboard
kubectl create -f recommended.yaml

[root@kubernetes-master ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# List all pods
kubectl get pods --all-namespaces
[root@kubernetes-master ~]# kubectl get pods --namespace kubernetes-dashboard
NAME                                         READY   STATUS             RESTARTS   AGE
dashboard-metrics-scraper-6fdb9d6cdd-nhs4j   0/1     ErrImagePull       0          95s
kubernetes-dashboard-79d57f5458-6kmlb        0/1     ImagePullBackOff   0          95s

# Check the error details:
kubectl describe pod kubernetes-dashboard-79d57f5458-6kmlb --namespace kubernetes-dashboard

Warning  Failed          81s (x2 over 2m10s)  kubelet            Failed to pull image "kubernetesui/dashboard:v2.5.1": Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal   Pulling         53s (x3 over 2m41s)  kubelet            Pulling image "kubernetesui/dashboard:v2.5.1"
Warning  Failed          22s (x3 over 2m10s)  kubelet            Error: ErrImagePull
Warning  Failed          22s                  kubelet            Failed to pull image "kubernetesui/dashboard:v2.5.1": Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp 199.16.156.71:443: i/o timeout

# Based on the errors above, go to the node and try pulling the image manually.
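Since imagePullPolicy was set to Never, the image must already exist on the node. One common workaround for the registry-1.docker.io timeout is to pull through a mirror and retag the image locally. A sketch — the mirror prefix docker.m.daocloud.io is an assumption (mirror availability varies by region and over time), so substitute any registry your nodes can reach:

```shell
# Hypothetical mirror prefix; replace with a registry reachable from your nodes
MIRROR="docker.m.daocloud.io"

# Map an upstream Docker Hub reference to its mirrored name
mirror_ref() {
  echo "${MIRROR}/$1"
}

IMG="kubernetesui/dashboard:v2.5.1"
# On the node this would be:
#   docker pull "$(mirror_ref "$IMG")"
#   docker tag  "$(mirror_ref "$IMG")" "$IMG"
echo "would pull $(mirror_ref "$IMG") and retag as $IMG"
```

After the retag, deleting the Pod lets kubelet find the image locally under its original name.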
kubectl apply -f recommended.yaml

Attachments: recommended.yaml (7.48 KB, dashboard v2.5.1) and recommended.yaml (7.48 KB, dashboard v2.7.0)
OP | Posted 2024-9-24 16:26:34
Step 1: Configure Kernel Modules and Networking
Before setting up Kubernetes, certain kernel modules and sysctl parameters need to be configured to ensure proper networking between containers.

First, load the kernel modules Kubernetes relies on (overlay and br_netfilter):
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup; params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
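After sudo sysctl --system it is worth confirming the three values actually took effect (br_netfilter must be loaded first, or the net.bridge.* keys do not exist in /proc). A sketch of the check, run here against a scratch copy of the conf file so it works anywhere; on a real node set CONF=/etc/sysctl.d/k8s.conf, or read the live values with sysctl net.ipv4.ip_forward:

```shell
# Scratch copy standing in for /etc/sysctl.d/k8s.conf
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Verify each required key is present and set to 1
missing=0
for key in net.bridge.bridge-nf-call-iptables \
           net.bridge.bridge-nf-call-ip6tables \
           net.ipv4.ip_forward; do
  grep -Eq "^${key}[[:space:]]*=[[:space:]]*1" "$CONF" || { echo "$key missing"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "sysctl config OK"
```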
Step 2: Disable swap on all the Nodes
sudo swapoff -a
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
Step 3: Install Containerd Runtime On All The Nodes0 y; W: i9 \/ R4 c% ~( @$ a
sudo apt-get update && sudo apt-get install -y containerd        
7 i9 D  d1 r0 ?( Z; LStep 4: Configure Containerd
) {- Z8 V8 X( psudo mkdir -p /etc/containerd
7 |  \) K" ~) F, z$ G  ?sudo containerd config default | sudo tee /etc/containerd/config.toml
4 r$ {3 q/ S/ z/ O% J$ lsudo sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml2 C' {1 @% L: s5 z7 A# {
sudo systemctl restart containerd        " u$ q. [$ O6 x% Q# o( z" |3 i  M
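The sed above matches the exact 12-space indentation that `containerd config default` currently emits; if that layout ever shifts, it matches nothing and fails silently. A slightly more tolerant variant, demonstrated on a scratch fragment:

```shell
# Scratch fragment standing in for /etc/containerd/config.toml
TOML=$(mktemp)
cat > "$TOML" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
EOF

# Flip SystemdCgroup regardless of indentation, then verify the change landed
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$TOML"
grep -q 'SystemdCgroup = true' "$TOML" && echo "SystemdCgroup enabled" || echo "pattern not found"
```

Verifying with grep after the edit catches the silent-failure case either way.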
Step 5: Install Kubeadm & Kubelet & Kubectl on all Nodes
KUBERNETES_VERSION=1.30

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1 kubeadm=1.30.0-1.1
Step 6: Initialize Cluster
NODENAME=$(hostname -s)
POD_CIDR="10.30.0.0/16"
kubeadm init --pod-network-cidr=$POD_CIDR --node-name $NODENAME

Step 7: Copy Join command to workers
Copy the kubeadm join command printed by kubeadm init, repeat Steps 1 to 5 on each of the other nodes, and then run the join command on them as root.

Step 8: Install CNI Plugin
Finally, install a CNI plugin for the cluster. In this example we install Calico:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Step 9: Final result
You should now be able to run kubectl get nodes; all nodes and all kube-system pods should be Running:
" E! N6 L+ r1 n) K0 Q7 W/ P. @8 O8 s; T# A
5 E* ^0 G$ X4 }# K2 }7 a
0 l9 J% R; n+ X7 X
kubectl get nodes
  c- T, F, Z- x3 m/ |kubectl get pods -A        
. ]% {# z5 x6 z% {- ~
( d: q) J/ ^, C7 ~* P
OP | Posted 2024-12-29 21:16:59
The image list has changed recently:

I1229 21:16:13.799696    2756 version.go:256] remote version is much newer: v1.32.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
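On hosts that cannot reach registry.k8s.io, these images are commonly pulled from a mirror and retagged. A sketch of the name mapping — registry.aliyuncs.com/google_containers is the mirror used elsewhere in this thread; note it flattens the coredns/coredns path to plain coredns:

```shell
MIRROR="registry.aliyuncs.com/google_containers"

# Rewrite an upstream registry.k8s.io reference to its mirror equivalent;
# coredns/coredns is flattened to coredns first.
to_mirror() {
  echo "$1" | sed -e 's#^registry\.k8s\.io/coredns/#registry.k8s.io/#' \
                  -e "s#^registry\.k8s\.io/#${MIRROR}/#"
}

for img in registry.k8s.io/kube-apiserver:v1.28.15 \
           registry.k8s.io/pause:3.9 \
           registry.k8s.io/coredns/coredns:v1.10.1; do
  echo "docker pull $(to_mirror "$img")  # then: docker tag $(to_mirror "$img") $img"
done
```

`kubeadm config images list` prints the full set, so the loop can be fed from it directly.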
OP | Posted 2025-1-1 09:26:26
Using Kubernetes 1.32:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.8.190 --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version=v1.32.0 --service-cidr=172.29.16.0/23 --pod-network-cidr=172.22.16.0/23 --cri-socket=unix:///var/run/cri-dockerd.sock --v=5
I0101 09:17:08.895164    2766 kubelet.go:195] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.32.0
[preflight] Running pre-flight checks
I0101 09:17:08.912311    2766 checks.go:561] validating Kubernetes and kubeadm version
        [WARNING KubernetesVersion]: Kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. Kubernetes version: 1.32.0. Kubeadm version: 1.31.x
I0101 09:17:08.912412    2766 checks.go:166] validating if the firewall is enabled and active
I0101 09:17:08.925133    2766 checks.go:201] validating availability of port 6443
I0101 09:17:08.925553    2766 checks.go:201] validating availability of port 10259
I0101 09:17:08.925679    2766 checks.go:201] validating availability of port 10257
I0101 09:17:08.925766    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0101 09:17:08.925857    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0101 09:17:08.925909    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0101 09:17:08.925954    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0101 09:17:08.925979    2766 checks.go:428] validating if the connectivity type is via proxy or direct
I0101 09:17:08.926074    2766 checks.go:467] validating http connectivity to first IP address in the CIDR
I0101 09:17:08.926142    2766 checks.go:467] validating http connectivity to first IP address in the CIDR
I0101 09:17:08.926178    2766 checks.go:102] validating the container runtime
I0101 09:17:08.927016    2766 checks.go:637] validating whether swap is enabled or not
I0101 09:17:08.927191    2766 checks.go:368] validating the presence of executable crictl
I0101 09:17:08.927293    2766 checks.go:368] validating the presence of executable conntrack
I0101 09:17:08.927331    2766 checks.go:368] validating the presence of executable ip
I0101 09:17:08.927369    2766 checks.go:368] validating the presence of executable iptables
I0101 09:17:08.927405    2766 checks.go:368] validating the presence of executable mount
I0101 09:17:08.927442    2766 checks.go:368] validating the presence of executable nsenter
I0101 09:17:08.927479    2766 checks.go:368] validating the presence of executable ethtool
I0101 09:17:08.927512    2766 checks.go:368] validating the presence of executable tc
I0101 09:17:08.927565    2766 checks.go:368] validating the presence of executable touch
I0101 09:17:08.927605    2766 checks.go:514] running all checks
I0101 09:17:08.941390    2766 checks.go:399] checking whether the given node name is valid and reachable using net.LookupHost
I0101 09:17:08.941828    2766 checks.go:603] validating kubelet version
I0101 09:17:09.013778    2766 checks.go:128] validating if the "kubelet" service is enabled and active
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0101 09:17:09.027024    2766 checks.go:201] validating availability of port 10250
I0101 09:17:09.027156    2766 checks.go:327] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0101 09:17:09.027330    2766 checks.go:201] validating availability of port 2379
I0101 09:17:09.027432    2766 checks.go:201] validating availability of port 2380
I0101 09:17:09.027550    2766 checks.go:241] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0101 09:17:09.030027    2766 images.go:80] WARNING: could not find officially supported version of etcd for Kubernetes v1.32.0, falling back to the nearest etcd version (3.5.15-0)
I0101 09:17:09.030090    2766 checks.go:832] using image pull policy: IfNotPresent
W0101 09:17:09.031052    2766 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
I0101 09:17:09.033771    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.0
I0101 09:17:16.690625    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.0
I0101 09:17:23.575148    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.0
I0101 09:17:29.427958    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-proxy:v1.32.0
I0101 09:17:37.054594    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/coredns:v1.11.3
I0101 09:17:43.574636    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/pause:3.10
I0101 09:17:44.929429    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/etcd:3.5.15-0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0101 09:17:58.489483    2766 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0101 09:17:59.936877    2766 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.29.16.1 192.168.8.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0101 09:18:01.377059    2766 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0101 09:18:03.218799    2766 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0101 09:18:04.332875    2766 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0101 09:18:06.977732    2766 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0101 09:18:10.455557    2766 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0101 09:18:10.855930    2766 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0101 09:18:11.347740    2766 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0101 09:18:11.688152    2766 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0101 09:18:12.190293    2766 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0101 09:18:13.161357    2766 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0101 09:18:13.708236    2766 images.go:80] WARNING: could not find officially supported version of etcd for Kubernetes v1.32.0, falling back to the nearest etcd version (3.5.15-0)
I0101 09:18:13.713834    2766 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0101 09:18:13.714027    2766 manifests.go:103] [control-plane] getting StaticPodSpecs
I0101 09:18:13.714500    2766 certs.go:473] validating certificate period for CA certificate
I0101 09:18:13.714687    2766 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0101 09:18:13.714767    2766 manifests.go:129] [control-plane] adding volume "etc-pki-ca-trust" for component "kube-apiserver"
I0101 09:18:13.714837    2766 manifests.go:129] [control-plane] adding volume "etc-pki-tls-certs" for component "kube-apiserver"
I0101 09:18:13.714857    2766 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0101 09:18:13.716746    2766 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0101 09:18:13.716863    2766 manifests.go:103] [control-plane] getting StaticPodSpecs
I0101 09:18:13.717331    2766 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0101 09:18:13.717415    2766 manifests.go:129] [control-plane] adding volume "etc-pki-ca-trust" for component "kube-controller-manager"
I0101 09:18:13.717453    2766 manifests.go:129] [control-plane] adding volume "etc-pki-tls-certs" for component "kube-controller-manager"
I0101 09:18:13.717498    2766 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0101 09:18:13.717517    2766 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0101 09:18:13.717561    2766 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0101 09:18:13.719110    2766 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0101 09:18:13.719208    2766 manifests.go:103] [control-plane] getting StaticPodSpecs
I0101 09:18:13.719600    2766 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0101 09:18:13.720666    2766 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0101 09:18:13.720760    2766 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.517289565s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 17.003014698s
I0101 09:18:32.522198    2766 kubeconfig.go:665] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0101 09:18:32.524875    2766 kubeconfig.go:738] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I0101 09:18:32.548427    2766 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0101 09:18:32.566210    2766 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0101 09:18:32.584736    2766 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I0101 09:18:32.584775    2766 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/cri-dockerd.sock" to the Node API object "k8s-master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: mkl5ok.ttqgpoybxarwwum8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0101 09:18:32.662880    2766 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0101 09:18:32.663948    2766 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0101 09:18:32.664531    2766 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0101 09:18:32.670237    2766 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0101 09:18:32.722880    2766 request.go:632] Waited for 52.518759ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... c/roles?timeout=10s
I0101 09:18:32.922885    2766 request.go:632] Waited for 193.402714ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... indings?timeout=10s
I0101 09:18:32.928726    2766 kubeletfinalize.go:123] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0101 09:18:32.931276    2766 kubeletfinalize.go:177] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I0101 09:18:33.171057    2766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0101 09:18:33.171183    2766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0101 09:18:33.325030    2766 request.go:632] Waited for 103.44617ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... indings?timeout=10s
[addons] Applied essential addon: CoreDNS
I0101 09:18:33.550063    2766 request.go:632] Waited for 94.465297ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/api/v ... ccounts?timeout=10s
I0101 09:18:33.723795    2766 request.go:632] Waited for 151.789723ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... m/roles?timeout=10s
I0101 09:18:33.922760    2766 request.go:632] Waited for 187.41919ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... indings?timeout=10s
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.190:6443 --token mkl5ok.ttqgpoybxarwwum8 \
        --discovery-token-ca-cert-hash sha256:6951f50d3ba9e40f8d175cab4b9711eeb164aa06969b72b2f12a6d881fea666c
OP | Posted 2025-01-01 19:51:59
Create directories
Create this path (adjust it to your own layout); it will hold the k8s binaries and the image files used later:

mkdir -p /approot1/k8s/{bin,images,pkg,tmp/{ssl,service}}

Disable the firewall
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl disable firewalld"; \
ssh $i "systemctl stop firewalld"; \
done
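The same ssh-per-node loop recurs throughout this post. As a sketch (the NODES variable and the run_all name are my own, not part of the original steps), the loop can be wrapped once and reused; the echo gives a dry-run preview until the ssh line is uncommented:

```shell
# List every node the commands should reach (adjust to your own IPs).
NODES="192.168.91.19 192.168.91.20"

# run_all CMD -- print (and optionally run) CMD on every node in NODES.
run_all() {
  local cmd="$1"
  for ip in $NODES; do
    echo "[$ip] $cmd"        # dry-run preview
    # ssh "$ip" "$cmd"       # uncomment to actually execute
  done
}

run_all "systemctl is-active firewalld"
```

Each later loop then collapses to one line, e.g. run_all "swapoff -a".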
Disable SELinux
Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "setenforce 0"; \
done

Permanently:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config"; \
done

Disable swap
Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "swapoff -a"; \
done

Permanently:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab"; \
done
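The fstab edit above simply comments out any line containing " swap ". A quick way to convince yourself before touching the real file is to run the same sed against a throwaway copy (the sample fstab content below is made up for the demonstration):

```shell
# Try the swap-disabling sed on a scratch copy of an fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# same expression as used on the nodes above
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"

result=$(cat "$fstab")
echo "$result"    # the swap line is commented out, the root line untouched
rm -f "$fstab"
```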
Load kernel modules
Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "modprobe ip_vs"; \
ssh $i "modprobe ip_vs_rr"; \
ssh $i "modprobe ip_vs_wrr"; \
ssh $i "modprobe ip_vs_sh"; \
ssh $i "modprobe nf_conntrack"; \
ssh $i "modprobe nf_conntrack_ipv4"; \
ssh $i "modprobe br_netfilter"; \
ssh $i "modprobe overlay"; \
done

Permanently:

vim /approot1/k8s/tmp/service/k8s-modules.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay

Note: on kernels 4.19 and newer, nf_conntrack_ipv4 has been merged into nf_conntrack, so that modprobe fails there; drop that entry on such systems.

Distribute to all nodes:
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/k8s-modules.conf $i:/etc/modules-load.d/; \
done

Enable the systemd module auto-load service:
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl enable systemd-modules-load"; \
ssh $i "systemctl restart systemd-modules-load"; \
ssh $i "systemctl is-active systemd-modules-load"; \
done

active means the module auto-load service started successfully.
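To double-check that everything listed in k8s-modules.conf actually loaded, a small sketch (the check_modules name is my own) can diff the file against lsmod on a node:

```shell
# check_modules CONF -- report any module listed in CONF that lsmod
# does not show as currently loaded.
check_modules() {
  local conf="$1" m
  while read -r m; do
    [ -z "$m" ] && continue
    if lsmod | awk '{print $1}' | grep -qx "$m"; then
      echo "loaded: $m"
    else
      echo "MISSING: $m"
    fi
  done < "$conf"
}

# run on a node after distribution:
# check_modules /etc/modules-load.d/k8s-modules.conf
```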
Configure kernel parameters
The parameters below apply to 3.x and 4.x kernels.

vim /approot1/k8s/tmp/service/kubernetes.conf
Before pasting, run :set paste inside vim so the pasted content is not mangled (stray comments, broken indentation).

# enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# make iptables process bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
# (the parameter was removed in kernel 4.12, so drop this line on newer kernels)
net.ipv4.tcp_tw_recycle=0
# do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# backlog limit for listening (listen) sockets
net.core.somaxconn=32768
# max tracked connections, default is nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# avoid using swap; it is only touched when the system is close to OOM
vm.swappiness=0
# max number of memory map areas a process may have
vm.max_map_count=655360
# max number of file handles the kernel can allocate
fs.file-max=6553600
# TCP keepalive
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10

Distribute to all nodes:
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/kubernetes.conf $i:/etc/sysctl.d/; \
done

Apply the parameters:
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sysctl -p /etc/sysctl.d/kubernetes.conf"; \
done
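sysctl -p only prints what it applied; to verify afterwards, a sketch like the following (verify_sysctl is my own name for it) parses kubernetes.conf and compares each key against the live value reported by sysctl -n:

```shell
# verify_sysctl CONF -- compare every key=value in CONF against the
# running kernel; comments and blank lines are skipped.
verify_sysctl() {
  local conf="$1" key want got
  grep -Ev '^[[:space:]]*(#|$)' "$conf" | while IFS='=' read -r key want; do
    got=$(sysctl -n "$key" 2>/dev/null)
    if [ "$got" = "$want" ]; then
      echo "ok: $key=$want"
    else
      echo "DIFF: $key want=$want got=${got:-unset}"
    fi
  done
}

# run on a node after sysctl -p:
# verify_sysctl /etc/sysctl.d/kubernetes.conf
```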
Flush iptables rules
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"; \
ssh $i "iptables -P FORWARD ACCEPT"; \
done

Configure the PATH variable
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "echo 'PATH=$PATH:/approot1/k8s/bin' >> $HOME/.bashrc"; \
done
source $HOME/.bashrc
Download the binaries
One node is enough for this step.

GitHub downloads can be slow; alternatively, upload the packages from your local machine into /approot1/k8s/pkg/.

wget -O /approot1/k8s/pkg/kubernetes.tar.gz \
https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz

wget -O /approot1/k8s/pkg/etcd.tar.gz \
https://github.com/etcd-io/etcd/ ... -linux-amd64.tar.gz

Unpack and remove the unneeded files:

cd /approot1/k8s/pkg/
for i in $(ls *.tar.gz);do tar xvf $i && rm -f $i;done
mv kubernetes/server/bin/ kubernetes/
rm -rf kubernetes/{addons,kubernetes-src.tar.gz,LICENSES,server}
rm -f kubernetes/bin/*_tag kubernetes/bin/*.tar
rm -rf etcd-v3.5.1-linux-amd64/Documentation etcd-v3.5.1-linux-amd64/*.md
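After the cleanup, it is worth confirming that the binaries you will need later are present and executable. A sketch (check_bins is my own helper name; the path matches the layout used above):

```shell
# check_bins DIR BIN... -- verify each expected binary exists and is executable.
check_bins() {
  local dir="$1"; shift
  local b rc=0
  for b in "$@"; do
    if [ -x "$dir/$b" ]; then
      echo "ok: $b"
    else
      echo "MISSING: $b"
      rc=1
    fi
  done
  return $rc
}

# check_bins /approot1/k8s/pkg/kubernetes/bin \
#   kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl
```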
Deploy the master node
Create the CA root certificate
wget -O /approot1/k8s/bin/cfssl https://github.com/cloudflare/cf ... l_1.6.1_linux_amd64
wget -O /approot1/k8s/bin/cfssljson https://github.com/cloudflare/cf ... n_1.6.1_linux_amd64
chmod +x /approot1/k8s/bin/*
vim /approot1/k8s/tmp/ssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
vim /approot1/k8s/tmp/ssl/ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
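cfssljson -bare ca drops ca.pem, ca-key.pem and ca.csr into the current directory. A quick sanity check with openssl (show_cert is just a convenience wrapper I am assuming here) confirms the subject and the validity window before the CA is used to sign anything:

```shell
# show_cert FILE -- print a certificate's subject and validity period.
show_cert() {
  openssl x509 -in "$1" -noout -subject -dates
}

# after generating the CA:
# show_cert ca.pem    # expect CN=kubernetes and a far-future notAfter
```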
Deploy the etcd component
Create the etcd certificate
vim /approot1/k8s/tmp/ssl/etcd-csr.json
Replace 192.168.91.19 with your own IP here; don't copy-paste blindly.

Mind the JSON formatting.

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Manage etcd with systemd
vim /approot1/k8s/tmp/service/kube-etcd.service.192.168.91.19
Replace 192.168.91.19 with your own IP here; don't copy-paste blindly.

etcd parameters:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/approot1/k8s/data/etcd
ExecStart=/approot1/k8s/bin/etcd \
  --name=etcd-192.168.91.19 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.91.19:2380 \
  --listen-peer-urls=https://192.168.91.19:2380 \
  --listen-client-urls=https://192.168.91.19:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.91.19:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd-192.168.91.19=https://192.168.91.19:2380 \
  --initial-cluster-state=new \
  --data-dir=/approot1/k8s/data/etcd \
  --wal-dir= \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --auto-compaction-mode=periodic \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
Distribute certificates and create the required paths
For multiple nodes, append the extra IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP rather than copying blindly.

Make sure the directories match your own layout; if yours differ from mine, adjust them, or the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -m 700 -p /approot1/k8s/data/etcd"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,etcd*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-etcd.service.$i $i:/etc/systemd/system/kube-etcd.service; \
scp /approot1/k8s/pkg/etcd-v3.5.1-linux-amd64/etcd* $i:/approot1/k8s/bin/; \
done

Start the etcd service
For multiple nodes, append the extra IPs after 192.168.91.19, separated by spaces (again, use your own IPs).

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-etcd"; \
ssh $i "systemctl restart kube-etcd --no-block"; \
ssh $i "systemctl is-active kube-etcd"; \
done

activating means etcd is still starting up; wait a moment and re-run: for i in 192.168.91.19;do ssh $i "systemctl is-active kube-etcd";done

active means etcd started successfully. With a multi-node etcd cluster it is normal for one node not to return active; verify the cluster as follows instead.

For multiple nodes, append the extra IPs after 192.168.91.19, separated by spaces (again, use your own IPs).

for i in 192.168.91.19;do \
ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
        --endpoints=https://${i}:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/kubernetes/ssl/etcd.pem \
        --key=/etc/kubernetes/ssl/etcd-key.pem \
        endpoint health"; \
done

https://192.168.91.19:2379 is healthy: successfully committed proposal: took = 7.135668ms

Output like the above, containing successfully, means the node is healthy.
Deploy the apiserver component
Create the apiserver certificate
vim /approot1/k8s/tmp/ssl/kubernetes-csr.json
Replace 192.168.91.19 with your own IP here; don't copy-paste blindly.

Mind the JSON formatting.

10.88.0.1 is the cluster's service IP; it must not overlap any existing network, or you will get conflicts.

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19",
    "10.88.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
Create the metrics-server certificate
vim /approot1/k8s/tmp/ssl/metrics-server-csr.json
{
  "CN": "aggregator",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
Manage apiserver with systemd
vim /approot1/k8s/tmp/service/kube-apiserver.service.192.168.91.19
Replace 192.168.91.19 with your own IP here; don't copy-paste blindly.

The --service-cluster-ip-range network must be the same one that contains the 10.88.0.1 address from kubernetes-csr.json.

--etcd-servers must list every etcd node if etcd is multi-node.

apiserver parameters:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/approot1/k8s/bin/kube-apiserver \
  --allow-privileged=true \
  --anonymous-auth=false \
  --api-audiences=api,istio-ca \
  --authorization-mode=Node,RBAC \
  --bind-address=192.168.91.19 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.91.19:2379 \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
  --secure-port=6443 \
  --service-account-issuer=https://kubernetes.default.svc \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --service-cluster-ip-range=10.88.0.0/16 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
  --enable-aggregator-routing=true \
  --v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute certificates and create the required paths
For multiple nodes, append the extra IPs after 192.168.91.19, separated by spaces (use your own IPs, don't copy blindly).

Make sure the directories match your own layout; adjust anything that differs, or the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kubernetes*.pem,metrics-server*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-apiserver.service.$i $i:/etc/systemd/system/kube-apiserver.service; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-apiserver $i:/approot1/k8s/bin/; \
done

Start the apiserver service
For multiple nodes, append the extra IPs after 192.168.91.19, separated by spaces (use your own IPs).

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-apiserver"; \
ssh $i "systemctl restart kube-apiserver --no-block"; \
ssh $i "systemctl is-active kube-apiserver"; \
done

activating means the apiserver is still starting; wait a moment, then re-run: for i in 192.168.91.19;do ssh $i "systemctl is-active kube-apiserver";done

active means the apiserver started successfully.
curl -k --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/kubernetes.pem \
--key /etc/kubernetes/ssl/kubernetes-key.pem \
https://192.168.91.19:6443/api

A response like the following means the apiserver is running normally:

{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.91.19:6443"
    }
  ]
}
List all of the kinds (object types) known to k8s:

curl -s -k --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/kubernetes.pem \
--key /etc/kubernetes/ssl/kubernetes-key.pem \
https://192.168.91.19:6443/api/v1/ | grep kind | sort -u
  "kind": "APIResourceList",
      "kind": "Binding",
      "kind": "ComponentStatus",
      "kind": "ConfigMap",
      "kind": "Endpoints",
      "kind": "Event",
      "kind": "Eviction",
      "kind": "LimitRange",
      "kind": "Namespace",
      "kind": "Node",
      "kind": "NodeProxyOptions",
      "kind": "PersistentVolume",
      "kind": "PersistentVolumeClaim",
      "kind": "Pod",
      "kind": "PodAttachOptions",
      "kind": "PodExecOptions",
      "kind": "PodPortForwardOptions",
      "kind": "PodProxyOptions",
      "kind": "PodTemplate",
      "kind": "ReplicationController",
      "kind": "ResourceQuota",
      "kind": "Scale",
      "kind": "Secret",
      "kind": "Service",
      "kind": "ServiceAccount",
      "kind": "ServiceProxyOptions",
      "kind": "TokenRequest",
Configure kubectl administration
Create the admin certificate
vim /approot1/k8s/tmp/ssl/admin-csr.json
{
  "CN": "admin",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
Create the kubeconfig
Set the cluster parameters.

--server is the apiserver address: use your own IP and the port given by --secure-port in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated config.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kubectl.kubeconfig

Set the client credentials:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig

Set the context:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig

Set the default context:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
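The four kubectl config calls above produce a self-contained kubeconfig. Its shape can be sanity-checked without a running cluster; this sketch (check_kubeconfig is my own name) greps the fields those calls are supposed to set, namely an https:// server URL, embedded certificate data, and the default context:

```shell
# check_kubeconfig FILE -- quick sanity checks on a generated kubeconfig.
check_kubeconfig() {
  local f="$1"
  grep -q 'server: https://' "$f" && echo "ok: https server" || echo "BAD: server has no https://"
  grep -q 'certificate-authority-data:' "$f" && echo "ok: CA embedded" || echo "BAD: CA not embedded"
  grep -q 'current-context: kubernetes' "$f" && echo "ok: default context" || echo "BAD: no default context"
}

# check_kubeconfig kubectl.kubeconfig
```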
Distribute the kubeconfig to all master nodes
For multiple nodes, append the extra IPs after 192.168.91.19, separated by spaces (use your own IPs, don't copy blindly).

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p $HOME/.kube"; \
scp /approot1/k8s/pkg/kubernetes/bin/kubectl $i:/approot1/k8s/bin/; \
ssh $i "echo 'source <(kubectl completion bash)' >> $HOME/.bashrc"; \
scp /approot1/k8s/tmp/ssl/kubectl.kubeconfig $i:$HOME/.kube/config; \
done
Deploy the controller-manager component
Create the controller-manager certificate
vim /approot1/k8s/tmp/ssl/kube-controller-manager-csr.json
Replace 192.168.91.19 with your own IP here; don't copy-paste blindly.

Mind the JSON formatting.

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-controller-manager",
        "OU": "System"
      }
    ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Create the kubeconfig
Set the cluster parameters.

--server is the apiserver address: use your own IP and the port given by --secure-port in the service file. Be sure to include the https:// scheme, otherwise the generated config will not reach the apiserver.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

Set the client credentials:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

Set the context:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

Set the default context:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
Manage controller-manager with systemd
vim /approot1/k8s/tmp/service/kube-controller-manager.service
Change 192.168.91.19 to your own IP here; do not copy and paste blindly.

The --service-cluster-ip-range network must be the same network as the 10.88.0.1 address in kubernetes-csr.json.

--cluster-cidr is the network the pods run in; it must differ from the --service-cluster-ip-range network and from any existing network, to avoid conflicts.
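The no-overlap requirement between the two ranges can be checked mechanically. A sketch using bash integer math; ip_to_int and cidrs_overlap are hypothetical helper names, and the example CIDRs are the ones used in this guide:

```shell
# A sketch: check that two IPv4 CIDRs do not overlap, e.g. --cluster-cidr
# against --service-cluster-ip-range. ip_to_int and cidrs_overlap are
# hypothetical helpers using bash integer math; the example ranges are the
# ones used in this guide.
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

cidrs_overlap() {                      # prints "overlap" or "ok"
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local p=$(( p1 < p2 ? p1 : p2 ))     # compare on the shorter prefix
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  if [ $(( $(ip_to_int "$n1") & mask )) -eq $(( $(ip_to_int "$n2") & mask )) ]; then
    echo overlap
  else
    echo ok
  fi
}

cidrs_overlap 172.20.0.0/16 10.88.0.0/16   # pod vs service range: prints "ok"
```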
controller-manager parameters:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/approot1/k8s/bin/kube-controller-manager \
  --bind-address=0.0.0.0 \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.88.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths
For multiple nodes, just append each node's IP after 192.168.91.19, separated by spaces. Remember to change 192.168.91.19 to your own IP; do not copy blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, or the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/kube-controller-manager.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-controller-manager.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-controller-manager $i:/approot1/k8s/bin/; \
done
Start the controller-manager service
For multiple nodes, just append each node's IP after 192.168.91.19, separated by spaces; change 192.168.91.19 to your own IP and do not copy blindly.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-controller-manager"; \
ssh $i "systemctl restart kube-controller-manager --no-block"; \
ssh $i "systemctl is-active kube-controller-manager"; \
done
If this returns activating, controller-manager is still starting; wait a moment, then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-controller-manager";done again.

If it returns active, controller-manager started successfully.
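Instead of re-running the is-active loop by hand, the wait can be scripted. A sketch; wait_active is a hypothetical helper, and the probe command is passed in as a string so the helper can also be exercised without systemd:

```shell
# A sketch: poll "systemctl is-active" until a unit reports active, rather
# than re-running the for loop by hand. wait_active is a hypothetical helper;
# the probe command is injectable so it can be tested without systemd.
wait_active() {
  local probe=$1 tries=${2:-30} _i
  for _i in $(seq "$tries"); do
    [ "$($probe)" = active ] && { echo active; return 0; }
    sleep 1
  done
  echo timeout
  return 1
}

# On a node:
# wait_active "systemctl is-active kube-controller-manager"
```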
Deploy the scheduler component
Create the scheduler certificate
vim /approot1/k8s/tmp/ssl/kube-scheduler-csr.json
Change 192.168.91.19 to your own IP; do not copy and paste blindly.

Mind the JSON format:

{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Create the kubeconfig certificate
Set the cluster parameters

--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not reach the apiserver with the generated certificate.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-scheduler.kubeconfig
Set the client credentials:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
Set the context:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
Set the default context:

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
Manage scheduler with systemd
vim /approot1/k8s/tmp/service/kube-scheduler.service
scheduler parameters:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/approot1/k8s/bin/kube-scheduler \
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --bind-address=0.0.0.0 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths
For multiple nodes, just append each node's IP after 192.168.91.19, separated by spaces; change 192.168.91.19 to your own IP and do not copy blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, or the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kube-scheduler.kubeconfig} $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-scheduler.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-scheduler $i:/approot1/k8s/bin/; \
done
Start the scheduler service
For multiple nodes, just append each node's IP after 192.168.91.19, separated by spaces; change 192.168.91.19 to your own IP and do not copy blindly.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-scheduler"; \
ssh $i "systemctl restart kube-scheduler --no-block"; \
ssh $i "systemctl is-active kube-scheduler"; \
done
If this returns activating, the scheduler is still starting; wait a moment, then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-scheduler";done again.

If it returns active, the scheduler started successfully.
Deploy the worker nodes
Deploy the containerd component
Download the binaries
When downloading containerd from GitHub, pick the file whose name starts with cri-containerd-cni. That package bundles containerd together with the crictl management tool and the cni network plugins, and the systemd service file, config.toml, crictl.yaml and cni config files are all pre-configured; a few small edits and it is ready to use.

Although cri-containerd-cni also ships runc, it is missing dependencies, so download a fresh runc from the runc GitHub releases anyway.

wget -O /approot1/k8s/pkg/containerd.tar.gz \
https://github.com/containerd/co ... -linux-amd64.tar.gz
wget -O /approot1/k8s/pkg/runc https://github.com/opencontainer ... d/v1.0.3/runc.amd64
mkdir /approot1/k8s/pkg/containerd
cd /approot1/k8s/pkg/
for i in $(ls *containerd*.tar.gz);do tar xvf $i -C /approot1/k8s/pkg/containerd && rm -f $i;done
chmod +x /approot1/k8s/pkg/runc
mv /approot1/k8s/pkg/containerd/usr/local/bin/{containerd,containerd-shim*,crictl,ctr} /approot1/k8s/pkg/containerd/
mv /approot1/k8s/pkg/containerd/opt/cni/bin/{bridge,flannel,host-local,loopback,portmap} /approot1/k8s/pkg/containerd/
rm -rf /approot1/k8s/pkg/containerd/{etc,opt,usr}
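Before distributing, it is worth confirming the unpack and mv steps left the expected binaries in place. A sketch; check_bins is a hypothetical helper, and the paths follow this guide's layout:

```shell
# A sketch: after unpacking, confirm the expected binaries are present and
# executable before distributing them. check_bins is a hypothetical helper;
# the paths in the usage lines follow this guide's layout.
check_bins() {
  local dir=$1 b missing=0; shift
  for b in "$@"; do
    [ -x "$dir/$b" ] || { echo "missing or not executable: $b"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all binaries present"
}

# check_bins /approot1/k8s/pkg/containerd containerd crictl ctr
# check_bins /approot1/k8s/pkg runc
```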
Manage containerd with systemd
vim /approot1/k8s/tmp/service/containerd.service
Mind where the binary files are stored.

If the runc binary is not under /usr/bin/, the unit needs an Environment parameter that adds the runc directory to PATH; otherwise, when k8s starts a pod it will fail with exec: "runc": executable file not found in $PATH: unknown

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
Environment="PATH=$PATH:/approot1/k8s/bin"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/approot1/k8s/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
Configure the containerd config file
vim /approot1/k8s/tmp/service/config.toml
root is the container storage path; point it at a path with plenty of disk space.

bin_dir is where the containerd service and cni plugins are stored.

sandbox_image is the pause image name and tag.
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/approot1/data/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/approot1/k8s/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = "/etc/cni/net.d/cni-default.conf"
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
Configure the crictl management tool
vim /approot1/k8s/tmp/service/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
Configure the cni network plugin
vim /approot1/k8s/tmp/service/cni-default.conf
The subnet parameter must match the controller-manager --cluster-cidr parameter.

{
        "name": "mynet",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "bridge": "mynet0",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": {
                "type": "host-local",
                "subnet": "172.20.0.0/16"
        }
}
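The subnet/--cluster-cidr agreement can also be checked mechanically. A sketch; subnets_match is a hypothetical helper, and both file paths are passed in so it works on any layout:

```shell
# A sketch: confirm the "subnet" in cni-default.conf equals --cluster-cidr in
# the controller-manager unit file. subnets_match is a hypothetical helper;
# both file paths are arguments, so it works on any layout.
subnets_match() {
  local cni_conf=$1 cm_service=$2 cni cm
  cni=$(grep -o '"subnet":[[:space:]]*"[0-9./]*"' "$cni_conf" | grep -o '[0-9.]*/[0-9]*')
  cm=$(grep -o '\-\-cluster-cidr=[0-9./]*' "$cm_service" | cut -d= -f2)
  [ -n "$cni" ] && [ "$cni" = "$cm" ] && echo match || echo MISMATCH
}

# subnets_match /etc/cni/net.d/cni-default.conf /etc/systemd/system/kube-controller-manager.service
```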
Distribute the config files and create the required paths
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /etc/containerd"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/cni/net.d"; \
scp /approot1/k8s/tmp/service/containerd.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/config.toml $i:/etc/containerd/; \
scp /approot1/k8s/tmp/service/cni-default.conf $i:/etc/cni/net.d/; \
scp /approot1/k8s/tmp/service/crictl.yaml $i:/etc/; \
scp /approot1/k8s/pkg/containerd/* $i:/approot1/k8s/bin/; \
scp /approot1/k8s/pkg/runc $i:/approot1/k8s/bin/; \
done
Start the containerd service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable containerd"; \
ssh $i "systemctl restart containerd --no-block"; \
ssh $i "systemctl is-active containerd"; \
done
If this returns activating, containerd is still starting; wait a moment, then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active containerd";done again.

If it returns active, containerd started successfully.
Import the pause image
ctr has a quirk when importing images: for the image to be usable by k8s, you must add the -n k8s.io flag, and it must come before the subcommand, as in ctr -n k8s.io image import <xxx.tar>. Writing ctr image import <xxx.tar> -n k8s.io fails with ctr: flag provided but not defined: -n. This behavior is a bit odd and takes some getting used to.

If the image is imported without -n k8s.io, kubelet will re-pull the pause container when starting a pod, and if the configured image registry does not have an image with that tag, it will fail.

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/pause-v3.6.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/pause-v3.6.tar && rm -f /tmp/pause-v3.6.tar"; \
done
Check the image:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep pause"; \
done
Deploy the kubelet component
Create the kubelet certificates
vim /approot1/k8s/tmp/ssl/kubelet-csr.json.192.168.91.19
Change 192.168.91.19 to your own IP; do not copy and paste blindly. Create one JSON file per node, and change the IPs inside each file to that worker node's IP; do not reuse the same ones.

{
    "CN": "system:node:192.168.91.19",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:nodes",
        "OU": "System"
      }
    ]
}
for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubelet-csr.json.$i | cfssljson -bare kubelet.$i; \
done
Create the kubeconfig certificates
Set the cluster parameters

--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not reach the apiserver with the generated certificate.

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Set the client credentials:

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:node:$i \
--client-certificate=kubelet.$i.pem \
--client-key=kubelet.$i-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Set the context:

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=system:node:$i \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Set the default context:

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Configure the kubelet config file
vim /approot1/k8s/tmp/service/config.yaml
Mind the clusterDNS IP: it must be in the same network as the apiserver --service-cluster-ip-range parameter, but different from the k8s service IP. The k8s service usually takes the first IP of the range, and clusterDNS the second.
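Those two addresses can be derived from the service CIDR instead of picked by hand. A sketch with bash integer math, IPv4 only; nth_ip is a hypothetical helper, and 10.88.0.0/16 is the service range used in this guide:

```shell
# A sketch: derive the conventional service IP (first host address) and
# clusterDNS IP (second host address) from the service CIDR. nth_ip is a
# hypothetical bash helper; 10.88.0.0/16 is this guide's service range.
nth_ip() {
  local net=${1%/*} n=$2
  local IFS=. a b c d
  read -r a b c d <<< "$net"
  local v=$(( (a << 24) + (b << 16) + (c << 8) + d + n ))
  echo "$(( (v >> 24) & 255 )).$(( (v >> 16) & 255 )).$(( (v >> 8) & 255 )).$(( v & 255 ))"
}

nth_ip 10.88.0.0/16 1   # kubernetes service IP: 10.88.0.1
nth_ip 10.88.0.0/16 2   # clusterDNS IP: 10.88.0.2
```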
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.88.0.2
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 300Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth
healthzBindAddress: 0.0.0.0
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /etc/kubernetes/ssl/kubelet.pem
tlsPrivateKeyFile: /etc/kubernetes/ssl/kubelet-key.pem
Manage kubelet with systemd

vim /approot1/k8s/tmp/service/kubelet.service.192.168.91.19

Replace 192.168.91.19 with your own IP; do not copy-paste blindly. Create one service file per node, and set the IP inside each service file to that worker node's IP. Do not reuse the same IP twice.

The --container-runtime parameter defaults to docker. If you use a runtime other than docker, set it to remote and point the --container-runtime-endpoint parameter at the runtime's sock file.

kubelet parameters:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/approot1/k8s/data/kubelet
ExecStart=/approot1/k8s/bin/kubelet \
  --config=/approot1/k8s/data/kubelet/config.yaml \
  --cni-bin-dir=/approot1/k8s/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --hostname-override=192.168.91.19 \
  --image-pull-progress-deadline=5m \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.6 \
  --root-dir=/approot1/k8s/data/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
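Since every node's service file differs only in the IP, the per-node copies can be rendered from one template with sed instead of being edited by hand. This is a sketch: NODE_IP is a placeholder token of my own choosing, and the template below is a trimmed stand-in for the full unit file above.

```shell
# Render one kubelet.service per node from a single template.
outdir=$(mktemp -d)
tmpl="$outdir/kubelet.service.tmpl"

# Trimmed template; in practice this would be the full unit file with
# the node IP replaced by the NODE_IP placeholder.
cat > "$tmpl" <<'EOF'
ExecStart=/approot1/k8s/bin/kubelet \
  --hostname-override=NODE_IP \
  --v=2
EOF

for ip in 192.168.91.19 192.168.91.20; do
  sed "s/NODE_IP/$ip/g" "$tmpl" > "$outdir/kubelet.service.$ip"
done

# Show the substituted flag in each rendered file
grep -- '--hostname-override=' "$outdir"/kubelet.service.192.168.91.*
```

This removes the "don't reuse the same IP twice" failure mode, since each file is generated from its node's IP.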
Distribute certificates and create the required paths

For additional nodes, append their IPs after 192.168.91.19, separated by spaces. Replace 192.168.91.19 with your own IPs; do not copy blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /approot1/k8s/data/kubelet"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/ssl/kubelet.$i.pem $i:/etc/kubernetes/ssl/kubelet.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.$i-key.pem $i:/etc/kubernetes/ssl/kubelet-key.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.kubeconfig.$i $i:/etc/kubernetes/kubelet.kubeconfig; \
scp /approot1/k8s/tmp/service/kubelet.service.$i $i:/etc/systemd/system/kubelet.service; \
scp /approot1/k8s/tmp/service/config.yaml $i:/approot1/k8s/data/kubelet/; \
scp /approot1/k8s/pkg/kubernetes/bin/kubelet $i:/approot1/k8s/bin/; \
done
Start the kubelet service

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kubelet"; \
ssh $i "systemctl restart kubelet --no-block"; \
ssh $i "systemctl is-active kubelet"; \
done

If this returns activating, kubelet is still starting; wait a moment and run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kubelet";done again.

If it returns active, kubelet started successfully.
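Instead of re-running the is-active check by hand, the wait can be wrapped in a small retry helper. This is a sketch under my own naming (wait_active is not part of systemd or the steps above); it polls any command until it succeeds or the attempt limit is hit.

```shell
# wait_active <max_tries> <command...>
# Re-runs the command up to max_tries times, one second apart,
# returning 0 as soon as it succeeds and 1 if it never does.
wait_active() {
  tries=$1; shift
  n=0
  while [ "$n" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    n=$((n + 1))
    sleep 1
  done
  return 1
}

# On a real cluster (assumed reachable over ssh, as in the steps above):
# for i in 192.168.91.19 192.168.91.20; do
#   wait_active 30 ssh $i "systemctl is-active kubelet" || echo "$i: kubelet not active"
# done
```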
Check whether the nodes are Ready

kubectl get node

Expected output looks like the following; a STATUS of Ready means the node is healthy:

NAME            STATUS   ROLES    AGE   VERSION
192.168.91.19   Ready    <none>   20m   v1.23.3
192.168.91.20   Ready    <none>   20m   v1.23.3
Deploy the kube-proxy component

Create the kube-proxy certificate

vim /approot1/k8s/tmp/ssl/kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-proxy",
        "OU": "System"
      }
    ]
}

cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Create the kubeconfig

Set the cluster parameters

--server is the apiserver's address. Change it to your own IP and to the port set by the --secure-port parameter in the apiserver service file. Be sure to include the https:// scheme; without it, kubectl cannot reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-proxy.kubeconfig

Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
--kubeconfig=kube-proxy.kubeconfig
Configure the kube-proxy configuration file

vim /approot1/k8s/tmp/service/kube-proxy-config.yaml.192.168.91.19

Replace 192.168.91.19 with your own IP; do not copy-paste blindly. Create one config file per node, and set the IP inside each file to that worker node's IP. Do not reuse the same IP twice.

The clusterCIDR parameter must match the controller-manager's --cluster-cidr parameter.

hostnameOverride must match the kubelet's --hostname-override parameter, otherwise you will get node not found errors.

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "172.20.0.0/16"
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "192.168.91.19"
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
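The clusterCIDR consistency rule above can be checked mechanically before distributing the files. This is a sketch: the two files are stubbed with one-line samples here, so point proxy_cfg and cm_service at your real kube-proxy config and controller-manager service file.

```shell
# Stub files standing in for the real configs (assumption for the demo).
proxy_cfg=$(mktemp)
cm_service=$(mktemp)
printf 'clusterCIDR: "172.20.0.0/16"\n' > "$proxy_cfg"
printf '  --cluster-cidr=172.20.0.0/16 \\\n' > "$cm_service"

# Extract the CIDR from each file and compare.
proxy_cidr=$(sed -n 's/^clusterCIDR: "\(.*\)"$/\1/p' "$proxy_cfg")
cm_cidr=$(sed -n 's/.*--cluster-cidr=\([^ \\]*\).*/\1/p' "$cm_service")

if [ "$proxy_cidr" = "$cm_cidr" ]; then
  echo "clusterCIDR consistent: $proxy_cidr"
else
  echo "MISMATCH: kube-proxy=$proxy_cidr controller-manager=$cm_cidr" >&2
fi
```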
Manage kube-proxy with systemd

vim /approot1/k8s/tmp/service/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic.
## With --cluster-cidr or --masquerade-all set,
## kube-proxy SNATs requests that access a Service IP.
WorkingDirectory=/approot1/k8s/data/kube-proxy
ExecStart=/approot1/k8s/bin/kube-proxy \
  --config=/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute certificates and create the required paths

For additional nodes, append their IPs after 192.168.91.19, separated by spaces. Replace 192.168.91.19 with your own IPs; do not copy blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /approot1/k8s/data/kube-proxy"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
scp /approot1/k8s/tmp/ssl/kube-proxy.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-proxy.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/kube-proxy-config.yaml.$i $i:/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-proxy $i:/approot1/k8s/bin/; \
done

Start the kube-proxy service

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-proxy"; \
ssh $i "systemctl restart kube-proxy --no-block"; \
ssh $i "systemctl is-active kube-proxy"; \
done

If this returns activating, kube-proxy is still starting; wait a moment and run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kube-proxy";done again.

If it returns active, kube-proxy started successfully.
Deploy the flannel component

flannel github

Configure the flannel yaml file

vim /approot1/k8s/tmp/service/flannel.yaml

The Network parameter in net-conf.json must match the controller-manager's --cluster-cidr parameter.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.20.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Configure the flannel cni network config file

vim /approot1/k8s/tmp/service/10-flannel.conflist

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
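A malformed conflist silently breaks CNI, so it is worth syntax-checking the JSON before distributing it. This sketch writes the config above to a temp file and validates it with python3's stdlib JSON parser (any JSON validator works; python3 is an assumption about the admin host).

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel",
      "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

# json.tool exits non-zero on a parse error, so this gates distribution.
if python3 -m json.tool "$conf" > /dev/null; then
  echo "10-flannel.conflist: valid JSON"
fi
```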
Import the flannel image

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/flannel-v0.15.1.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/flannel-v0.15.1.tar && rm -f /tmp/flannel-v0.15.1.tar"; \
done

Check the image

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep flannel"; \
done

Distribute the flannel cni network config file

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "rm -f /etc/cni/net.d/10-default.conf"; \
scp /approot1/k8s/tmp/service/10-flannel.conflist $i:/etc/cni/net.d/; \
done

After distributing the cni config, nodes will briefly show NotReady; wait until all nodes are back to Ready before running the flannel component.

Run the flannel component in k8s

kubectl apply -f /approot1/k8s/tmp/service/flannel.yaml

Check whether the flannel pods are running

kubectl get pod -n kube-system | grep flannel

Expected output is similar to the following.

flannel is a DaemonSet, so its pods live and die with the nodes: k8s runs one flannel pod per node, and when a node is deleted its flannel pod is deleted with it.

kube-flannel-ds-86rrv   1/1     Running       0          8m54s
kube-flannel-ds-bkgzx   1/1     Running       0          8m53s

On SUSE 12 the pods may show Init:CreateContainerError. Run kubectl describe pod -n kube-system <flannel_pod_name> to see the cause. If the error is Error: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH, find apparmor_parser with which apparmor_parser, symlink it into the directory containing the kubelet binary, then restart the pod. Note: this symlink must be created on every node running flannel.
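The apparmor_parser fix above amounts to one symlink per node. This is a sketch demonstrated with temp paths so it can run anywhere; on a real node the source would be "$(which apparmor_parser)" and the target directory /approot1/k8s/bin (the kubelet bin dir in this layout).

```shell
src=$(mktemp)        # stand-in for the real apparmor_parser binary
bindir=$(mktemp -d)  # stand-in for /approot1/k8s/bin

# Link the binary into the directory holding the kubelet binary so it
# appears on kubelet's PATH.
ln -s "$src" "$bindir/apparmor_parser"
ls -l "$bindir/apparmor_parser"
```

Repeat the ln -s on every flannel node, then delete the failing pods so the DaemonSet recreates them.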
Deploy the coredns component

Configure the coredns yaml file

vim /approot1/k8s/tmp/service/coredns.yaml

The clusterIP parameter must match the clusterDNS parameter in the kubelet config file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: docker.io/coredns/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
" l% N* {: h" D2 K) ?2 M  U          httpGet:
* b9 U8 y* y* d6 s0 U/ n            path: /health
- K4 Q8 b5 X+ W3 q$ `  y& D+ \4 Q            port: 8080
: ~4 t" u; n$ c! H            scheme: HTTP
% K; Q( p$ ]; E" C          initialDelaySeconds: 60
0 S. Q! f" ~* J* m% w% t          timeoutSeconds: 5- p/ k' S+ d6 C7 @% B' N0 R+ Q7 ^
          successThreshold: 14 w. J3 I! T* u5 K: }. v8 R
          failureThreshold: 5/ m0 y5 `1 n' `0 ~1 m+ L& U$ b9 x9 i
        readinessProbe:6 G( x) i9 L- Y' D7 {; R; J
          httpGet:
5 J' N4 l# |0 @: x& {            path: /ready
( n* R; [3 {; N            port: 8181
6 F+ t8 f9 P; ^: m9 z9 x: W. k# f9 L            scheme: HTTP) u) p3 }* a8 V# ?/ E9 }
        securityContext:. N; o* m! Y+ a1 U
          allowPrivilegeEscalation: false
+ _7 j0 e0 c% O3 ~5 W0 V          capabilities:; v8 p( h. f; z5 A" ~( {2 L9 y
            add:
+ W! ?$ }8 d2 i. V- r            - NET_BIND_SERVICE
+ j" E8 k$ U/ G- J            drop:7 C& a( m1 F* z. R+ D
            - all
5 b- X. g. D! F2 T          readOnlyRootFilesystem: true
  f+ C) [! I: ^2 S      dnsPolicy: Default
9 p  _' Y2 q# H  w! u: A- U      volumes:
) A& q0 z( ]/ E0 Q2 o* |; P        - name: config-volume3 ~6 Y3 |: l/ a7 ~( a
          configMap:4 U9 w1 r9 Z1 \7 _
            name: coredns5 e' P: E+ O$ k7 y& W
            items:
# i/ W8 J! h( K$ U            - key: Corefile% b/ f: o: \% l) L# G, }2 [& L$ \
              path: Corefile
* V3 J/ m! W% j0 \$ N, `* W---
% E, g9 {0 i  Y0 eapiVersion: v1* M4 ]/ g/ l, z/ j! o1 T- w6 j
kind: Service% m, F* Z  i1 q% C. d6 {
metadata:
; y/ O! t: l$ X0 Q9 g1 k  name: kube-dns2 p5 S% M# h) j# B6 w1 L2 k1 u6 H! q1 \
  namespace: kube-system5 q7 e2 L; N4 J% p
  annotations:, \1 V  @: W9 v. g$ ]9 M# v  \$ l2 K
    prometheus.io/port: "9153"
7 d% A" Q! g/ O$ Z* z; E    prometheus.io/scrape: "true"$ o- R# r; g* A6 R2 R4 A8 d
  labels:' X" N; U5 g. D; h) q
    k8s-app: kube-dns
6 g: k; K. w: i% Y' A    kubernetes.io/cluster-service: "true"
; L5 J% M. o4 {2 N5 X" n7 h% g    addonmanager.kubernetes.io/mode: Reconcile
: u2 r) f& J( L5 _: ^# Z    kubernetes.io/name: "CoreDNS", _9 E4 c/ r3 ~
spec:, B5 `: D8 H/ ^; L
  selector:3 i* X* x* v3 u) m7 C1 Q9 T! R2 ~
    k8s-app: kube-dns
& O- J: t6 {6 h9 s, C) U  clusterIP: 10.88.0.2
; U- F0 w# \  c  ports:
4 L& c; i; _8 i' T0 R8 m7 @  - name: dns) P9 h1 p0 g& V9 g, G
    port: 53
1 O/ h+ w# d% H( r4 k8 r    protocol: UDP& b. S6 [! k$ v8 @
  - name: dns-tcp
. H: _" i5 W$ Y; n. O    port: 53
; R' s( M0 W9 s: v/ e) |    protocol: TCP
8 b4 a2 U8 P9 C! {. q7 @  - name: metrics2 y$ i9 E% _( R' f
    port: 9153+ `* Z" N, i. v0 R# `0 _
    protocol: TCP5 N. T$ |. Z4 ~0 B  T0 ?$ h+ C# R) ]
Import the coredns image:

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/coredns-v1.8.6.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/coredns-v1.8.6.tar && rm -f /tmp/coredns-v1.8.6.tar"; \
done
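The same scp-then-import loop is repeated for each add-on image, so it can be wrapped in a small helper. This is a sketch, shown as a dry run that only prints the commands it would execute; remove the leading echo on each line to actually run scp/ssh against your nodes.

```shell
# Sketch: dry-run helper for the image-import loop above.
# import_image TARBALL NODE... prints one scp and one ssh command per node;
# drop the "echo" prefixes to execute them for real.
import_image() {
  tarball="$1"; shift
  for node in "$@"; do
    echo "scp /approot1/k8s/images/$tarball $node:/tmp/"
    echo "ssh $node \"ctr -n=k8s.io image import /tmp/$tarball && rm -f /tmp/$tarball\""
  done
}

import_image coredns-v1.8.6.tar 192.168.91.19 192.168.91.20
```

Calling it again with metrics-server-v0.5.2.tar reproduces the metrics-server import step further below.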
Check the image:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep coredns"; \
done
Run the coredns component in k8s:

kubectl apply -f /approot1/k8s/tmp/service/coredns.yaml

Check that the coredns pod is running:

kubectl get pod -n kube-system | grep coredns

Expected output similar to the following. Because the replicas parameter in the coredns yaml file is 1, there is only one pod here; changing it to 2 would give two pods:

coredns-5fd74ff788-cddqf   1/1     Running       0          10s
Deploy the metrics-server component

Configure the metrics-server yaml file:

vim /approot1/k8s/tmp/service/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Import the metrics-server image:

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/metrics-server-v0.5.2.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/metrics-server-v0.5.2.tar && rm -f /tmp/metrics-server-v0.5.2.tar"; \
done
" D  X7 |" m, @/ S& F; X$ @$ u查看镜像8 F6 x5 M# y6 x! `, W% v; Q

, y! `$ r  T+ \  R3 Q+ Ffor i in 192.168.91.19 192.168.91.20;do \1 V' x- O1 B9 M9 E4 j
ssh $i "ctr -n=k8s.io image list | grep metrics-server"; \& L$ S5 o9 n7 d" o1 {
done4 m3 j  F" R! Z
Run the metrics-server component in k8s:

kubectl apply -f /approot1/k8s/tmp/service/metrics-server.yaml

Check that the metrics-server pod is running:

kubectl get pod -n kube-system | grep metrics-server

Expected output similar to the following:

metrics-server-6c95598969-qnc76   1/1     Running       0          71s
Verify metrics-server functionality

Check node resource usage:

kubectl top node

Expected output similar to the following. metrics-server can be slow to start, depending on machine specs; if the output says "is not yet" or "is not ready", wait a moment and run kubectl top node again:

NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.91.19   285m         4%     2513Mi          32%
192.168.91.20   71m          3%     792Mi           21%
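CPU(cores) is reported in millicores, and CPU% is that figure relative to the node's allocatable CPU. A quick sanity check of the relationship, assuming an 8-core (8000m allocatable) node — an assumption for illustration, since allocatable capacity is not shown in the output:

```shell
# Convert millicores to a rounded integer percent of allocatable CPU.
# allocatable=8000 models an assumed 8-core node, not a value from the output.
millicores=285
allocatable=8000
pct=$(( (millicores * 100 + allocatable / 2) / allocatable ))
echo "${pct}%"   # 285m on 8000m rounds to 4%
```

The actual percentage kubectl shows depends on each node's real allocatable CPU (kubectl describe node shows it), so the same millicore reading maps to different percentages on differently sized nodes.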
Check pod resource usage in a given namespace:

kubectl top pod -n kube-system

Expected output similar to the following:

NAME                              CPU(cores)   MEMORY(bytes)
coredns-5fd74ff788-cddqf          11m          18Mi
kube-flannel-ds-86rrv             4m           18Mi
kube-flannel-ds-bkgzx             6m           22Mi
kube-flannel-ds-v25xc             6m           22Mi
metrics-server-6c95598969-qnc76   6m           22Mi
Deploy the dashboard component

Configure the dashboard yaml file:

vim /approot1/k8s/tmp/service/dashboard.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-read-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-read-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-read-clusterrole
subjects:
- kind: ServiceAccount
  name: dashboard-read-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-read-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - persistentvolumeclaims/status
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - services/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - controllerrevisions
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - statefulsets
  - statefulsets/scale
  - statefulsets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  - horizontalpodautoscalers/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - cronjobs/status
  - jobs
  - jobs/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - ingresses
  - ingresses/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  - poddisruptionbudgets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - ingresses/status
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-system
            - --token-ttl=1800
            - --sidecar-host=http://dashboard-metrics-scraper:8000
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux- Y2 i2 ]% z; w; L8 Q, w
      # Comment the following tolerations if Dashboard must not be deployed on master1 i; ?2 q8 Q! x) s# S3 w7 ^: }' t
      tolerations:# Q+ T2 B1 g% R9 d. X% j
        - key: node-role.kubernetes.io/master/ b& s: {; b. K9 r' b7 v) V5 s
          effect: NoSchedule
$ C7 Z1 ^  I" F% B; }( D' x# X+ G" y$ M& [' Y7 ]9 p- |3 P, N) t
---
+ J5 e' a6 C" v: d* rkind: Service9 i8 j; f) I1 g- i& A
apiVersion: v1
1 a6 _% @1 R6 f2 Wmetadata:3 ?5 U3 l9 U& n0 V
  labels:
* i, s  \4 c# c% V- U: T    k8s-app: dashboard-metrics-scraper% k; `  i2 n) i7 \" w
  name: dashboard-metrics-scraper
& u" g: w+ ?2 |+ ^. ?! t  namespace: kube-system
& n: P4 b: }; U& _spec:
" R/ k8 \; R9 M4 _  ports:' j; r) f8 g0 f1 n: H4 o
    - port: 8000$ @& F! R, W- r* p# [0 |( Z: x# \) F
      targetPort: 8000
& B, [( I  P2 ^! h' _& {) _  selector:
4 u! U3 g; M+ m0 R, U    k8s-app: dashboard-metrics-scraper6 B! {, R+ Y0 h4 A1 a
% d! U+ ]9 p! y5 X& T
---
& M0 L5 i4 {2 k6 `1 J2 Hkind: Deployment
- ]0 Q/ d6 ]! j& E2 capiVersion: apps/v1
+ i, C1 S/ a, c1 f: s! umetadata:
4 P! l/ M$ N" T& V5 H& I. l. N  labels:+ I/ L% u* m9 [0 V# M6 y
    k8s-app: dashboard-metrics-scraper
2 w6 D$ `' m  i0 ~2 S  name: dashboard-metrics-scraper4 Q1 O$ C( m$ ^
  namespace: kube-system
8 a4 v/ ~- T3 z# z7 Q! X' X# nspec:$ l' ]5 m9 z7 V( g; _, F
  replicas: 1
0 J" ?* x& |, @: T9 C+ E  revisionHistoryLimit: 10' o$ E/ Y2 X- k1 V' E% s
  selector:6 i9 I+ X& ]" y
    matchLabels:# A( W: c* E# c6 [
      k8s-app: dashboard-metrics-scraper
: _$ s! o. S: g/ o) P% n  template:
4 T* E1 q. e: i' g$ R% R, B    metadata:3 d% f' M- T- B, m6 _, T
      labels:% _' q  ?5 C  o2 r5 C5 O
        k8s-app: dashboard-metrics-scraper
4 a& u' z7 z0 f; O- k    spec:
, _' s$ s/ d6 Y9 Q0 }( W0 I3 K- |/ H      securityContext:* X4 n* J# L5 ^" K$ }
        seccompProfile:
0 t2 }! i6 D- h9 n2 ]: x( ]1 S          type: RuntimeDefault
; [: E: V' T' t; }: Q1 T% s      containers:9 K3 {( C9 w4 O; ^. ~
        - name: dashboard-metrics-scraper# d3 [5 x( D: M( J! {$ c
          image: kubernetesui/metrics-scraper:v1.0.7
/ R1 t+ ]9 G8 o7 r          imagePullPolicy: IfNotPresent
* p" y; h& _1 q$ ~6 z          ports:* D& [( g3 ~, l1 v$ i! }
            - containerPort: 8000  O( t: f2 t- R, I8 F4 g
              protocol: TCP
3 t" S4 R3 }4 M# q- \6 t! `: O7 K          livenessProbe:- b4 G# d7 T  g% r% W
            httpGet:6 T/ T" m' C8 X1 @9 G/ @) i
              scheme: HTTP
* _8 h8 a# i' Z" h4 V              path: /
; x7 h4 l$ e5 L- `              port: 80006 I7 H8 D1 n# u% o3 R
            initialDelaySeconds: 30
/ |- d6 |9 S: ?0 z            timeoutSeconds: 30
* G' a8 w5 W& S2 r$ ^! V* b  {          volumeMounts:
) A3 T9 @2 g  K( |* t" l          - mountPath: /tmp- j0 h  |$ Y  F7 C- t! \
            name: tmp-volume
5 t9 z# p' i0 ~& R6 n* k          securityContext:
5 B' {) n7 i) _4 t7 V& ?% d            allowPrivilegeEscalation: false
+ o( G: z7 f( T* k9 s$ ]$ I            readOnlyRootFilesystem: true
2 ^3 t/ f7 v& K6 S            runAsUser: 1001. w7 E2 t3 b4 g& _$ @/ J; Z
            runAsGroup: 20016 M4 r& \# n; \$ O; i6 w9 d
      serviceAccountName: kubernetes-dashboard
: e5 a( C# X. a5 w      nodeSelector:
( O5 X" x$ Q& x8 T4 V        "kubernetes.io/os": linux1 Z  ?% Z2 v' T7 G
      # Comment the following tolerations if Dashboard must not be deployed on master2 j$ A" n7 w% \  l; N3 O
      tolerations:
2 O5 U3 F; p! F& B7 F: ~  P        - key: node-role.kubernetes.io/master
$ v) i" ^; |$ q" u( ]( G1 f          effect: NoSchedule
. n8 n. b: `. a( l% }      volumes:
7 h1 H1 G2 Y2 i        - name: tmp-volume( ~# \/ T# Q- \8 [7 B& E% |' v
          emptyDir: {}, u/ s3 T% J: \8 U7 E
Import the dashboard images on each node:

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/dashboard-*.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-v2.4.0.tar && rm -f /tmp/dashboard-v2.4.0.tar"; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-metrics-scraper-v1.0.7.tar && rm -f /tmp/dashboard-metrics-scraper-v1.0.7.tar"; \
done

Check the imported images:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | egrep 'dashboard|metrics-scraper'"; \
done
Deploy the dashboard components in k8s:

kubectl apply -f /approot1/k8s/tmp/service/dashboard.yaml

Check that the dashboard pods are running:

kubectl get pod -n kube-system | grep dashboard

Expected output similar to:

dashboard-metrics-scraper-799d786dbf-v28pm   1/1     Running       0          2m55s
kubernetes-dashboard-9f8c8b989-rhb7z         1/1     Running       0          2m55s
Find the dashboard access port. The Service does not pin a fixed access port for the dashboard, so you need to look it up yourself, or edit the YAML to set one:

kubectl get svc -n kube-system | grep kubernetes-dashboard

Expected output similar to:

kubernetes-dashboard        NodePort    10.88.127.68    <none>        443:30210/TCP            5m30s

Here NodePort 30210 is mapped to the service's port 443 (the pod's 8443). Use the port you obtained to open the dashboard page, for example: https://192.168.91.19:30210
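The NodePort can also be pulled out programmatically rather than read by eye. A minimal sketch, demonstrated on the sample service line above (on a live cluster you could pipe in `kubectl get svc -n kube-system --no-headers` instead, or query the field directly with jsonpath):

```shell
# Parse the NodePort out of a "kubectl get svc" PORT(S) field such as "443:30210/TCP".
# svc_line is a stand-in for live cluster output.
svc_line='kubernetes-dashboard        NodePort    10.88.127.68    <none>        443:30210/TCP            5m30s'
node_port=$(echo "$svc_line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$node_port"    # 30210
```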
View the dashboard login token.

Get the token secret name:

kubectl get secrets -n kube-system | grep admin

Expected output similar to:

admin-user-token-zvrst                           kubernetes.io/service-account-token   3      9m2s

Get the token contents:

kubectl get secrets -n kube-system admin-user-token-zvrst -o jsonpath={.data.token}|base64 -d

Expected output similar to:

eyJhbGciOiJSUzI1NiIsImtpZCI6InA4M1lhZVgwNkJtekhUd3Vqdm9vTE1ma1JYQ1ZuZ3c3ZE1WZmJhUXR4bUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2cnN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYTE3NTg1ZC1hM2JiLTQ0YWYtOWNhZS0yNjQ5YzA0YThmZWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.K2o9p5St9tvIbXk7mCQCwsZQV11zICwN-JXhRv1hAnc9KFcAcDOiO4NxIeicvC2H9tHQBIJsREowVwY3yGWHj_MQa57EdBNWMrN1hJ5u-XzpzJ6JbQxns8ZBrCpIR8Fxt468rpTyMyqsO2UBo-oXQ0_ZXKss6X6jjxtGLCQFkz1ZfFTQW3n49L4ENzW40sSj4dnaX-PsmosVOpsKRHa8TPndusAT-58aujcqt31Z77C4M13X_vAdjyDLK9r5ZXwV2ryOdONwJye_VtXXrExBt9FWYtLGCQjKn41pwXqEfidT8cY6xbA7XgUVTr9miAmZ-jf1UeEw-nm8FOw9Bb5v6A
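The jsonpath value is stored base64-encoded in the secret, and `base64 -d` recovers the plain JWT. A sketch of just the decode step on a stand-in value ("sample-jwt-token" is illustrative; on a live cluster the encoded data comes from the `kubectl get secrets ... -o jsonpath={.data.token}` call above):

```shell
# Decode a base64-encoded token value, as stored in the service-account secret.
# "sample-jwt-token" stands in for the real secret data.
encoded=$(printf 'sample-jwt-token' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # sample-jwt-token
```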
This completes the binary deployment of Kubernetes v1.23.3 on containerd.
Posted by OP on 2025-01-01 20:38:40
Key production configuration

Adjust the Docker configuration (not needed when containerd is used as the runtime):

vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "live-restore": true
}

Field notes (JSON allows no inline comments, so they are listed here instead):
- max-concurrent-downloads: number of concurrent image-download threads
- max-concurrent-uploads: number of concurrent upload threads
- log-opts.max-size: rotate a container log file once it reaches this size
- log-opts.max-file: number of rotated log files to keep; adjust to your needs
- live-restore: keep containers running while the docker daemon restarts
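Since the file must be strictly comment-free JSON, it is worth validating before restarting Docker. A minimal sketch using Python's json.tool on a scratch copy (/tmp/daemon.json is illustrative; the real file is /etc/docker/daemon.json):

```shell
# Write a comment-free daemon.json to a scratch path and check its syntax.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {"max-size": "300m", "max-file": "2"},
  "live-restore": true
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
# On the real host, follow up with: systemctl daemon-reload && systemctl restart docker
```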
Extend certificate validity

Certificates issued by the controller-manager through bootstrapping default to a one-year validity; in an internal environment a longer duration can be set:

vim /usr/lib/systemd/system/kube-controller-manager.service

# Set the signing duration. The longest effective validity is around five years,
# so a larger value may still be capped at five years; kubelet re-requests a
# certificate shortly before expiry.
--cluster-signing-duration=876000h0m0s \

# Auto-approve a new certificate when one is requested. This defaults to true
# in newer versions, so it normally does not need to be configured:
# --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
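As a sanity check, the duration above works out to roughly a hundred years, which is why it is effectively capped rather than honored literally:

```shell
# Convert --cluster-signing-duration=876000h into years.
hours=876000
years=$(( hours / 24 / 365 ))
echo "${years} years"    # 100 years
```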
Adjust the kubelet configuration file:

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
If an internal security team runs vulnerability scans, note that the default k8s cipher suites are relatively weak; strengthen them by adding:

--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

Pulls of public images can be slow and the default image-pull deadline is short; extend it by adding:

--image-pull-progress-deadline=30m

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet

In newer k8s versions, configuration is expected to live in /etc/kubernetes/kubelet-conf.yml; parameters, including the ones above, are gradually being moved into this file.
[root@k8s-master01 ~]# vim /etc/kubernetes/kubelet-conf.yml

Append at the end:

rotateServerCertificates: true
allowedUnsafeSysctls:       # kernel parameters (connection counts, open-file limits, etc.) cannot be changed from pods by default
  - "net.core*"             # allowing them may have security implications; enable as needed
  - "net.ipv4.*"
kubeReserved:               # resources reserved for k8s components
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:             # resources reserved for the system
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
Set host ROLES and labels

The ROLES column currently shows <none>; set k8s-master01's role to master.

Kubernetes itself has no notion of which role a node plays; a master node merely runs a few more components than a worker node, so the ROLES value has to be assigned by hand.

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   19h   v1.23.8
k8s-master02   Ready    <none>   19h   v1.23.8
k8s-master03   Ready    <none>   19h   v1.23.8
k8s-node01     Ready    <none>   19h   v1.23.8
k8s-node02     Ready    <none>   19h   v1.23.8
k8s-node03     Ready    <none>   19h   v1.23.8
[root@k8s-master01 ~]# kubectl get node --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
k8s-master01   Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master02   Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master03   Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node01     Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node02     Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node03     Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node03,kubernetes.io/os=linux,node.kubernetes.io/node=
[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master=''
node/k8s-master01 labeled
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   19h   v1.23.8
k8s-master02   Ready    <none>   19h   v1.23.8
k8s-master03   Ready    <none>   19h   v1.23.8
k8s-node01     Ready    <none>   19h   v1.23.8
k8s-node02     Ready    <none>   19h   v1.23.8
k8s-node03     Ready    <none>   19h   v1.23.8
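The same label can be generated for the remaining masters in a loop. A sketch that only prints the commands as a dry run (drop the echo to actually execute them; node names are the ones from the listing above):

```shell
# Dry run: print the kubectl label command for each master node.
cmds=$(for node in k8s-master01 k8s-master02 k8s-master03; do
  echo "kubectl label node ${node} node-role.kubernetes.io/master=''"
done)
echo "$cmds"
```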
" R5 O1 i4 R# d! n生产建议8 n- v1 y# H7 X7 k
1、生产环境一定要用二进制组件安装" W( p5 B# z9 b0 R. m/ x0 r
2、etcd一定要和系统盘分开,必须使用ssd硬盘4 y' F" g1 S8 \4 {2 L+ G+ v
3、Docker数据盘和系统盘分开,也尽量使用ssd硬盘' o* b  D) j; s( j7 f& o) x  [
- {; J3 O) J- U: W
Posted by OP on 2025-01-01 21:14:06
cd ~/kubernetes/manual-installation-v1.23.x/calico/

Change the calico network, replacing the placeholder with your own Pod CIDR:

sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
grep "IPV4POOL_CIDR" calico.yaml -A 1
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/12"
kubectl apply -f calico.yaml
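The sed substitution can be rehearsed locally on a stand-in fragment before touching the real manifest (the real calico.yaml contains several occurrences of POD_CIDR; /tmp/calico-frag.yaml below is illustrative):

```shell
# Rehearse the POD_CIDR substitution on a minimal stand-in fragment.
cat > /tmp/calico-frag.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "POD_CIDR"
EOF
sed -i "s#POD_CIDR#172.16.0.0/12#g" /tmp/calico-frag.yaml
grep "value" /tmp/calico-frag.yaml    # value: "172.16.0.0/12"
```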

Also edit calico.yaml to pin the interface used for IP autodetection:

# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Add the following below:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens192"    # ens192 is the local NIC name

Apply the update with kubectl apply -f calico.yaml and check again.
Install CoreDNS (the officially recommended version):

[root@k8s-master01 calico]# cd ~/kubernetes/manual-installation-v1.23.x/CoreDNS/

If the k8s service CIDR was changed, the CoreDNS service IP must be set to the tenth IP of that CIDR (below, a literal 0 is appended to the kubernetes service's ClusterIP, turning x.x.x.1 into x.x.x.10):

[root@k8s-master01 CoreDNS]# COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
[root@k8s-master01 CoreDNS]# sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" coredns.yaml

Install coredns:

[root@k8s-master01 CoreDNS]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
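The append-a-zero trick can be checked offline: a ClusterIP ending in .1 becomes the conventional .10 cluster DNS address. A sketch on a stand-in ClusterIP (10.96.0.1 is an assumed value; on a live cluster it comes from the kubectl/awk pipeline above):

```shell
# Appending "0" to the kubernetes service ClusterIP (x.x.x.1) yields the
# conventional cluster DNS address (x.x.x.10).
cluster_ip='10.96.0.1'    # stand-in for: kubectl get svc | grep kubernetes | awk '{print $3}'
coredns_ip="${cluster_ip}0"
echo "$coredns_ip"    # 10.96.0.10
```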