Kubernetes Cluster Deployment Steps (k8s)

Posted 2024-9-17 10:35:55


1. Environment preparation

Server plan:

Name              IP address
k8s-master        172.24.118.182
k8s-node1         172.24.118.183
k8s-node2         172.24.118.184

Server requirements:
  Minimum recommended hardware: 4 CPU cores, 8 GB RAM, 50 GB disk
  The servers need Internet access so they can download Docker images.

Software environment:

Software          Version
OS                CentOS 7.9 x86_64
Docker            22 or later (CE)
Kubernetes        1.28
2. Initial configuration

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config    # permanent (takes effect after reboot)
setenforce 0                                          # temporary, effective immediately

# Disable swap
swapoff -a                                # temporarily turn off swap
sed -ri 's/.*swap.*/#&/' /etc/fstab       # permanent; note many installs create no swap partition by default
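A quick way to confirm both changes took hold (a sketch; run on each node):

```shell
# SELinux should be permanently disabled in the config file...
grep '^SELINUX=' /etc/selinux/config      # expect: SELINUX=disabled
getenforce                                # Permissive/Disabled until the reboot makes it permanent
# ...and no swap device should be active: /proc/swaps keeps only its header line
cat /proc/swaps
free -h                                   # the Swap row should be all zeros
```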


Set hostnames according to the plan.

Master:

[root@test111 ~]# hostnamectl set-hostname kubernetes-master

Nodes:

# hostnamectl set-hostname kubernetes-node1
# hostnamectl set-hostname kubernetes-node2
Bridged traffic must be processed by iptables; enable the kernel parameters:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

(Alternatively, the same settings can be added to /etc/sysctl.conf.)

sysctl -p          # applies /etc/sysctl.conf
sysctl --system    # applies every kernel-parameter file, including /etc/sysctl.d/k8s.conf
3. Configure time synchronisation

yum install -y chrony

(Detailed chrony configuration is skipped here; the essentials are below.)

Master:
vim /etc/chrony.conf
server xxxx iburst        # upstream time source
allow x.x.x.x/24          # allow the node subnet to sync from this host

Restart the chronyd service.

Nodes:
vim /etc/chrony.conf
server xxxx iburst

Restart chronyd. Make sure all nodes agree on the time.
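To check that a node really is synchronised after restarting chronyd, look for the `^*` marker in the sources list; it means a source has been selected and the clock is locked to it (a sketch):

```shell
systemctl restart chronyd
chronyc tracking                          # current offset from the reference clock
# a line starting with '^*' means a selected, synchronised source
chronyc sources | grep '^\^\*' && echo "time synchronised"
```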
4. Configure /etc/hosts name resolution

172.24.110.182 kubernetes-master
172.24.110.183 kubernetes-node1
172.24.110.184 kubernetes-node2
5. Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2024-09-17 14:44:40--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 124.95.172.94, 124.95.172.91, 124.95.153.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|124.95.172.94|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

100%[==========================================>] 2,081       --.-K/s   in 0s

2024-09-17 14:44:40 (122 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2081/2081]
[root@kubernetes-master ~]# yum install -y docker-ce
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
docker-ce-stable                                             | 3.5 kB  00:00:00
(1/2): docker-ce-stable/7/x86_64/updateinfo                  |   55 B  00:00:00
(2/2): docker-ce-stable/7/x86_64/primary_db                  | 152 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:26.1.4-1.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: containerd.io >= 1.6.24 for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
---> Package containerd.io.x86_64 0:1.6.33-3.1.el7 will be installed
---> Package docker-ce-cli.x86_64 1:26.1.4-1.el7 will be installed
--> Processing Dependency: docker-buildx-plugin for package: 1:docker-ce-cli-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-compose-plugin for package: 1:docker-ce-cli-26.1.4-1.el7.x86_64
---> Package docker-ce-rootless-extras.x86_64 0:26.1.4-1.el7 will be installed
--> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-26.1.4-1.el7.x86_64
--> Processing Dependency: slirp4netns >= 0.4 for package: docker-ce-rootless-extras-26.1.4-1.el7.x86_64
--> Running transaction check
---> Package docker-buildx-plugin.x86_64 0:0.14.1-1.el7 will be installed
---> Package docker-compose-plugin.x86_64 0:2.27.1-1.el7 will be installed
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
--> Processing Dependency: libfuse3.so.3(FUSE_3.2)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3(FUSE_3.0)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3()(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
--> Running transaction check
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                    Arch    Version                    Repository        Size
================================================================================
Installing:
 docker-ce                  x86_64  3:26.1.4-1.el7             docker-ce-stable   27 M
Installing for dependencies:
 container-selinux          noarch  2:2.119.2-1.911c772.el7_8  extras             40 k
 containerd.io              x86_64  1.6.33-3.1.el7             docker-ce-stable   35 M
 docker-buildx-plugin       x86_64  0.14.1-1.el7               docker-ce-stable   14 M
 docker-ce-cli              x86_64  1:26.1.4-1.el7             docker-ce-stable   15 M
 docker-ce-rootless-extras  x86_64  26.1.4-1.el7               docker-ce-stable  9.4 M
 docker-compose-plugin      x86_64  2.27.1-1.el7               docker-ce-stable   13 M
 fuse-overlayfs             x86_64  0.7.2-6.el7_8              extras             54 k
 fuse3-libs                 x86_64  3.6.1-4.el7                extras             82 k
 slirp4netns                x86_64  0.4.3-4.el7_8              extras             81 k

Transaction Summary
================================================================================
Install  1 Package (+9 Dependent packages)

Total download size: 114 M
Installed size: 401 M
Downloading packages:
(1/10): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm |  40 kB  00:00:00
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/docker-buildx-plugin-0.14.1-1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for docker-buildx-plugin-0.14.1-1.el7.x86_64.rpm is not installed
(2/10): docker-buildx-plugin-0.14.1-1.el7.x86_64.rpm         |  14 MB  00:00:07
(3/10): containerd.io-1.6.33-3.1.el7.x86_64.rpm              |  35 MB  00:00:19
(4/10): docker-ce-26.1.4-1.el7.x86_64.rpm                    |  27 MB  00:00:14
(5/10): docker-ce-cli-26.1.4-1.el7.x86_64.rpm                |  15 MB  00:00:07
(6/10): docker-ce-rootless-extras-26.1.4-1.el7.x86_64.rpm    | 9.4 MB  00:00:04
(7/10): fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm              |  54 kB  00:00:00
(8/10): fuse3-libs-3.6.1-4.el7.x86_64.rpm                    |  82 kB  00:00:00
(9/10): slirp4netns-0.4.3-4.el7_8.x86_64.rpm                 |  81 kB  00:00:00
(10/10): docker-compose-plugin-2.27.1-1.el7.x86_64.rpm       |  13 MB  00:00:03
--------------------------------------------------------------------------------
Total                                               3.9 MB/s | 114 MB  00:00:29
Retrieving key from https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch    1/10
setsebool:  SELinux is disabled.
  Installing : containerd.io-1.6.33-3.1.el7.x86_64                   2/10
  Installing : docker-buildx-plugin-0.14.1-1.el7.x86_64              3/10
  Installing : slirp4netns-0.4.3-4.el7_8.x86_64                      4/10
  Installing : fuse3-libs-3.6.1-4.el7.x86_64                         5/10
  Installing : fuse-overlayfs-0.7.2-6.el7_8.x86_64                   6/10
  Installing : docker-compose-plugin-2.27.1-1.el7.x86_64             7/10
  Installing : 1:docker-ce-cli-26.1.4-1.el7.x86_64                   8/10
  Installing : docker-ce-rootless-extras-26.1.4-1.el7.x86_64         9/10
  Installing : 3:docker-ce-26.1.4-1.el7.x86_64                      10/10
  Verifying  : docker-compose-plugin-2.27.1-1.el7.x86_64             1/10
  Verifying  : fuse3-libs-3.6.1-4.el7.x86_64                         2/10
  Verifying  : fuse-overlayfs-0.7.2-6.el7_8.x86_64                   3/10
  Verifying  : slirp4netns-0.4.3-4.el7_8.x86_64                      4/10
  Verifying  : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch    5/10
  Verifying  : containerd.io-1.6.33-3.1.el7.x86_64                   6/10
  Verifying  : 3:docker-ce-26.1.4-1.el7.x86_64                       7/10
  Verifying  : 1:docker-ce-cli-26.1.4-1.el7.x86_64                   8/10
  Verifying  : docker-ce-rootless-extras-26.1.4-1.el7.x86_64         9/10
  Verifying  : docker-buildx-plugin-0.14.1-1.el7.x86_64             10/10

Installed:
  docker-ce.x86_64 3:26.1.4-1.el7

Dependency Installed:
  container-selinux.noarch 2:2.119.2-1.911c772.el7_8  containerd.io.x86_64 0:1.6.33-3.1.el7
  docker-buildx-plugin.x86_64 0:0.14.1-1.el7          docker-ce-cli.x86_64 1:26.1.4-1.el7
  docker-ce-rootless-extras.x86_64 0:26.1.4-1.el7     docker-compose-plugin.x86_64 0:2.27.1-1.el7
  fuse-overlayfs.x86_64 0:0.7.2-6.el7_8               fuse3-libs.x86_64 0:3.6.1-4.el7
  slirp4netns.x86_64 0:0.4.3-4.el7_8

Complete!
[root@kubernetes-master ~]# systemctl enable docker.service; systemctl start docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@kubernetes-master ~]# cat > /etc/docker/daemon.json <<EOF
{
   "registry-mirrors": ["https://q9n10oke.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com","https://docker.m.daocloud.io"],
   "insecure-registries": ["8.141.94.237:5000"]
}
EOF
[root@kubernetes-master ~]# systemctl restart docker.service
[root@kubernetes-master ~]# docker info
Client: Docker Engine - Community
 Version:    26.1.4
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 26.1.4
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 3.10.0-1160.24.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.7GiB
 Name: kubernetes-master
 ID: 7a997224-186c-4ccb-a45b-e0f1ed3e65e3
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  8.141.94.237:5000
  127.0.0.0/8
 Registry Mirrors:
  https://q9n10oke.mirror.aliyuncs.com/
  https://registry.docker-cn.com/
  http://hub-mirror.c.163.com/
  https://docker.m.daocloud.io/
 Live Restore Enabled: false
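If `systemctl restart docker` ever fails after editing daemon.json, a malformed file is the usual culprit; it can be validated before restarting. A sketch, assuming python3 is available on the host:

```shell
# daemon.json must be strict JSON (double quotes, no trailing commas);
# json.tool exits non-zero and reports the error position if it is not.
python3 -m json.tool /etc/docker/daemon.json > /dev/null \
  && echo "daemon.json is valid JSON"
```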
Install Docker on the node servers the same way (steps omitted).
6. Install cri-dockerd (the shim through which Kubernetes talks to Docker), on all nodes:
# wget https://github.com/Mirantis/cri- ... .2-3.el7.x86_64.rpm
--2024-09-17 15:04:04--  https://github.com/Mirantis/cri- ... .2-3.el7.x86_64.rpm
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubuserconten ... tion%2Foctet-stream [following]
--2024-09-17 15:04:05--  https://objects.githubuserconten ... tion%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9642368 (9.2M) [application/octet-stream]
Saving to: ‘cri-dockerd-0.3.2-3.el7.x86_64.rpm’

100%[==========================================>] 9,642,368   7.33MB/s   in 1.3s

2024-09-17 15:04:07 (7.33 MB/s) - ‘cri-dockerd-0.3.2-3.el7.x86_64.rpm’ saved [9642368/9642368]
Install:

rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cri-dockerd-3:0.3.2-3.el7        ################################# [100%]
Configure it to pull the pause (pod infrastructure) image from a domestic mirror:

vim /usr/lib/systemd/system/cri-docker.service

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
Reload systemd so it picks up the edited unit file:

systemctl daemon-reload

Enable cri-docker at boot and start it:

systemctl enable cri-docker.service && systemctl start cri-docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/cri-docker.service to /usr/lib/systemd/system/cri-docker.service.
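Before moving on, it is worth confirming the service is up and its CRI socket exists, since kubeadm will be pointed at that socket later (the default cri-dockerd socket path is assumed here):

```shell
systemctl is-active cri-docker.service    # expect: active
ls -l /var/run/cri-dockerd.sock           # the CRI socket kubeadm will use
```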
7. Deploy Kubernetes

kubeadm

kubeadm is the official community tool for quickly deploying a Kubernetes cluster: the two commands kubeadm init and kubeadm join stand up a cluster.

kubectl

kubectl is the command-line management tool for a Kubernetes cluster; a cluster can also be managed through kubernetes-dashboard.

kubelet

kubelet is the master's "eyes and ears" on each node: Kubernetes manages worker nodes through it. A kubelet service process runs on every node in the cluster; it carries out the tasks the master assigns to that node, periodically reports the node's resource usage back to the master, and manages the Pods and their containers on the node.
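With cri-dockerd as the runtime shim, the kubeadm init / join commands described above must also be told which CRI socket to use. Purely as an illustration (the pod CIDR, token and hash are placeholders to substitute, and the addresses come from the server plan earlier in this guide):

```shell
# On the master (illustrative flags; adjust to your environment):
kubeadm init \
  --apiserver-advertise-address=172.24.118.182 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock

# On each node, using the token and hash printed by 'kubeadm init':
kubeadm join 172.24.118.182:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket unix:///var/run/cri-dockerd.sock
```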
Configure the yum repo:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
Install kubeadm, kubelet and kubectl (on all nodes):

yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.28.0-0 will be updated
---> Package kubeadm.x86_64 0:1.28.2-0 will be an update
---> Package kubectl.x86_64 0:1.28.0-0 will be updated
---> Package kubectl.x86_64 0:1.28.2-0 will be an update
---> Package kubelet.x86_64 0:1.28.0-0 will be updated
---> Package kubelet.x86_64 0:1.28.2-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package   Arch    Version   Repository   Size
================================================================================
Updating:
 kubeadm   x86_64  1.28.2-0  kubernetes   11 M
 kubectl   x86_64  1.28.2-0  kubernetes   11 M
 kubelet   x86_64  1.28.2-0  kubernetes   21 M
; |' w* C5 V+ A' q  Y9 t& k( I8 D$ \2 E
Transaction Summary
0 J6 |; E! d/ j8 O" c================================================================================================================================================================================================================================
" g/ d% G/ y" nUpgrade  3 Packages
8 K6 N! z: T. [' D* u7 V  j) i& o2 a1 s+ ?) n9 D
Total download size: 43 M) z2 S2 g- T5 Z# f
Downloading packages:
7 @+ B. j1 o2 {, u( h; \6 w; T% MDelta RPMs disabled because /usr/bin/applydeltarpm not installed." v9 K: A7 I; q+ }
(1/3): a24e42254b5a14b67b58c4633d29c27370c28ed6796a80c455a65acc813ff374-kubectl-1.28.2-0.x86_64.rpm                                                                                                      |  11 MB  00:00:05     + e. T# H+ W5 g2 {& f' a: P7 y$ o
(2/3): cee73f8035d734e86f722f77f1bf4e7d643e78d36646fd000148deb8af98b61c-kubeadm-1.28.2-0.x86_64.rpm                                                                                                      |  11 MB  00:00:05     
% t  G' k1 A: |2 u(3/3): e1cae938e231bffa3618f5934a096bd85372ee9b1293081f5682a22fe873add8-kubelet-1.28.2-0.x86_64.rpm                                                                                                      |  21 MB  00:00:05     
1 b. T1 C( R6 e7 O--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------& x4 s$ w. O2 C  G% U
Total                                                                                                                                                                                           3.8 MB/s |  43 MB  00:00:11     " ?9 P2 j2 Z, X; H
Running transaction check: }$ O) c( Q- \
Running transaction test: o$ z4 t' l" \- d
Transaction test succeeded
) c# T% l' u9 n$ }. t) a# ?Running transaction% z/ M1 X# {* f. L; w+ ]9 S
  Updating   : kubelet-1.28.2-0.x86_64                                                                                                                                                                                      1/6 ( x# Q3 s1 p( z" G' t+ J  M4 }
  Updating   : kubectl-1.28.2-0.x86_64                                                                                                                                                                                      2/6
) Y% n  w* C- s" |2 k0 C0 |  Updating   : kubeadm-1.28.2-0.x86_64                                                                                                                                                                                      3/6 & S+ x7 _' \/ s) o& ?- J7 E
  Cleanup    : kubeadm-1.28.0-0.x86_64                                                                                                                                                                                      4/6
' t' I- x1 [* T; F4 w. J  Cleanup    : kubectl-1.28.0-0.x86_64                                                                                                                                                                                      5/6 7 S2 s) O# x7 b. x
  Cleanup    : kubelet-1.28.0-0.x86_64                                                                                                                                                                                      6/6
2 c: h6 m5 X* Z3 O  x3 f  Verifying  : kubectl-1.28.2-0.x86_64                                                                                                                                                                                      1/6
1 C% b: ^: B" Z  Verifying  : kubelet-1.28.2-0.x86_64                                                                                                                                                                                      2/6 6 A( T" n. q* |
  Verifying  : kubeadm-1.28.2-0.x86_64                                                                                                                                                                                      3/6 + D6 Z7 \) K4 V. x
  Verifying  : kubectl-1.28.0-0.x86_64                                                                                                                                                                                      4/6 ) q1 @' j0 y' }! H& w
  Verifying  : kubeadm-1.28.0-0.x86_64                                                                                                                                                                                      5/6
1 Q$ z3 |  t  O/ k  M' n7 i  Verifying  : kubelet-1.28.0-0.x86_64                                                                                                                                                                                      6/6 $ n+ v& C. B2 H9 k
* b! `/ K& h5 ~
Updated:
6 E+ E7 d/ ]" ~) X  kubeadm.x86_64 0:1.28.2-0                                                 kubectl.x86_64 0:1.28.2-0                                                 kubelet.x86_64 0:1.28.2-0                                                ) u4 d; Z+ R; [9 H$ t2 o' H: F) o5 h
0 {0 k; A% ?# f& e$ N
Complete!; j6 `* B+ ]6 O5 Z1 Y0 L

Enable kubelet at boot:

systemctl enable kubelet.service

List the required images:

[root@kubernetes-master ~]# kubeadm config images list
I0918 14:09:39.041429   30436 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.14
registry.k8s.io/kube-controller-manager:v1.28.14
registry.k8s.io/kube-scheduler:v1.28.14
registry.k8s.io/kube-proxy:v1.28.14
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
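To keep kubeadm init from stalling on image downloads, the listed images can be pre-pulled on every node. A minimal sketch of the loop, with two of the image names inlined so it runs without kubeadm installed (normally you would pipe `kubeadm config images list` straight into it), and `echo` left in so nothing is actually pulled:

```shell
# Pre-pull loop sketch: iterate over the image list and pull each one.
# Drop the `echo` (and feed the real `kubeadm config images list`
# output) to perform the pulls for real.
images='registry.k8s.io/kube-apiserver:v1.28.14
registry.k8s.io/pause:3.9'
printf '%s\n' "$images" | while read -r img; do
  echo docker pull "$img"
done
```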

8. Deploying the cluster: initializing Kubernetes

Initialize Kubernetes:

# Run the kubeadm init command

After initialization completes, follow the printed instructions to copy the kubectl credentials to the specified (or default) path.

--kubernetes-version=1.28
Specifies the Kubernetes version to install.

--apiserver-advertise-address=x.x.x.x
Specifies the IP address of the cluster's master node, i.e. the node hosting the apiserver, and tells the other components and nodes where the apiserver is.

--image-repository registry.aliyuncs.com/google_containers
Specifies the container image registry for the Kubernetes component images.

--service-cidr=10.10.0.0/16
Specifies the IP address range for Kubernetes Services.

--pod-network-cidr=10.122.0.0/16
Specifies the IP address range for Kubernetes Pods.

In short, the command initializes a Kubernetes 1.28 cluster, makes the node at the advertised address the master, and sets the Service and Pod IP ranges.
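The flags above can be kept in one place by assembling the init command from variables; a sketch using this guide's example values, with `echo` left in so the command is printed rather than run:

```shell
# Build the kubeadm init command from variables so the IPs, CIDRs and
# version are defined in one place. Remove `echo` to execute it.
MASTER_IP=172.24.110.182
K8S_VERSION=v1.28.2
SERVICE_CIDR=100.177.100.0/12
POD_CIDR=100.233.0.0/16
IMAGE_REPO=registry.aliyuncs.com/google_containers
echo kubeadm init \
  --apiserver-advertise-address="$MASTER_IP" \
  --kubernetes-version="$K8S_VERSION" \
  --image-repository="$IMAGE_REPO" \
  --service-cidr="$SERVICE_CIDR" \
  --pod-network-cidr="$POD_CIDR" \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```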

Run the initialization on the master node:

kubeadm init --apiserver-advertise-address=172.24.110.182 --node-name=kubernetes-master  --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version v1.28.2 --service-cidr=100.177.100.0/12 --pod-network-cidr=100.233.0.0/16  --cri-socket=unix:///var/run/cri-dockerd.sock

Example:

[root@kubernetes-master ~]# kubeadm init --apiserver-advertise-address=172.24.110.182 --node-name=kubernetes-master  --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version v1.28.2 --service-cidr=100.177.100.0/12 --pod-network-cidr=100.233.0.0/16  --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [100.176.0.1 172.24.110.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [172.24.110.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [172.24.110.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.505264 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 0fqjub.taqnhr1lskcovh7d
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.24.110.182:6443 --token 0fqjub.taqnhr1lskcovh7d \
        --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd

Initialization complete.

Related images:

# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.0   bb5e0dde9054   13 months ago   126MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.0   4be79c38a4ba   13 months ago   122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.0   f6f496300a2a   13 months ago   60.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.0   ea1030da44aa   13 months ago   73.1MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0   73deb9a3f702   16 months ago   294MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1   ead0a4a53df8   19 months ago   53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9       e6f181688397   23 months ago   744kB

Master node configuration.

Check the kubectl version:

# kubectl version --client
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

# Run the following after a successful initialization
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Check the node status with kubectl:

[root@kubernetes-master ~]# kubectl get node
NAME                STATUS     ROLES           AGE   VERSION
kubernetes-master   NotReady   control-plane   19m   v1.28.2

Note: the node stays in the "NotReady" state because no network plugin has been deployed yet.

List the images Kubernetes depends on:

[root@kubernetes-master ~]# kubeadm config images list
I0917 15:50:35.949562   30410 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.14
registry.k8s.io/kube-controller-manager:v1.28.14
registry.k8s.io/kube-scheduler:v1.28.14
registry.k8s.io/kube-proxy:v1.28.14
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

# On the master node, copy the config file to the workers (so kubectl commands can also be used on the node side):
        scp /etc/kubernetes/admin.conf 172.24.110.183:/etc/kubernetes/
        scp /etc/kubernetes/admin.conf 172.24.110.184:/etc/kubernetes/
To keep file permissions intact, the copy can also be done with rsync:
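The per-node copies can be wrapped in a small loop; a sketch using this guide's worker addresses and ssh port, with `echo` left in so the commands are only printed:

```shell
# Sync admin.conf to each worker node in a loop; rsync preserves the
# file's permissions. Remove `echo` to actually run the transfers.
NODES='172.24.110.183 172.24.110.184'
SSH_PORT=60028
for ip in $NODES; do
  echo rsync -avzP -e "ssh -p $SSH_PORT" /etc/kubernetes/admin.conf "root@${ip}:/etc/kubernetes/"
done
```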

[root@kubernetes-master ~]# rsync -avzP -e 'ssh -p 22' /etc/kubernetes/admin.conf root@172.24.110.183:/etc/kubernetes/
ssh: connect to host 172.24.110.183 port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.2]
[root@kubernetes-master ~]# rsync -avzP -e 'ssh -p 60028' /etc/kubernetes/admin.conf root@172.24.110.183:/etc/kubernetes/
The authenticity of host '[172.24.110.183]:60028 ([172.24.110.183]:60028)' can't be established.
ECDSA key fingerprint is SHA256:Tvzi0ICzurMYEPySzerkOmwk/o7XHxmABVKRigofHzg.
ECDSA key fingerprint is MD5:f0:92:26:fd:da:d3:e4:db:be:36:b1:fe:d6:2b:65:25.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[172.24.110.183]:60028' (ECDSA) to the list of known hosts.
root@172.24.110.183's password:
sending incremental file list
admin.conf
          5,646 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)

sent 3,920 bytes  received 35 bytes  168.30 bytes/sec
total size is 5,646  speedup is 1.43
[root@kubernetes-master ~]# rsync -avzP -e 'ssh -p 60028' /etc/kubernetes/admin.conf root@172.24.110.184:/etc/kubernetes/
The authenticity of host '[172.24.110.184]:60028 ([172.24.110.184]:60028)' can't be established.
ECDSA key fingerprint is SHA256:Tvzi0ICzurMYEPySzerkOmwk/o7XHxmABVKRigofHzg.
ECDSA key fingerprint is MD5:f0:92:26:fd:da:d3:e4:db:be:36:b1:fe:d6:2b:65:25.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[172.24.110.184]:60028' (ECDSA) to the list of known hosts.
root@172.24.110.184's password:
sending incremental file list
admin.conf
          5,646 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)

sent 3,920 bytes  received 35 bytes  878.89 bytes/sec
total size is 5,646  speedup is 1.43

Join the worker nodes to the cluster (run on each node):
Run the kubeadm join command printed above to add the node to the Kubernetes cluster:

kubeadm join 172.24.110.182:6443 --token 0fqjub.taqnhr1lskcovh7d  --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd  --cri-socket=unix:///var/run/cri-dockerd.sock

Regenerating the token:

[root@kubernetes-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
0fqjub.taqnhr1lskcovh7d   23h         2024-09-18T07:21:06Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@kubernetes-master ~]# kubeadm token delete 0fqjub.taqnhr1lskcovh7d
bootstrap token "0fqjub" deleted

[root@kubernetes-master ~]# kubeadm token list

Create a token valid for roughly four years (36500 hours):
[root@kubernetes-master ~]# kubeadm token create --ttl 36500h
pllb0d.eyjtekjjc542k16c
Create a token valid for roughly two years (18250 hours):
[root@kubernetes-master ~]# kubeadm token create --ttl 18250h
gpz9o9.terifm9742ermj6e

Create a token that never expires:

[root@kubernetes-master ~]# kubeadm token create --ttl 0
nt8qzn.bb4tm414rnww2mt2

Delete a token:

[root@kubernetes-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
gpz9o9.terifm9742ermj6e   2y          2026-10-17T18:11:13Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
nt8qzn.bb4tm414rnww2mt2   <forever>   <never>                authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
pllb0d.eyjtekjjc542k16c   4y          2028-11-16T04:10:37Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@kubernetes-master ~]# kubeadm token delete nt8qzn.bb4tm414rnww2mt2
bootstrap token "nt8qzn" deleted
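Note the arithmetic behind --ttl: kubeadm takes an hour value, and 36500h works out to roughly four years, not ten (hence the "4y" shown by kubeadm token list; a genuinely ten-year token needs about 87600h). A small helper to convert:

```shell
# Convert a year count into the hour value that --ttl expects
# (365-day years; --ttl 0 means the token never expires).
years_to_ttl_hours() { echo "$(( $1 * 365 * 24 ))h"; }
years_to_ttl_hours 10   # a genuinely ten-year --ttl value
years_to_ttl_hours 4
```

Usage: kubeadm token create --ttl $(years_to_ttl_hours 10)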

Get the CA certificate hash:

[root@kubernetes-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd
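The pipeline above extracts the CA's public key, DER-encodes it, hashes it with SHA-256, and keeps just the hex digest. To see the shape of the result without touching the cluster's PKI, the same tail of the pipeline can be run against a throwaway RSA key (a stand-in for the key inside ca.crt):

```shell
# Run the public-key -> DER -> sha256 -> hex-digest steps on a freshly
# generated throwaway key; the result has the same 64-hex-char shape as
# the value passed to --discovery-token-ca-cert-hash.
key=$(mktemp)
openssl genrsa -out "$key" 2048 2>/dev/null
hash=$(openssl rsa -in "$key" -pubout -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"
echo "${#hash}"
```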

Alternatively, generate the full join command directly:

[root@kubernetes-master ~]# kubeadm token create --ttl 18250h --print-join-command
kubeadm join 172.24.110.182:6443 --token 1kis96.cklh7okui7j4fcr0 --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd

Now join the remaining nodes: run the command returned above, adding the --cri-socket=unix:///var/run/cri-dockerd.sock parameter:
kubeadm join 172.24.110.182:6443 --token 1kis96.cklh7okui7j4fcr0 --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd  --cri-socket=unix:///var/run/cri-dockerd.sock

Example:

[root@kubernetes-node1 ~]# kubeadm join 172.24.110.182:6443 --token 1kis96.cklh7okui7j4fcr0 --discovery-token-ca-cert-hash sha256:09fc462e6d431bb00515cb001ebc5791f6197cf22d49a940000eb96c8d4085dd  --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Run the same command on the other nodes (output omitted).

Check that the nodes are registered:

[root@kubernetes-master ~]# kubectl get nodes
NAME                STATUS     ROLES           AGE   VERSION
kubernetes-master   NotReady   control-plane   59m   v1.28.2
kubernetes-node1    NotReady   <none>          78s   v1.28.2
kubernetes-node2    NotReady   <none>          69s   v1.28.2

Note: the nodes stay in the "NotReady" state because no network plugin has been deployed yet.
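To script the wait for the network plugin, the STATUS column of kubectl get nodes can be counted. A sketch of the parsing, demonstrated on sample output so it runs without a cluster (in real use, pipe `kubectl get nodes --no-headers` into the function):

```shell
# Count nodes whose STATUS column reads NotReady; 0 means every node
# is Ready. Feed it `kubectl get nodes --no-headers` in real use.
not_ready_count() { awk '$2 == "NotReady" { n++ } END { print n + 0 }'; }
printf '%s\n' \
  'kubernetes-master   NotReady   control-plane   59m   v1.28.2' \
  'kubernetes-node1    NotReady   <none>          78s   v1.28.2' \
  'kubernetes-node2    Ready      <none>          69s   v1.28.2' \
  | not_ready_count   # prints 2
```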

8 b+ I$ i) c3 b7 x: W+ X查看pod运行状态:
$ p8 ^8 E/ Q& \  Z5 ?+ r% L3 {, |+ ]7 @
[root@kubernetes-master ~]# kubectl get pods -A9 \. m% Z4 a# A  H
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
; l; F+ y5 `" {. k$ o0 h) ?# ikube-system   coredns-66f779496c-cqf5k                    0/1     Pending   0          60m+ b$ i0 ]8 l$ J/ K; ]4 \. T
kube-system   coredns-66f779496c-lnxt4                    0/1     Pending   0          60m3 F# N- |7 N1 D5 _
kube-system   etcd-kubernetes-master                      1/1     Running   0          60m" \5 W" q; f/ V
kube-system   kube-apiserver-kubernetes-master            1/1     Running   0          60m" h  v, ^* m3 R, W/ q  u! ]
kube-system   kube-controller-manager-kubernetes-master   1/1     Running   0          60m
9 a/ W# |- e  {2 q* M# S0 a' Ukube-system   kube-proxy-676dx                            1/1     Running   0          2m37s
* m; S) t( e( N( \2 r- akube-system   kube-proxy-kkt8g                            1/1     Running   0          60m; F6 c$ E4 E) }+ x0 x2 }- h
kube-system   kube-proxy-qgpbt                            1/1     Running   0          2m46s
# I. R' c: K5 ?+ l( c% ?kube-system   kube-scheduler-kubernetes-master            1/1     Running   0          60m
: S+ W4 g- G# t, {

Install the network plugin:

Calico is generally recommended over kube-flannel; this walkthrough nevertheless proceeds with the flannel manifest (the file applied below is named calico.yaml, but as the apply output shows, the resources it creates are flannel's):
[root@kubernetes-master ~]# wget https://github.com/flannel-io/fl ... .2/kube-flannel.yml
--2024-09-17 16:27:41--  https://github.com/flannel-io/fl ... .2/kube-flannel.yml
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubuserconten ... tion%2Foctet-stream [following]
--2024-09-17 16:27:42--  https://objects.githubuserconten ... tion%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4459 (4.4K) [application/octet-stream]
Saving to: ‘kube-flannel.yml’

100%[==========================================================================>] 4,459       --.-K/s   in 0s

2024-09-17 16:27:42 (17.4 MB/s) - ‘kube-flannel.yml’ saved [4459/4459]

Run the install (on the master node), applying the flannel manifest just downloaded:

[root@kubernetes-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@kubernetes-master ~]#

Check the status again:

[root@kubernetes-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
1kis96.cklh7okui7j4fcr0   2y          2026-10-17T18:16:02Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@kubernetes-master ~]# kubectl get node
NAME                STATUS   ROLES           AGE   VERSION
kubernetes-master   Ready    control-plane   72m   v1.28.2
kubernetes-node1    Ready    <none>          14m   v1.28.2
kubernetes-node2    Ready    <none>          14m   v1.28.2
[root@kubernetes-master ~]# kubectl get pod -A
NAMESPACE      NAME                                        READY   STATUS              RESTARTS   AGE
kube-flannel   kube-flannel-ds-k6mpb                       0/1     Init:0/2            0          45s
kube-flannel   kube-flannel-ds-l68ft                       0/1     Init:1/2            0          45s
kube-flannel   kube-flannel-ds-th9kz                       0/1     Init:1/2            0          45s
kube-system    coredns-66f779496c-cqf5k                    0/1     ContainerCreating   0          72m
kube-system    coredns-66f779496c-lnxt4                    0/1     ContainerCreating   0          72m
kube-system    etcd-kubernetes-master                      1/1     Running             0          72m
kube-system    kube-apiserver-kubernetes-master            1/1     Running             0          72m
kube-system    kube-controller-manager-kubernetes-master   1/1     Running             0          72m
kube-system    kube-proxy-676dx                            1/1     Running             0          14m
kube-system    kube-proxy-kkt8g                            1/1     Running             0          72m
kube-system    kube-proxy-qgpbt                            1/1     Running             0          14m
kube-system    kube-scheduler-kubernetes-master            1/1     Running             0          72m

Note:

      # kubectl does not work on a worker node out of the box, because workers have no admin.conf.
      # To use kubectl on a worker, copy admin.conf over from the master and point KUBECONFIG at it:
      scp root@kubernetes-master:/etc/kubernetes/admin.conf /etc/kubernetes/
      echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

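The same idea can be sketched using the standard kubeadm convention of keeping the config under ~/.kube/config instead of exporting KUBECONFIG in /etc/profile. The scp/cp steps are commented out here because they assume root SSH access to the master (hostname from this guide):

```shell
# Sketch: make kubectl usable on a worker via ~/.kube/config.
# Assumption: root SSH access to kubernetes-master.
# scp root@kubernetes-master:/etc/kubernetes/admin.conf /tmp/admin.conf
mkdir -p "$HOME/.kube"
# cp /tmp/admin.conf "$HOME/.kube/config"
# kubectl get nodes   # should now work without setting KUBECONFIG
ls -d "$HOME/.kube"
```

kubectl looks at ~/.kube/config by default, so no shell profile changes are needed with this variant.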

Install kubernetes-dashboard (on the master):

[root@kubernetes-master ~]# wget  https://raw.githubusercontent.co ... oy/recommended.yaml
--2024-09-18 14:34:55--  https://raw.githubusercontent.co ... oy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7621 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

100%[==========================>] 7,621       --.-K/s   in 0.001s

2024-09-18 14:34:55 (11.2 MB/s) - ‘recommended.yaml’ saved [7621/7621]

[root@kubernetes-master ~]# ls
calico.yaml  cri-dockerd-0.3.2-3.el7.x86_64.rpm  kube-flannel.yml  qemu-guest-agent-1.5.3-ksyun.x86_64.rpm  recommended.yaml  sudo-1.9.5-3.el7.x86_64.rpm

# Edit recommended.yaml: find the Service section and change it as below.
# Adding a NodePort to the Service lets you reach the dashboard at <node-ip>:<nodePort>.
vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort    # added: make the Service type NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32333   # added: node port to bind (default range 30000-32767; auto-assigned if omitted)
  selector:
    k8s-app: kubernetes-dashboard
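Before applying, it is worth sanity-checking that the two added fields actually made it into the manifest. A small self-contained sketch (it writes the expected Service fragment to a temp file and greps it; on the real master you would grep recommended.yaml itself):

```shell
# Write the expected fragment and check the two added fields are present.
cat > /tmp/dashboard-svc-check.yaml <<'EOF'
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32333
EOF
grep -q 'type: NodePort' /tmp/dashboard-svc-check.yaml \
  && grep -q 'nodePort: 32333' /tmp/dashboard-svc-check.yaml \
  && echo OK   # prints OK
```

After applying, `kubectl -n kubernetes-dashboard get svc kubernetes-dashboard` should show TYPE NodePort with PORT(S) 443:32333/TCP.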
# Create the dashboard resources
kubectl create -f recommended.yaml

[root@kubernetes-master ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# List all pods
[root@kubernetes-master ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS              RESTARTS         AGE
kube-flannel           kube-flannel-ds-k6mpb                        0/1     CrashLoopBackOff    263 (118s ago)   22h
kube-flannel           kube-flannel-ds-l68ft                        0/1     CrashLoopBackOff    263 (100s ago)   22h
kube-flannel           kube-flannel-ds-th9kz                        0/1     CrashLoopBackOff    262 (4m ago)     22h
kube-system            calico-kube-controllers-7d64c8fdd5-c8klr     1/1     Running             0                24m
kube-system            calico-node-574ht                            1/1     Running             0                24m
kube-system            calico-node-mgn28                            1/1     Running             0                24m
kube-system            calico-node-nglnx                            1/1     Running             0                24m
kube-system            coredns-66f779496c-cqf5k                     1/1     Running             0                23h
kube-system            coredns-66f779496c-lnxt4                     1/1     Running             0                23h
kube-system            etcd-kubernetes-master                       1/1     Running             0                23h
kube-system            kube-apiserver-kubernetes-master             1/1     Running             1 (12h ago)      23h
kube-system            kube-controller-manager-kubernetes-master    1/1     Running             15               23h
kube-system            kube-proxy-676dx                             1/1     Running             0                22h
kube-system            kube-proxy-kkt8g                             1/1     Running             0                23h
kube-system            kube-proxy-qgpbt                             1/1     Running             0                22h
kube-system            kube-scheduler-kubernetes-master             1/1     Running             16               23h
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-bggwp   0/1     ContainerCreating   0                23s
kubernetes-dashboard   kubernetes-dashboard-746fbfd67c-8xbmk        0/1     ContainerCreating   0                23s

Check the kubernetes-dashboard status:

[root@kubernetes-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-5657497c4c-bggwp   0/1     ImagePullBackOff    0          2m36s
kubernetes-dashboard-746fbfd67c-8xbmk        0/1     ContainerCreating   0          2m36s

Fixing ImagePullBackOff

# Inspect the pod for details:

/ y! o: F# G* C/ p( t[root@kubernetes-master ~]# kubectl describe pod/kubernetes-dashboard-746fbfd67c-8xbmk --namespace=kubernetes-dashboard
/ |* Q$ z5 a# S3 U9 NName:             kubernetes-dashboard-746fbfd67c-8xbmk
7 P& E4 z* Y. F  s1 m  c  RNamespace:        kubernetes-dashboard
! u8 {+ N6 h, J4 [5 H* n( ?& |' NPriority:         0) c, K, s5 E* ]
Service Account:  kubernetes-dashboard
* A; [: p1 L6 f7 n* LNode:             kubernetes-node1/172.24.110.183
3 \! [+ b: K% N. Q7 g; LStart Time:       Wed, 18 Sep 2024 14:45:16 +0800
. q6 l# E, o) |0 p0 V6 `Labels:           k8s-app=kubernetes-dashboard
& W! m8 \/ _7 _; R1 _' l                  pod-template-hash=746fbfd67c+ k) ^/ [4 e2 X& W/ p$ {5 s1 L
Annotations:      cni.projectcalico.org/containerID: 7651e89375fa07f03a7594f82dc3c5a14b4fb63afb6f85006dc7f1d5464ff625
' g; z$ s' _( \5 m* o3 u" w4 O                  cni.projectcalico.org/podIP: 100.233.129.65/32
3 }. V" \, j& V; x                  cni.projectcalico.org/podIPs: 100.233.129.65/32
2 g; t9 G6 a! e& i% d4 V6 |* V3 f  EStatus:           Pending( S3 `4 R+ w" f
SeccompProfile:   RuntimeDefault
2 x6 p3 ~7 N1 SIP:               100.233.129.65
" ?/ e! h% @2 s. O3 H0 u" jIPs:
7 Q3 l: [. P, f% T% `, Z  IP:           100.233.129.65
. V& H6 F9 w: B5 D9 A' bControlled By:  ReplicaSet/kubernetes-dashboard-746fbfd67c
. W- g& i! I; j. }) i: cContainers:
3 ~0 F( t& V- B8 K8 ^  kubernetes-dashboard:
2 N4 V1 G; I7 h    Container ID:  . X% V! y# A" @% N+ h
    Image:         kubernetesui/dashboard:v2.6.1( e/ R8 d1 z/ Q* R7 a& a
    Image ID:      
7 F: v5 q4 Q% y/ C1 T    Port:          8443/TCP
. k5 i" t% H6 U- T3 \  o    Host Port:     0/TCP
1 p9 g* t6 t: ?2 R# a6 \6 a/ J    Args:) r5 s1 |, g# i1 c( {
      --auto-generate-certificates
, S+ k% ^% Y/ g4 i3 }      --namespace=kubernetes-dashboard
4 M( C0 p, {* N! @7 r1 j/ l    State:          Waiting
& M& L" A: B5 H$ S7 n! T  r      Reason:       ImagePullBackOff# u1 ^, x7 r5 B0 h$ {/ W
    Ready:          False6 b8 I" L2 ?# x- v+ `" N$ }
    Restart Count:  0- P3 C1 n* O3 H0 |
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=35 e& e* F" d# N7 H0 H' \3 a) g
    Environment:    <none>
7 y7 i. l1 `# c: h. |% i    Mounts:
* s' x8 O( a/ L6 M/ v& O      /certs from kubernetes-dashboard-certs (rw)/ m% U% J* o5 ^. k# ]( ?# E$ H8 `
      /tmp from tmp-volume (rw)) [4 J- h$ H+ [% C& y: H" c! P4 E3 m
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9w2d (ro); t/ H& n0 R: [  b' u% o$ V, p# w
Conditions:' x" [' s- x" B: i
  Type              Status9 f$ _7 W! S9 D- Y# V- K* U5 z
  Initialized       True
, h& i) H# e7 G+ e3 p; u3 ^0 h  Ready             False
5 q- P' R7 N, I2 \8 Z  ContainersReady   False
" t0 ?! D+ P2 a  PodScheduled      True + d1 q. N" i: p) ?* O9 z
Volumes:  A3 @& J. y, `1 W" R% k
  kubernetes-dashboard-certs:5 X5 k, S3 z0 V* u1 C
    Type:        Secret (a volume populated by a Secret)
- h# W* t4 [' R: y: T    SecretName:  kubernetes-dashboard-certs8 d$ u1 f0 x) g* o3 A5 `
    Optional:    false# S, m! K7 @2 u
  tmp-volume:, g9 p$ o. {- H! U9 o' O9 m0 c5 t
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)( z) @! O, c" r5 N" d7 X
    Medium:     5 O% Y$ q) \: l1 [8 o9 A/ A" H& v+ x
    SizeLimit:  <unset>
  P5 o" i; G/ N+ F0 u: h2 j! {" R6 t% y  kube-api-access-r9w2d:
# `  g( T7 L& ~2 s1 K$ Y    Type:                    Projected (a volume that contains injected data from multiple sources)4 W) H2 L5 ~1 o2 U% {5 g
    TokenExpirationSeconds:  3607: x8 y* O9 A6 x- X0 W, @" o
    ConfigMapName:           kube-root-ca.crt0 W9 p7 F" ]% T9 _' w
    ConfigMapOptional:       <nil>
7 N8 s; ^0 W  a9 M, V: q* N    DownwardAPI:             true
5 t" R" X2 N6 l0 T+ L1 ZQoS Class:                   BestEffort6 ~' j, N+ P1 ~& f* Y
Node-Selectors:              kubernetes.io/os=linux
. D/ N. d0 h5 I4 f4 ^" STolerations:                 node-role.kubernetes.io/master:NoSchedule
- S7 Q+ ~6 H" K  ^+ A6 ^4 _/ l                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
! V4 n4 g8 q% W2 A                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
& p6 B5 R5 w9 x1 `9 t% _Events:
; _. g7 z6 G' n3 m  Type     Reason     Age                  From               Message. [: C' x: N2 ^9 W- N* W/ ^- t
  ----     ------     ----                 ----               -------2 K. l! ]: i' U* r! B' A1 p9 P) y
  Normal   Scheduled  8m43s                default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-746fbfd67c-8xbmk to kubernetes-node1
' K' u5 @( s5 g0 N  Y  Warning  Failed     34s (x2 over 5m55s)  kubelet            Failed to pull image "kubernetesui/dashboard:v2.6.1": rpc error: code = Canceled desc = context canceled! ]1 U8 s8 g- ?' j0 M6 Q: x4 W$ \
  Warning  Failed     34s (x2 over 5m55s)  kubelet            Error: ErrImagePull
8 p/ ~' t' V; B) G, e7 d# T  Normal   BackOff    21s (x2 over 5m55s)  kubelet            Back-off pulling image "kubernetesui/dashboard:v2.6.1"
) q7 I  N! M8 r) q" L7 \5 t2 i  Warning  Failed     21s (x2 over 5m55s)  kubelet            Error: ImagePullBackOff
# K- n$ ~# ]0 x$ R8 v( Y$ I  Normal   Pulling    6s (x3 over 8m42s)   kubelet            Pulling image "kubernetesui/dashboard:v2.6.1"
8 P/ ?9 q" b8 ]1 ~1 X# @
, g( P: d# R" x/ e4 i% E
- n5 C. q* X8 T7 m. x  d) ~+ c5 g* x$ k" k. K' V! c8 N! ~
% Z# V8 g3 X* s1 S  ?/ b
9 i" O. H8 [) t- u% p$ J
The describe output above yields two important pieces of information: the node the pod was scheduled to (kubernetes-node1), and the image that failed to pull: kubernetesui/dashboard:v2.6.1

Pull the image manually:

[root@kubernetes-master ~]# docker pull kubernetesui/dashboard:v2.6.1
v2.6.1: Pulling from kubernetesui/dashboard
596ae5b8318a: Pulling fs layer
596ae5b8318a: Pull complete
b721c920bca6: Pull complete
Digest: sha256:290bebc3cd96c22b6f89e7b21f5c2b16ce5c275a0ec2c2de10e0d8b9dd110289
Status: Downloaded newer image for kubernetesui/dashboard:v2.6.1
docker.io/kubernetesui/dashboard:v2.6.1


# Save the image to a tar archive (so it can be loaded on the worker nodes)
docker save -o k8s-dashboard.tar kubernetesui/dashboard:v2.6.1

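The saved tar only helps if it is loaded on the node where the pod is scheduled: the kubelet pulls images locally on each node, and the describe output above shows the pull failing on kubernetes-node1, not the master. A sketch of distributing the tar (node names from this guide; RUN defaults to echo so it is a dry run — set RUN= on the real master):

```shell
# Dry run by default: RUN=echo prints the commands instead of running them.
# On the real master, set RUN= (empty) to actually copy and load.
RUN=${RUN-echo}
for node in kubernetes-node1 kubernetes-node2; do
  $RUN scp k8s-dashboard.tar "root@$node:/tmp/k8s-dashboard.tar"
  $RUN ssh "root@$node" docker load -i /tmp/k8s-dashboard.tar
done
```

Once the image is present on the node, deleting the stuck pod (so the ReplicaSet recreates it) lets it start without pulling.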
Use apply (rather than create) to update the resources in place:

[root@kubernetes-master ~]# kubectl apply -f recommended.yaml
Warning: resource namespaces/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/kubernetes-dashboard configured
(the same warning repeats for each of the resources below)
serviceaccount/kubernetes-dashboard configured
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings configured
role.rbac.authorization.k8s.io/kubernetes-dashboard configured
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard configured
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured
deployment.apps/kubernetes-dashboard configured
service/dashboard-metrics-scraper configured
deployment.apps/dashboard-metrics-scraper configured
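With the Service updated, the dashboard is reachable on any node at the NodePort chosen earlier. The dashboard runs with --auto-generate-certificates (see the describe output above), so the browser will warn about a self-signed certificate. The IP below is the master's address from this guide, used only as an example:

```shell
# Any node IP works for a NodePort service; using the master IP here.
NODE_IP=172.24.118.182
DASHBOARD_URL="https://$NODE_IP:32333"
echo "$DASHBOARD_URL"   # open this URL in a browser
```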

OP | Posted 2024-9-17 16:35:24
kubectl command completion

# Add to ~/.bashrc
vim ~/.bashrc
# append this line:
source <(kubectl completion bash)

source ~/.bashrc
OP | Posted 2024-9-18 14:22:13
Install the Calico network add-on:

[root@kubernetes-master ~]# curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  184k  100  184k    0     0  98813      0  0:00:01  0:00:01 --:--:-- 98793
[root@kubernetes-master ~]# ls
calico.yaml  cri-dockerd-0.3.2-3.el7.x86_64.rpm  kube-flannel.yml  qemu-guest-agent-1.5.3-ksyun.x86_64.rpm  sudo-1.9.5-3.el7.x86_64.rpm
[root@kubernetes-master ~]# vim calico.yaml
[root@kubernetes-master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
error: resource mapping not found for name: "calico-kube-controllers" namespace: "kube-system" from "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
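The error at the end is a version mismatch, not a transient failure: the v3.18 manifest defines a PodDisruptionBudget under policy/v1beta1, an API that Kubernetes removed in 1.25, so it can never apply cleanly on this 1.28 cluster. A sketch of the fix is to fetch a newer Calico release (v3.26.1 below is an assumption — check Calico's Kubernetes support matrix for the current match):

```shell
# Build the manifest URL for a Calico release that supports k8s 1.28.
CALICO_VERSION=v3.26.1   # assumption -- verify against Calico's support matrix
CALICO_URL="https://raw.githubusercontent.com/projectcalico/calico/$CALICO_VERSION/manifests/calico.yaml"
echo "$CALICO_URL"
# On the master:
#   curl -fsSL -o calico.yaml "$CALICO_URL"
#   kubectl apply -f calico.yaml
```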
OP | Posted 2024-9-18 15:14:52
Delete and recheck the kubernetes-dashboard pod:
kubectl describe pod/kubernetes-dashboard-746fbfd67c-8xbmk --namespace=kubernetes-dashboard

The flannel pods are still failing:

kube-flannel           kube-flannel-ds-l68ft                        0/1     CrashLoopBackOff

Troubleshooting steps:

Check the pod's events and logs: run kubectl describe pod <pod-name> to find the concrete reason the container fails to start.

 kubectl describe pod kube-flannel-ds-l68ft --namespace=kube-flannel
Name:                 kube-flannel-ds-l68ft
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 kubernetes-node1/172.24.110.183
Start Time:           Tue, 17 Sep 2024 16:33:08 +0800
Labels:               app=flannel
                      controller-revision-hash=c46b99f7f
                      k8s-app=flannel
                      pod-template-generation=2
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   172.24.110.183

Fix the configuration or startup command: correct the container's config file or startup command based on the log messages.

Check resource limits: use kubectl top pod <pod-name> to confirm the Pod has enough resources to run.

Adjust permissions: make sure the container runs as the right user and has the proper file permissions.

Confirm dependent services: make sure every service the Pod depends on is up and running normally.

Re-pull the image: use kubectl get pod -o yaml to inspect the Pod definition and confirm the image name and tag are correct, then use kubectl delete pod <pod-name> to force the Pod to be recreated and the image re-pulled.

kubectl get pod -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

Adjust security policies: based on the log messages, relax the relevant security policy so the required operations are allowed.

Troubleshooting may take several attempts, and after each change the Pod must be recreated for the change to take effect.
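The checklist above can be sketched as a small shell helper that maps the waiting reason shown by `kubectl describe pod` to a next step. The reason-to-action mapping is our own summary of the list above, not a kubectl feature:

```shell
# Hypothetical helper: given a Pod's waiting/terminated reason,
# print the matching troubleshooting action from the checklist above.
suggest_fix() {
  case "$1" in
    ErrImagePull|ImagePullBackOff) echo "verify image name/tag and registry reachability" ;;
    CrashLoopBackOff)              echo "inspect logs, fix config or startup command" ;;
    CreateContainerConfigError)    echo "check referenced ConfigMaps and Secrets" ;;
    OOMKilled)                     echo "raise memory limits (see kubectl top pod)" ;;
    *)                             echo "run kubectl describe pod <pod-name> for details" ;;
  esac
}

suggest_fix ImagePullBackOff   # -> verify image name/tag and registry reachability
```

Feed it the STATUS column from `kubectl get pods` to get a starting point for each failing Pod.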
OP | Posted on 2024-9-18 16:18:16
The dashboard installed above has problems; reinstall it as follows:
[root@kubernetes-master ~]# wget https://raw.githubusercontent.co ... oy/recommended.yaml
--2024-09-18 16:04:03--  https://raw.githubusercontent.co ... oy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7621 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

100%[==========================================>] 7,621       --.-K/s   in 0.001s

2024-09-18 16:04:04 (7.53 MB/s) - ‘recommended.yaml’ saved [7621/7621]
[root@kubernetes-master ~]# ls
calico.yaml  cri-dockerd-0.3.2-3.el7.x86_64.rpm  kube-flannel.yml  qemu-guest-agent-1.5.3-ksyun.x86_64.rpm  recommended.yaml  recommended.yaml.bal  sudo-1.9.5-3.el7.x86_64.rpm
[root@kubernetes-master ~]# vim recommended.yaml
4 u/ k5 z+ V$ ?( h! L& m! l
spec:
  type: NodePort      ## added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32333      ## added
  selector:
    k8s-app: kubernetes-dashboard
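As an alternative to editing recommended.yaml by hand, the same NodePort change can be applied with `kubectl patch` (a sketch; the service and namespace names come from the manifest above). This block only builds and prints the command, so you can review it before running:

```shell
# JSON patch matching the manual edit above (type: NodePort, nodePort: 32333).
payload='{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":32333}]}}'
# Print the patch command instead of executing it against the cluster.
echo "kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard --type merge -p '$payload'"
```

Copy the printed line and run it on the master once the dashboard manifest has been applied.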
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.1
          imagePullPolicy: Never    ## changed from Always to Never
          ports:
            - containerPort: 8443
              protocol: TCP
# Create the dashboard
kubectl create -f recommended.yaml
[root@kubernetes-master ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
# Check all pods
kubectl get pods --all-namespaces

[root@kubernetes-master ~]# kubectl get pods --namespace kubernetes-dashboard
NAME                                         READY   STATUS             RESTARTS   AGE
dashboard-metrics-scraper-6fdb9d6cdd-nhs4j   0/1     ErrImagePull       0          95s
kubernetes-dashboard-79d57f5458-6kmlb        0/1     ImagePullBackOff   0          95s
Check the error details:
#kubectl describe pod kubernetes-dashboard-79d57f5458-6kmlb --namespace kubernetes-dashboard

  Warning  Failed          81s (x2 over 2m10s)  kubelet            Failed to pull image "kubernetesui/dashboard:v2.5.1": Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Pulling         53s (x3 over 2m41s)  kubelet            Pulling image "kubernetesui/dashboard:v2.5.1"
  Warning  Failed          22s (x3 over 2m10s)  kubelet            Error: ErrImagePull
  Warning  Failed          22s                  kubelet            Failed to pull image "kubernetesui/dashboard:v2.5.1": Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp 199.16.156.71:443: i/o timeout
# Based on the errors above, go to the node and try pulling the image manually
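Since registry-1.docker.io times out, one possible workaround is to pull the image through a reachable mirror on the node and retag it; with `imagePullPolicy: Never` set above, the kubelet will then use the local copy. The mirror repository name below is an assumption — substitute one your nodes can actually reach. The block only prints the docker commands:

```shell
img="kubernetesui/dashboard:v2.5.1"
# Assumed mirror repository; replace with a registry reachable from your nodes.
mirror="registry.cn-hangzhou.aliyuncs.com/google_containers"
# ${img##*/} strips the repo prefix, leaving "dashboard:v2.5.1".
echo "docker pull $mirror/${img##*/}"
echo "docker tag  $mirror/${img##*/} $img"
```

Run the printed commands on each node that has to schedule the dashboard Pod.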
kubectl apply -f recommended.yaml
Attachment: recommended.yaml (7.48 KB) — dashboard 2.5.1
Attachment: recommended.yaml (7.48 KB) — dashboard 2.7.0

OP | Posted on 2024-9-24 16:26:34
Step 1: Configure Kernel Modules and Networking
Before setting up Kubernetes, certain kernel modules and sysctl parameters need to be configured to ensure proper networking between containers.

First, we load the necessary kernel modules (overlay and br_netfilter) that Kubernetes relies on.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
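A quick sanity check for Step 1 (a sketch): on a correctly configured node both modules should report "loaded", and the three sysctls should read 1. Note that a module built into the kernel rather than loaded as a module may show "missing" here even though the functionality is present:

```shell
# Report whether each required module appears in /proc/modules.
for m in overlay br_netfilter; do
  if grep -q "^${m} " /proc/modules 2>/dev/null; then
    echo "$m loaded"
  else
    echo "$m missing"
  fi
done
# sysctl -n net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward  # should print 1 and 1
```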
Step 2: Disable swap on all the Nodes
sudo swapoff -a
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
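`swapoff -a` only lasts until reboot, which is why the crontab line re-applies it at boot. Another common approach is to comment the swap entry out of /etc/fstab; the same sed idiom used elsewhere in this thread is shown here against a sample line (the device name is hypothetical) rather than the real file:

```shell
# Sample fstab entry standing in for the real /etc/fstab line.
line="/dev/mapper/ubuntu-swap none swap sw 0 0"
# "#&" prefixes the matched line with a comment marker.
echo "$line" | sed -r 's/.*swap.*/#&/'
# -> #/dev/mapper/ubuntu-swap none swap sw 0 0
```

To apply it for real, run the sed with `-i` against /etc/fstab (as earlier in this thread: `sed -ri 's/.*swap.*/#&/' /etc/fstab`).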
5 A, P! ~9 s" j3 z& ~, u. O& E# QStep 3: Install Containerd Runtime On All The Nodes
" i+ C  \- \0 f/ ~8 n# [; Z/ Xsudo apt-get update && sudo apt-get install -y containerd        ! R2 F2 }1 p0 X2 B# R9 t0 E: f
Step 4: Configure Containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
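The sed in Step 4 only matches the line with the exact 12-space indentation that `containerd config default` emits. Here is the substitution run on a sample line so you can see what it changes before touching the real config.toml:

```shell
# Sample line as emitted by `containerd config default`.
line='            SystemdCgroup = false'
echo "$line" | sed 's/SystemdCgroup = false/SystemdCgroup = true/'
# -> '            SystemdCgroup = true' (indentation preserved)
```

After restarting containerd, `grep SystemdCgroup /etc/containerd/config.toml` should show `true`.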
Step 5: Install Kubeadm & Kubelet & Kubectl on all Nodes
KUBERNETES_VERSION=1.30

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1 kubeadm=1.30.0-1.1
Step 6: Initialize Cluster
NODENAME=$(hostname -s)
POD_CIDR="10.30.0.0/16"
kubeadm init --pod-network-cidr=$POD_CIDR --node-name $NODENAME
Step 7: Copy the Join Command to the Workers
In this step, copy the kubeadm join command printed by the init, and follow Steps 1 to 5 on each of the other nodes before running it there.
Step 8: Install a CNI Plugin
Finally, install a CNI for your cluster. In this example we install Calico:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Step 9: Final Result
Now you should be able to run kubectl get nodes; all nodes should be Ready and all kube-system pods should be Running.

kubectl get nodes
kubectl get pods -A
OP | Posted on 2024-12-29 21:16:59
The image list has changed recently:

I1229 21:16:13.799696    2756 version.go:256] remote version is much newer: v1.32.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
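If registry.k8s.io is unreachable, the list above can be pre-pulled through a mirror before running `kubeadm init` (the aliyuncs mirror repository is an assumption; any mirror carrying these images works). This sketch only prints the pull commands; pipe the output to `sh` to actually run them:

```shell
# Image tags taken from the `kubeadm config images list` output above.
for img in kube-apiserver:v1.28.15 kube-controller-manager:v1.28.15 \
           kube-scheduler:v1.28.15 kube-proxy:v1.28.15 \
           pause:3.9 etcd:3.5.9-0 coredns:v1.10.1; do
  echo "docker pull registry.aliyuncs.com/google_containers/$img"
done
```

Alternatively, pass `--image-repository registry.aliyuncs.com/google_containers` to kubeadm, as done in the init below.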
OP | Posted on 2025-1-1 09:26:26

Using Kubernetes 1.32:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.8.190 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.32.0 --service-cidr=172.29.16.0/23 --pod-network-cidr=172.22.16.0/23 --cri-socket=unix:///var/run/cri-dockerd.sock --v=5
I0101 09:17:08.895164    2766 kubelet.go:195] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.32.0
[preflight] Running pre-flight checks
I0101 09:17:08.912311    2766 checks.go:561] validating Kubernetes and kubeadm version
        [WARNING KubernetesVersion]: Kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. Kubernetes version: 1.32.0. Kubeadm version: 1.31.x
I0101 09:17:08.912412    2766 checks.go:166] validating if the firewall is enabled and active
I0101 09:17:08.925133    2766 checks.go:201] validating availability of port 6443
I0101 09:17:08.925553    2766 checks.go:201] validating availability of port 10259
I0101 09:17:08.925679    2766 checks.go:201] validating availability of port 10257
I0101 09:17:08.925766    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0101 09:17:08.925857    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0101 09:17:08.925909    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0101 09:17:08.925954    2766 checks.go:278] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0101 09:17:08.925979    2766 checks.go:428] validating if the connectivity type is via proxy or direct
I0101 09:17:08.926074    2766 checks.go:467] validating http connectivity to first IP address in the CIDR
I0101 09:17:08.926142    2766 checks.go:467] validating http connectivity to first IP address in the CIDR
I0101 09:17:08.926178    2766 checks.go:102] validating the container runtime
I0101 09:17:08.927016    2766 checks.go:637] validating whether swap is enabled or not
I0101 09:17:08.927191    2766 checks.go:368] validating the presence of executable crictl
I0101 09:17:08.927293    2766 checks.go:368] validating the presence of executable conntrack
I0101 09:17:08.927331    2766 checks.go:368] validating the presence of executable ip
I0101 09:17:08.927369    2766 checks.go:368] validating the presence of executable iptables
I0101 09:17:08.927405    2766 checks.go:368] validating the presence of executable mount
I0101 09:17:08.927442    2766 checks.go:368] validating the presence of executable nsenter
I0101 09:17:08.927479    2766 checks.go:368] validating the presence of executable ethtool
I0101 09:17:08.927512    2766 checks.go:368] validating the presence of executable tc
I0101 09:17:08.927565    2766 checks.go:368] validating the presence of executable touch
I0101 09:17:08.927605    2766 checks.go:514] running all checks
I0101 09:17:08.941390    2766 checks.go:399] checking whether the given node name is valid and reachable using net.LookupHost
I0101 09:17:08.941828    2766 checks.go:603] validating kubelet version
I0101 09:17:09.013778    2766 checks.go:128] validating if the "kubelet" service is enabled and active
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0101 09:17:09.027024    2766 checks.go:201] validating availability of port 10250
I0101 09:17:09.027156    2766 checks.go:327] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0101 09:17:09.027330    2766 checks.go:201] validating availability of port 2379
I0101 09:17:09.027432    2766 checks.go:201] validating availability of port 2380
I0101 09:17:09.027550    2766 checks.go:241] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0101 09:17:09.030027    2766 images.go:80] WARNING: could not find officially supported version of etcd for Kubernetes v1.32.0, falling back to the nearest etcd version (3.5.15-0)
I0101 09:17:09.030090    2766 checks.go:832] using image pull policy: IfNotPresent
W0101 09:17:09.031052    2766 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
I0101 09:17:09.033771    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.0
I0101 09:17:16.690625    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.0
I0101 09:17:23.575148    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.0
I0101 09:17:29.427958    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/kube-proxy:v1.32.0
I0101 09:17:37.054594    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/coredns:v1.11.3
I0101 09:17:43.574636    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/pause:3.10
I0101 09:17:44.929429    2766 checks.go:871] pulling: registry.aliyuncs.com/google_containers/etcd:3.5.15-0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0101 09:17:58.489483    2766 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0101 09:17:59.936877    2766 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.29.16.1 192.168.8.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0101 09:18:01.377059    2766 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0101 09:18:03.218799    2766 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0101 09:18:04.332875    2766 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0101 09:18:06.977732    2766 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0101 09:18:10.455557    2766 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0101 09:18:10.855930    2766 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0101 09:18:11.347740    2766 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0101 09:18:11.688152    2766 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0101 09:18:12.190293    2766 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0101 09:18:13.161357    2766 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0101 09:18:13.708236    2766 images.go:80] WARNING: could not find officially supported version of etcd for Kubernetes v1.32.0, falling back to the nearest etcd version (3.5.15-0)
I0101 09:18:13.713834    2766 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0101 09:18:13.714027    2766 manifests.go:103] [control-plane] getting StaticPodSpecs
I0101 09:18:13.714500    2766 certs.go:473] validating certificate period for CA certificate
I0101 09:18:13.714687    2766 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0101 09:18:13.714767    2766 manifests.go:129] [control-plane] adding volume "etc-pki-ca-trust" for component "kube-apiserver"
I0101 09:18:13.714837    2766 manifests.go:129] [control-plane] adding volume "etc-pki-tls-certs" for component "kube-apiserver"
I0101 09:18:13.714857    2766 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0101 09:18:13.716746    2766 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0101 09:18:13.716863    2766 manifests.go:103] [control-plane] getting StaticPodSpecs
I0101 09:18:13.717331    2766 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0101 09:18:13.717415    2766 manifests.go:129] [control-plane] adding volume "etc-pki-ca-trust" for component "kube-controller-manager"
I0101 09:18:13.717453    2766 manifests.go:129] [control-plane] adding volume "etc-pki-tls-certs" for component "kube-controller-manager"
I0101 09:18:13.717498    2766 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0101 09:18:13.717517    2766 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0101 09:18:13.717561    2766 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0101 09:18:13.719110    2766 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0101 09:18:13.719208    2766 manifests.go:103] [control-plane] getting StaticPodSpecs
I0101 09:18:13.719600    2766 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0101 09:18:13.720666    2766 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0101 09:18:13.720760    2766 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.517289565s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 17.003014698s
I0101 09:18:32.522198    2766 kubeconfig.go:665] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0101 09:18:32.524875    2766 kubeconfig.go:738] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I0101 09:18:32.548427    2766 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0101 09:18:32.566210    2766 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0101 09:18:32.584736    2766 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I0101 09:18:32.584775    2766 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/cri-dockerd.sock" to the Node API object "k8s-master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: mkl5ok.ttqgpoybxarwwum8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0101 09:18:32.662880    2766 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0101 09:18:32.663948    2766 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0101 09:18:32.664531    2766 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0101 09:18:32.670237    2766 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0101 09:18:32.722880    2766 request.go:632] Waited for 52.518759ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... c/roles?timeout=10s
I0101 09:18:32.922885    2766 request.go:632] Waited for 193.402714ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... indings?timeout=10s
I0101 09:18:32.928726    2766 kubeletfinalize.go:123] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0101 09:18:32.931276    2766 kubeletfinalize.go:177] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I0101 09:18:33.171057    2766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0101 09:18:33.171183    2766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0101 09:18:33.325030    2766 request.go:632] Waited for 103.44617ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... indings?timeout=10s
[addons] Applied essential addon: CoreDNS
I0101 09:18:33.550063    2766 request.go:632] Waited for 94.465297ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/api/v ... ccounts?timeout=10s
I0101 09:18:33.723795    2766 request.go:632] Waited for 151.789723ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... m/roles?timeout=10s
I0101 09:18:33.922760    2766 request.go:632] Waited for 187.41919ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.8.190:6443/apis/ ... indings?timeout=10s
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.190:6443 --token mkl5ok.ttqgpoybxarwwum8 \
        --discovery-token-ca-cert-hash sha256:6951f50d3ba9e40f8d175cab4b9711eeb164aa06969b72b2f12a6d881fea666c
Create the directories
Adjust the path below to your own environment; it will hold the k8s binaries and the image files used later.

mkdir -p /approot1/k8s/{bin,images,pkg,tmp/{ssl,service}}
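The brace expansion in that one mkdir creates the whole tree at once. A quick way to sanity-check the layout before using it is to replay the same expansion against a scratch prefix (the /tmp path below is only for illustration) and list the result:

```shell
# Replay the directory layout under a scratch prefix instead of /approot1,
# so the structure can be inspected without touching the real path.
root=/tmp/k8s-layout-demo
rm -rf "$root"
mkdir -p "$root"/{bin,images,pkg,tmp/{ssl,service}}
find "$root" -type d | sort
```

Note that brace expansion is a bash feature; under a plain POSIX sh the braces are taken literally.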
Disable the firewall
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl disable firewalld"; \
ssh $i "systemctl stop firewalld"; \
done
Disable SELinux
Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "setenforce 0"; \
done
Permanently:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config"; \
done
Disable swap
Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "swapoff -a"; \
done
Permanently:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab"; \
done
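Because that sed edits /etc/fstab in place, it is worth dry-running the pattern on a scratch copy first. The file below is a made-up example, not a real fstab:

```shell
# Dry-run the fstab edit on a scratch copy before touching /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# same pattern as above: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line should come back commented; the root filesystem line must be untouched.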
Enable kernel modules
Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "modprobe ip_vs"; \
ssh $i "modprobe ip_vs_rr"; \
ssh $i "modprobe ip_vs_wrr"; \
ssh $i "modprobe ip_vs_sh"; \
ssh $i "modprobe nf_conntrack"; \
ssh $i "modprobe nf_conntrack_ipv4"; \
ssh $i "modprobe br_netfilter"; \
ssh $i "modprobe overlay"; \
done
Permanently:

vim /approot1/k8s/tmp/service/k8s-modules.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay
(Note: on kernels 4.19 and later, nf_conntrack_ipv4 has been merged into nf_conntrack, so that modprobe will fail; on such kernels simply drop that entry.)
Distribute to all nodes:
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/k8s-modules.conf $i:/etc/modules-load.d/; \
done
Enable the systemd module-autoload service:
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl enable systemd-modules-load"; \
ssh $i "systemctl restart systemd-modules-load"; \
ssh $i "systemctl is-active systemd-modules-load"; \
done
"active" in the output means the module-autoload service started successfully.
Configure kernel parameters
The parameters below apply to 3.x and 4.x kernels.

vim /approot1/k8s/tmp/service/kubernetes.conf
Before editing, run :set paste in vim so pasted content is not mangled (stray comments, broken alignment).

# enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# let iptables process bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# upper limit of the socket listen() backlog
net.core.somaxconn=32768
# max tracked connections, default nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# never swap; swap is used only when the system is OOM
vm.swappiness=0
# maximum number of memory map areas a process may have
vm.max_map_count=655360
# maximum number of file handles the kernel can allocate
fs.file-max=6553600
# TCP keepalive
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
Distribute to all nodes:
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/kubernetes.conf $i:/etc/sysctl.d/; \
done
Apply the parameters:
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sysctl -p /etc/sysctl.d/kubernetes.conf"; \
done
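A malformed line in the sysctl file makes sysctl -p fail on every node, so a cheap local pre-flight check is to confirm each non-comment line is key=value before distributing. This is a sketch against a scratch copy, not the real file:

```shell
# Pre-flight check: every non-comment, non-blank line should split into
# exactly key=value. The file path here is a scratch copy for illustration.
cat > /tmp/kubernetes.conf.demo <<'EOF'
# enable packet forwarding
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
vm.swappiness=0
EOF
bad=$(awk -F= '!/^#/ && NF && NF!=2 {n++} END{print n+0}' /tmp/kubernetes.conf.demo)
echo "malformed lines: $bad"
# prints: malformed lines: 0
```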
Flush iptables rules
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"; \
ssh $i "iptables -P FORWARD ACCEPT"; \
done
Configure the PATH variable
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "echo 'PATH=\$PATH:/approot1/k8s/bin' >> \$HOME/.bashrc"; \
done
source $HOME/.bashrc
(The backslashes keep $PATH and $HOME from expanding on the local machine, so the remote shell expands them instead.)
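A quoting subtlety worth knowing here: inside a double-quoted ssh command, single quotes are just literal characters, so an unescaped $PATH expands locally before ssh ever runs, while \$PATH stays literal for the remote shell. This can be demonstrated locally, with no ssh involved:

```shell
# Inside double quotes, '…' does not protect variables, so $PATH expands locally:
cmd="echo 'PATH=$PATH:/approot1/k8s/bin'"
case "$cmd" in
  *'$PATH'*) echo "variable kept literal" ;;
  *)         echo "variable already expanded locally" ;;
esac
# Escaping keeps it literal, so the REMOTE shell would expand it instead:
cmd="echo 'PATH=\$PATH:/approot1/k8s/bin'"
case "$cmd" in
  *'$PATH'*) echo "variable kept literal" ;;
  *)         echo "variable already expanded locally" ;;
esac
```

The first case prints "variable already expanded locally", the second "variable kept literal".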
Download the binaries
Run this on one node only.

GitHub downloads can be slow; alternatively, upload the files from elsewhere into /approot1/k8s/pkg/.

wget -O /approot1/k8s/pkg/kubernetes.tar.gz \
https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz

wget -O /approot1/k8s/pkg/etcd.tar.gz \
https://github.com/etcd-io/etcd/ ... -linux-amd64.tar.gz

Unpack and remove the unneeded files:

cd /approot1/k8s/pkg/
for i in $(ls *.tar.gz);do tar xvf $i && rm -f $i;done
mv kubernetes/server/bin/ kubernetes/
rm -rf kubernetes/{addons,kubernetes-src.tar.gz,LICENSES,server}
rm -f kubernetes/bin/*_tag kubernetes/bin/*.tar
rm -rf etcd-v3.5.1-linux-amd64/Documentation etcd-v3.5.1-linux-amd64/*.md
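The unpack-and-delete loop is destructive (each archive is removed right after extraction), so it can be rehearsed on throwaway tarballs first. The /tmp paths and file names below are made up for the rehearsal:

```shell
# Build two throwaway tarballs, then replay the unpack-and-delete loop on them.
demo=/tmp/pkg-demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"
mkdir a b && touch a/etcd b/kubectl
tar czf a.tar.gz a && tar czf b.tar.gz b && rm -rf a b
# same loop as above: extract each archive, then delete it on success
for i in $(ls *.tar.gz);do tar xvf $i && rm -f $i;done
ls
```

Afterwards only the extracted directories remain; the .tar.gz files are gone.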
Deploy the master node
Create the CA root certificate
wget -O /approot1/k8s/bin/cfssl https://github.com/cloudflare/cf ... l_1.6.1_linux_amd64
wget -O /approot1/k8s/bin/cfssljson https://github.com/cloudflare/cf ... n_1.6.1_linux_amd64
chmod +x /approot1/k8s/bin/*
vim /approot1/k8s/tmp/ssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
vim /approot1/k8s/tmp/ssl/ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
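cfssl gives unhelpful errors when a CSR file has a stray comma or bracket, so it pays to validate each JSON file before feeding it in. A minimal check, assuming python3 is available (the file below is a scratch copy, not one of the real CSR files):

```shell
# Validate a CSR JSON file before handing it to cfssl.
cat > /tmp/ca-csr.demo.json <<'EOF'
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 }
}
EOF
if python3 -m json.tool /tmp/ca-csr.demo.json > /dev/null; then
  echo "JSON OK"
else
  echo "JSON malformed"
fi
```

Run the same check against each of the *-csr.json files in /approot1/k8s/tmp/ssl/ before every cfssl gencert call.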
Deploy the etcd component
Create the etcd certificate
vim /approot1/k8s/tmp/ssl/etcd-csr.json
Change 192.168.91.19 here to your own IP; do not copy-paste blindly.

Mind the JSON format.

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Manage etcd with systemd
vim /approot1/k8s/tmp/service/kube-etcd.service.192.168.91.19
Change 192.168.91.19 here to your own IP; do not copy-paste blindly.

etcd parameters:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/approot1/k8s/data/etcd
ExecStart=/approot1/k8s/bin/etcd \
  --name=etcd-192.168.91.19 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.91.19:2380 \
  --listen-peer-urls=https://192.168.91.19:2380 \
  --listen-client-urls=https://192.168.91.19:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.91.19:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd-192.168.91.19=https://192.168.91.19:2380 \
  --initial-cluster-state=new \
  --data-dir=/approot1/k8s/data/etcd \
  --wal-dir= \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --auto-compaction-mode=periodic \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
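Since the node IP is embedded in many flags of this unit file, hand-editing one copy per node invites typos. One option (my own convention, not part of the original steps) is to keep a template with a NODE_IP placeholder and render each kube-etcd.service.<ip> file with sed:

```shell
# Render a per-node unit file from a placeholder template (sketch; the
# template path and NODE_IP placeholder are illustrative conventions).
cat > /tmp/kube-etcd.service.tpl <<'EOF'
ExecStart=/approot1/k8s/bin/etcd \
  --name=etcd-NODE_IP \
  --listen-peer-urls=https://NODE_IP:2380 \
  --advertise-client-urls=https://NODE_IP:2379
EOF
for ip in 192.168.91.19; do
  sed "s/NODE_IP/$ip/g" /tmp/kube-etcd.service.tpl > /tmp/kube-etcd.service.$ip
done
grep -- --name /tmp/kube-etcd.service.192.168.91.19
```

Add further IPs to the for list for a multi-node cluster; every occurrence of the placeholder is substituted in one pass.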
Distribute certificates and create the paths
For a multi-node setup, append the other IPs after 192.168.91.19, separated by spaces. Change 192.168.91.19 to your own IP; do not copy-paste blindly.

Also make sure the paths match your own plan; if they differ from mine, adjust them, or the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -m 700 -p /approot1/k8s/data/etcd"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,etcd*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-etcd.service.$i $i:/etc/systemd/system/kube-etcd.service; \
scp /approot1/k8s/pkg/etcd-v3.5.1-linux-amd64/etcd* $i:/approot1/k8s/bin/; \
done
Start the etcd service
For a multi-node setup, append the other IPs after 192.168.91.19, separated by spaces. Change 192.168.91.19 to your own IP; do not copy-paste blindly.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-etcd"; \
ssh $i "systemctl restart kube-etcd --no-block"; \
ssh $i "systemctl is-active kube-etcd"; \
done
"activating" means etcd is still starting; wait a moment, then re-run: for i in 192.168.91.19;do ssh $i "systemctl is-active kube-etcd";done

"active" means etcd started successfully. With a multi-node etcd cluster it is normal for one member not to return active; verify the cluster as follows.

For a multi-node setup, append the other IPs after 192.168.91.19, separated by spaces. Change 192.168.91.19 to your own IP; do not copy-paste blindly.

for i in 192.168.91.19;do \
ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
        --endpoints=https://${i}:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/kubernetes/ssl/etcd.pem \
        --key=/etc/kubernetes/ssl/etcd-key.pem \
        endpoint health"; \
done
https://192.168.91.19:2379 is healthy: successfully committed proposal: took = 7.135668ms

Output like the above, containing "successfully", means the node is healthy.
Deploy the apiserver component
Create the apiserver certificate
vim /approot1/k8s/tmp/ssl/kubernetes-csr.json
Change 192.168.91.19 here to your own IP; do not copy-paste blindly.

Mind the JSON format.

10.88.0.1 is the first IP of the k8s service network; make sure that network does not overlap with any existing network, to avoid conflicts.

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19",
    "10.88.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
Create the metrics-server certificate
vim /approot1/k8s/tmp/ssl/metrics-server-csr.json
{
  "CN": "aggregator",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
Manage apiserver with systemd
vim /approot1/k8s/tmp/service/kube-apiserver.service.192.168.91.19
Change 192.168.91.19 here to your own IP; do not copy-paste blindly.

The --service-cluster-ip-range network must be the same network as the 10.88.0.1 entry in kubernetes-csr.json.

If etcd has multiple nodes, list all of them in --etcd-servers.

apiserver parameters:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/approot1/k8s/bin/kube-apiserver \
  --allow-privileged=true \
  --anonymous-auth=false \
  --api-audiences=api,istio-ca \
  --authorization-mode=Node,RBAC \
  --bind-address=192.168.91.19 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.91.19:2379 \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
  --secure-port=6443 \
  --service-account-issuer=https://kubernetes.default.svc \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --service-cluster-ip-range=10.88.0.0/16 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
  --enable-aggregator-routing=true \
  --v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute certificates and create the paths
For a multi-node setup, append the other IPs after 192.168.91.19, separated by spaces. Change 192.168.91.19 to your own IP; do not copy-paste blindly.

Also make sure the paths match your own plan; if they differ from mine, adjust them, or the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kubernetes*.pem,metrics-server*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-apiserver.service.$i $i:/etc/systemd/system/kube-apiserver.service; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-apiserver $i:/approot1/k8s/bin/; \
done
Start the apiserver service
For a multi-node setup, append the other IPs after 192.168.91.19, separated by spaces. Change 192.168.91.19 to your own IP; do not copy-paste blindly.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-apiserver"; \
ssh $i "systemctl restart kube-apiserver --no-block"; \
ssh $i "systemctl is-active kube-apiserver"; \
done
"activating" means the apiserver is still starting; wait a moment, then re-run: for i in 192.168.91.19;do ssh $i "systemctl is-active kube-apiserver";done

"active" means the apiserver started successfully.
Verify with curl:

curl -k --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/kubernetes.pem \
--key /etc/kubernetes/ssl/kubernetes-key.pem \
https://192.168.91.19:6443/api
Output like the following means the apiserver is running normally:

{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.91.19:6443"
    }
  ]
}
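For scripted health checks it can be handy to reduce that response to the bare server address. A sketch with sed, run against a saved sample of the response (the /tmp path and inline data are only for illustration):

```shell
# Extract serverAddress from a saved /api response (sample data inline).
cat > /tmp/api-response.json <<'EOF'
{
  "kind": "APIVersions",
  "versions": [ "v1" ],
  "serverAddressByClientCIDRs": [
    { "clientCIDR": "0.0.0.0/0", "serverAddress": "192.168.91.19:6443" }
  ]
}
EOF
sed -n 's/.*"serverAddress": "\([^"]*\)".*/\1/p' /tmp/api-response.json
# prints: 192.168.91.19:6443
```

In a live check you would pipe the curl output straight into the same sed.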
List all k8s kinds (object types):

curl -s -k --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/kubernetes.pem \
--key /etc/kubernetes/ssl/kubernetes-key.pem \
https://192.168.91.19:6443/api/v1/ | grep kind | sort -u
  "kind": "APIResourceList",
      "kind": "Binding",
      "kind": "ComponentStatus",
      "kind": "ConfigMap",
      "kind": "Endpoints",
      "kind": "Event",
      "kind": "Eviction",
      "kind": "LimitRange",
      "kind": "Namespace",
      "kind": "Node",
      "kind": "NodeProxyOptions",
      "kind": "PersistentVolume",
      "kind": "PersistentVolumeClaim",
      "kind": "Pod",
      "kind": "PodAttachOptions",
      "kind": "PodExecOptions",
      "kind": "PodPortForwardOptions",
      "kind": "PodProxyOptions",
      "kind": "PodTemplate",
      "kind": "ReplicationController",
      "kind": "ResourceQuota",
      "kind": "Scale",
      "kind": "Secret",
      "kind": "Service",
      "kind": "ServiceAccount",
      "kind": "ServiceProxyOptions",
      "kind": "TokenRequest",
4 h! ^( A9 h1 s4 U% a. I配置 kubectl 管理+ N6 V5 C6 Z# R1 F
创建 admin 证书3 _! E1 t+ T3 P% l  C
vim /approot1/k8s/tmp/ssl/admin-csr.json7 I6 X: x) n# c
{5 X! n2 P4 Y- q7 y3 R: j
  "CN": "admin"," j4 {* w! `/ A9 J; v* L
  "hosts": [  C- c4 n' X9 h& w' r$ A, e+ J! G
  ],
# X* l. ]  ^! ~& y. H  "key": {( W* x& z+ d3 H0 ?5 T2 B0 p9 T
    "algo": "rsa",: }: S# z! w6 D. D
    "size": 2048/ G( N( L1 D8 L  `
  },
  I; L1 U1 ~0 Z! I6 t' K% `0 |" I6 f  "names": [: t# K! K7 d( T% Y
    {+ ~+ f, ?6 D( \$ t
      "C": "CN",  l: ~' g- V; p- b) G
      "ST": "ShangHai",: }* [! Z. c( Z  R: K. H
      "L": "ShangHai",1 m$ D' I) J$ F9 @1 q: ?' R; O
      "O": "system:masters",% _/ M! e: r0 v8 F
      "OU": "System", [6 R5 n0 S- |/ R0 V
    }& K4 @6 c9 |' b+ R% P# M
  ]
0 p0 I/ K2 k3 `9 [& b- x, z}4 U% o" n; i( i7 a
cd /approot1/k8s/tmp/ssl/, E  {7 r, j8 Z& ]  W( H
cfssl gencert -ca=ca.pem \
7 ]) p- ~! z! @2 _/ P& u/ S3 u-ca-key=ca-key.pem \
- \9 b, C3 I  F' [( k-config=ca-config.json \  m$ r% f& q: u( E- ?7 y/ ]
-profile=kubernetes admin-csr.json | cfssljson -bare admin
Create the kubeconfig
Set the cluster parameters

--server is the apiserver address; change it to your own IP and the --secure-port from the service file. The https:// scheme is mandatory; without it, kubectl cannot reach the apiserver with the generated config.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kubectl.kubeconfig
Set the client credentials

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig
Set the context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig
Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
Distribute the kubeconfig to all master nodes
For a multi-node setup, append the other IPs after 192.168.91.19, separated by spaces. Change 192.168.91.19 to your own IP; do not copy-paste blindly.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p $HOME/.kube"; \
scp /approot1/k8s/pkg/kubernetes/bin/kubectl $i:/approot1/k8s/bin/; \
ssh $i "echo 'source <(kubectl completion bash)' >> $HOME/.bashrc"; \
scp /approot1/k8s/tmp/ssl/kubectl.kubeconfig $i:$HOME/.kube/config; \
done
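After distributing the file, a quick check that the default context took effect is to look for the current-context field in the copied kubeconfig. A local sketch on a minimal sample file (in practice you would grep the real $HOME/.kube/config written above):

```shell
# Check current-context in a kubeconfig (sample file for illustration).
cat > /tmp/kubectl.kubeconfig.demo <<'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
EOF
awk '/^current-context:/{print $2}' /tmp/kubectl.kubeconfig.demo
# prints: kubernetes
```

If the field is empty, the use-context step was skipped and every kubectl call will fail with "no context chosen".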
Deploy the controller-manager component
Create the controller-manager certificate
vim /approot1/k8s/tmp/ssl/kube-controller-manager-csr.json
Change 192.168.91.19 here to your own IP; do not copy-paste blindly.

Mind the JSON format.

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-controller-manager",
        "OU": "System"
      }
    ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
0 {9 J( R! }) u9 N; b创建 kubeconfig 证书
5 C+ j4 e0 C4 X6 C+ A& z, }设置集群参数
6 f2 @' z  ~; `2 _" a2 u8 F% B8 ~8 E- r1 U5 R- Q9 n
--server 为 apiserver 的访问地址,修改成自己的 ip 地址和 service 文件里面指定的 --secure-port 参数的端口,切记,一定要带上https:// 协议,否则生成的证书,kubectl 命令访问不到 apiserver
1 W7 c) g6 }4 P. O
" y* F; h* j, b& k6 Ycd /approot1/k8s/tmp/ssl/0 `  a( N, I, P4 C9 ~9 H( [; s* T
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \  \+ H2 y8 g2 f. F( a
--certificate-authority=ca.pem \5 S- `1 d5 Y1 F' U) Y
--embed-certs=true \
1 p# x. ?1 a# j! L; F" P--server=https://192.168.91.19:6443 \6 \1 v. ^: K, \$ [6 I3 T0 Q
--kubeconfig=kube-controller-manager.kubeconfig
1 C' b3 }/ ^, s$ ^& H- L设置客户端认证参数
2 H: B+ r& z2 k% n- I# o" W1 m
% n( m3 f% t: D: j8 O  Pcd /approot1/k8s/tmp/ssl/( Q/ J  \/ L% W
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-controller-manager \
8 I% m! q8 r0 S0 c5 G) |--client-certificate=kube-controller-manager.pem \( S9 W4 H" F1 ~9 F: C$ x; N
--client-key=kube-controller-manager-key.pem \# o  k( d: ^5 v& l! A
--embed-certs=true \
- ~" P; f6 H% [- o) \$ B- o) v+ L/ t, r--kubeconfig=kube-controller-manager.kubeconfig; A; F1 N4 y/ e) u, ?& Z# K
设置上下文参数( x% h; |% f9 L9 E: T

- Y' i) i. b( Z. ocd /approot1/k8s/tmp/ssl/7 B& J, t; V, {" q- X5 \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-controller-manager \
$ c. l& D) Z5 a6 A--cluster=kubernetes \( T0 Z1 s/ O8 F3 j
--user=system:kube-controller-manager \" \* T" X4 t3 j+ m/ Y
--kubeconfig=kube-controller-manager.kubeconfig( O/ ~4 u8 N- j# \9 {$ t* ]( q+ N8 \
Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
Configure controller-manager as a systemd service

vim /approot1/k8s/tmp/service/kube-controller-manager.service

Replace 192.168.91.19 here with your own ip; do not copy-paste blindly.

The --service-cluster-ip-range network must be the same network as the 10.88.0.1 address in kubernetes-csr.json.

--cluster-cidr is the pod network. It must not overlap with the --service-cluster-ip-range network or with any existing network, to avoid conflicts.

controller-manager parameters
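The no-overlap rule can be sanity-checked before the values go into the unit file (a sketch, assuming python3 is on the host; the two CIDRs are the ones used throughout this guide):

```shell
# Print "ok" when the pod CIDR and the service CIDR are disjoint.
python3 -c '
import ipaddress
pod = ipaddress.ip_network("172.20.0.0/16")   # --cluster-cidr
svc = ipaddress.ip_network("10.88.0.0/16")    # --service-cluster-ip-range
print("overlap" if pod.overlaps(svc) else "ok")
'
# prints "ok"
```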

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/approot1/k8s/bin/kube-controller-manager \
  --bind-address=0.0.0.0 \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.88.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the extra ips after 192.168.91.19, separated by spaces. Remember to change 192.168.91.19 to your own ip; do not copy-paste blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/kube-controller-manager.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-controller-manager.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-controller-manager $i:/approot1/k8s/bin/; \
done
Start the controller-manager service

For multiple nodes, just append the extra ips after 192.168.91.19, separated by spaces. Remember to change 192.168.91.19 to your own ip; do not copy-paste blindly.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-controller-manager"; \
ssh $i "systemctl restart kube-controller-manager --no-block"; \
ssh $i "systemctl is-active kube-controller-manager"; \
done
A return value of activating means controller-manager is still starting; wait a moment, then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-controller-manager";done again.

A return value of active means controller-manager started successfully.
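Rather than re-running the is-active check by hand, a small polling helper can wait for the unit (a sketch; wait_active is a hypothetical helper, and the ssh command is the same one used above):

```shell
# Poll a status command until it prints "active", or give up after N tries.
wait_active() {
  tries=$1; shift
  out=""
  while [ "$tries" -gt 0 ]; do
    out=$("$@" 2>/dev/null)
    [ "$out" = "active" ] && { echo active; return 0; }
    tries=$((tries - 1))
    sleep 1
  done
  echo "${out:-unknown}"
  return 1
}

# e.g.: wait_active 30 ssh 192.168.91.19 "systemctl is-active kube-controller-manager"
```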

Deploy the scheduler component

Create the scheduler certificate

vim /approot1/k8s/tmp/ssl/kube-scheduler-csr.json

Replace 192.168.91.19 here with your own ip; do not copy-paste blindly.

Mind the json format.

{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Create the kubeconfig

Set the cluster parameters

--server is the apiserver address: use your own ip and the port set by the --secure-port flag in the service file. Be sure to include the https:// prefix, otherwise kubectl will not be able to reach the apiserver with the generated credentials.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-scheduler.kubeconfig
Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
Configure scheduler as a systemd service

vim /approot1/k8s/tmp/service/kube-scheduler.service

scheduler parameters

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/approot1/k8s/bin/kube-scheduler \
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --bind-address=0.0.0.0 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the extra ips after 192.168.91.19, separated by spaces. Remember to change 192.168.91.19 to your own ip; do not copy-paste blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kube-scheduler.kubeconfig} $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-scheduler.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-scheduler $i:/approot1/k8s/bin/; \
done
Start the scheduler service

For multiple nodes, just append the extra ips after 192.168.91.19, separated by spaces. Remember to change 192.168.91.19 to your own ip; do not copy-paste blindly.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-scheduler"; \
ssh $i "systemctl restart kube-scheduler --no-block"; \
ssh $i "systemctl is-active kube-scheduler"; \
done
A return value of activating means scheduler is still starting; wait a moment, then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-scheduler";done again.

A return value of active means scheduler started successfully.

Deploy the work nodes

Deploy the containerd component

Download the binaries

When downloading containerd from github, pick the file whose name starts with cri-containerd-cni. That package bundles containerd together with the crictl management tool and the cni network plugins, and the systemd service file, config.toml, crictl.yaml, and cni configuration files come pre-configured; a few small edits and it is ready to use.

Although cri-containerd-cni also ships a runc, it is missing dependencies, so download a fresh one from the runc github anyway.

wget -O /approot1/k8s/pkg/containerd.tar.gz \
https://github.com/containerd/co ... -linux-amd64.tar.gz
wget -O /approot1/k8s/pkg/runc https://github.com/opencontainer ... d/v1.0.3/runc.amd64
mkdir /approot1/k8s/pkg/containerd
cd /approot1/k8s/pkg/
for i in $(ls *containerd*.tar.gz);do tar xvf $i -C /approot1/k8s/pkg/containerd && rm -f $i;done
chmod +x /approot1/k8s/pkg/runc
mv /approot1/k8s/pkg/containerd/usr/local/bin/{containerd,containerd-shim*,crictl,ctr} /approot1/k8s/pkg/containerd/
mv /approot1/k8s/pkg/containerd/opt/cni/bin/{bridge,flannel,host-local,loopback,portmap} /approot1/k8s/pkg/containerd/
rm -rf /approot1/k8s/pkg/containerd/{etc,opt,usr}
Configure containerd as a systemd service

vim /approot1/k8s/tmp/service/containerd.service

Mind the binary paths.

If the runc binary is not under /usr/bin/, the unit needs an Environment line that adds the runc directory to PATH; otherwise, when k8s starts a pod, it will fail with exec: "runc": executable file not found in $PATH: unknown
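The PATH behaviour described above can be demonstrated in isolation, without touching the real runc (a sketch using a throwaway stub binary in a temp directory):

```shell
# A dummy "runc" in a temp dir stands in for /approot1/k8s/bin/runc.
dir=$(mktemp -d)        # plays the role of /approot1/k8s/bin
empty=$(mktemp -d)      # a PATH with no runc on it
printf '#!/bin/sh\necho runc-stub\n' > "$dir/runc"
chmod +x "$dir/runc"

# Without the directory on PATH the lookup fails, which is exactly the
# 'executable file not found in $PATH' error reported at pod start.
env PATH="$empty" /bin/sh -c 'command -v runc' >/dev/null 2>&1 || echo "runc not found"

# With the directory appended, as the Environment= line does, it is found.
env PATH="$empty:$dir" /bin/sh -c 'command -v runc' >/dev/null 2>&1 && echo "runc found"

rm -rf "$dir" "$empty"
```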

/ |4 c" x5 ^+ C: i( Y: g  O9 k; L[Unit]& ~3 G4 U  C. p. ]
Description=containerd container runtime  f2 u) R5 }( G3 i6 q( B8 ^
Documentation=https://containerd.io6 u6 P7 `7 r; n+ z+ g+ j
After=network.target: {0 s. @* C% ^4 ]2 c5 u7 K3 w
, r' n4 e' r1 I. q" K$ @+ X
[Service]
7 u8 |0 g7 @  d, c; l( r' j/ g) ^8 WEnvironment="PATH=$PATH:/approot1/k8s/bin"" l; F1 l% X- @3 |
ExecStartPre=-/sbin/modprobe overlay- `3 z9 A% i$ i1 R# {
ExecStart=/approot1/k8s/bin/containerd
( M- {6 c( i$ b1 j7 RRestart=always! U5 Y! y0 ?0 A9 x# z8 h
RestartSec=5
% W; h9 C/ a* O; LDelegate=yes$ W# E3 d' }9 o1 \, M' P5 S
KillMode=process
9 l$ F: M* G/ p* t3 yOOMScoreAdjust=-999
% ~) j, `6 V# H2 ^; Q1 uLimitNOFILE=1048576' O6 f4 e9 _+ }' ~
# Having non-zero Limit*s causes performance problems due to accounting overhead. S7 i4 N6 ~: f7 B$ n7 k1 t
# in the kernel. We recommend using cgroups to do container-local accounting.! N: o3 |0 Y) U1 q: n1 G
LimitNPROC=infinity
' C- F9 r4 c! R3 x& U  ^! WLimitCORE=infinity
& m! g2 L* B* T6 z% A1 m$ @7 [% Q
5 m1 C  Y1 J6 y# d( B# N/ ^8 [2 [[Install]0 i8 }0 Y: ~0 L6 W- \3 q
WantedBy=multi-user.target
Configure the containerd configuration file

vim /approot1/k8s/tmp/service/config.toml

root is the container storage path; change it to a path with plenty of disk space.

bin_dir is the storage path for the containerd service and the cni plugins.

sandbox_image is the pause image name and tag.

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/approot1/data/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/approot1/k8s/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = "/etc/cni/net.d/cni-default.conf"
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
& _) F0 D" p% M- X. d/ H  u7 {配置 crictl 管理工具
% ?- P8 P- W1 v; E* Pvim /approot1/k8s/tmp/service/crictl.yaml
8 B5 {+ Z* r( {, _$ Zruntime-endpoint: unix:///run/containerd/containerd.sock
Configure the cni network plugin

vim /approot1/k8s/tmp/service/cni-default.conf

The subnet parameter must match the controller-manager --cluster-cidr parameter.
{
        "name": "mynet",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "bridge": "mynet0",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": {
                "type": "host-local",
                "subnet": "172.20.0.0/16"
        }
}
# L. S9 j" w. C1 b$ P分发配置文件以及创建相关路径
* f  L! ?! y/ G. R5 K! L9 vfor i in 192.168.91.19 192.168.91.20;do \
+ T$ D  ?# u$ j- Wssh $i "mkdir -p /etc/containerd"; \+ H6 |( z. \+ ?
ssh $i "mkdir -p /approot1/k8s/bin"; \9 F% u4 j# n6 _! M
ssh $i "mkdir -p /etc/cni/net.d"; \
! ?& a- s% S# [5 x- escp /approot1/k8s/tmp/service/containerd.service $i:/etc/systemd/system/; \( `* T5 N. g3 u0 `( H8 X: _: L
scp /approot1/k8s/tmp/service/config.toml $i:/etc/containerd/; \7 r, U5 m8 ~* |3 @" K+ K
scp /approot1/k8s/tmp/service/cni-default.conf $i:/etc/cni/net.d/; \2 v3 O0 ]; h9 B
scp /approot1/k8s/tmp/service/crictl.yaml $i:/etc/; \
5 c4 p! G# C% [- x1 rscp /approot1/k8s/pkg/containerd/* $i:/approot1/k8s/bin/; \
+ X) v( A0 |: @( v% bscp /approot1/k8s/pkg/runc $i:/approot1/k8s/bin/; \6 i2 v  ?  P- U6 J2 Q  v
done4 `; t8 f+ B3 z' A. H! u" c
Start the containerd service

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable containerd"; \
ssh $i "systemctl restart containerd --no-block"; \
ssh $i "systemctl is-active containerd"; \
done
A return value of activating means containerd is still starting; wait a moment, then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active containerd";done again.

A return value of active means containerd started successfully.

Import the pause image

ctr has a quirk when importing images: for the imported image to be usable by k8s, you must add the -n k8s.io flag, and the command must take the form ctr -n k8s.io image import <xxx.tar>. Writing ctr image import <xxx.tar> -n k8s.io fails with ctr: flag provided but not defined: -n. An odd design, and it takes some getting used to.

If the image is imported without -n k8s.io, kubelet will re-pull the pause container when starting a pod, and if the configured image registry has no image with that tag, the pod will fail.

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/pause-v3.6.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/pause-v3.6.tar && rm -f /tmp/pause-v3.6.tar"; \
done
Check the image:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep pause"; \
done
4 y- I" G# b0 Q5 D+ s& a部署 kubelet 组件
8 a. B1 |6 K) m1 [5 ^, _创建 kubelet 证书
5 l  D5 P' W0 u: x( M! J# bvim /approot1/k8s/tmp/ssl/kubelet-csr.json.192.168.91.19* U/ ]* u! [$ z1 [- d* _: Z
这里的192.168.91.19需要改成自己的ip,不要一股脑的复制黏贴,有多少个node节点就创建多少个json文件,json文件内的 ip 也要修改为 work 节点的 ip,别重复了
1 o" C; q6 r9 R
{
    "CN": "system:node:192.168.91.19",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:nodes",
        "OU": "System"
      }
    ]
}
for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubelet-csr.json.$i | cfssljson -bare kubelet.$i; \
done
Create the kubeconfig

Set the cluster parameters

--server is the apiserver address: use your own ip and the port set by the --secure-port flag in the service file. Be sure to include the https:// prefix, otherwise kubectl will not be able to reach the apiserver with the generated credentials.

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Set the client authentication parameters

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:node:$i \
--client-certificate=kubelet.$i.pem \
--client-key=kubelet.$i-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Set the context parameters

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=system:node:$i \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Set the default context

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
--kubeconfig=kubelet.kubeconfig.$i; \
done
Configure the kubelet configuration file
vim /approot1/k8s/tmp/service/config.yaml
Note the clusterDNS IP: it must be in the same subnet as the apiserver's --service-cluster-ip-range, but must differ from the kubernetes service IP. By convention the kubernetes service takes the first IP of the range and clusterDNS takes the second.
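The first-IP/second-IP convention above can be derived mechanically from the service CIDR. A small sketch, assuming a range like 10.88.0.0/16 (adjust to your own --service-cluster-ip-range):

```shell
# Sketch: derive the kubernetes service IP (first host IP) and clusterDNS IP
# (second host IP) from a service CIDR whose network address ends in .0.
cidr="10.88.0.0/16"
base=${cidr%/*}        # 10.88.0.0
prefix=${base%.*}      # 10.88.0
svc_ip="${prefix}.1"   # kubernetes service -> 10.88.0.1
dns_ip="${prefix}.2"   # clusterDNS        -> 10.88.0.2
echo "service=$svc_ip dns=$dns_ip"   # prints: service=10.88.0.1 dns=10.88.0.2
```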
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.88.0.2
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 300Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth
healthzBindAddress: 0.0.0.0
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /etc/kubernetes/ssl/kubelet.pem
tlsPrivateKeyFile: /etc/kubernetes/ssl/kubelet-key.pem
Manage kubelet with systemd
vim /approot1/k8s/tmp/service/kubelet.service.192.168.91.19
Replace 192.168.91.19 with your own IP; do not copy-paste blindly. Create one service file per node, and set the IP inside each file to that worker node's IP. Do not reuse the same IP.

The --container-runtime parameter defaults to docker. If you use anything other than docker, set it to remote and point --container-runtime-endpoint at the runtime's sock file.

kubelet parameters:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/approot1/k8s/data/kubelet
ExecStart=/approot1/k8s/bin/kubelet \
  --config=/approot1/k8s/data/kubelet/config.yaml \
  --cni-bin-dir=/approot1/k8s/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --hostname-override=192.168.91.19 \
  --image-pull-progress-deadline=5m \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.6 \
  --root-dir=/approot1/k8s/data/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
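Rather than hand-editing a copy of this unit file for every node, the per-node files can be stamped out from a template. A sketch, with a hypothetical __NODE_IP__ placeholder standing in for the node address (the template here is trimmed to the line that must differ per node):

```shell
# Sketch: generate one kubelet.service per node by substituting the node IP
# into a template. File names follow this guide's kubelet.service.<ip> scheme.
mkdir -p /tmp/svc && cd /tmp/svc
cat > kubelet.service.tpl <<'EOF'
ExecStart=/approot1/k8s/bin/kubelet \
  --hostname-override=__NODE_IP__ \
  --v=2
EOF
for ip in 192.168.91.19 192.168.91.20; do
  sed "s/__NODE_IP__/$ip/" kubelet.service.tpl > "kubelet.service.$ip"
done
ls kubelet.service.192.168.91.*
```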
Distribute the certificates and create the required paths
For more nodes, append their IPs after 192.168.91.19, separated by spaces. Replace 192.168.91.19 with your own IPs; do not copy blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /approot1/k8s/data/kubelet"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/ssl/kubelet.$i.pem $i:/etc/kubernetes/ssl/kubelet.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.$i-key.pem $i:/etc/kubernetes/ssl/kubelet-key.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.kubeconfig.$i $i:/etc/kubernetes/kubelet.kubeconfig; \
scp /approot1/k8s/tmp/service/kubelet.service.$i $i:/etc/systemd/system/kubelet.service; \
scp /approot1/k8s/tmp/service/config.yaml $i:/approot1/k8s/data/kubelet/; \
scp /approot1/k8s/pkg/kubernetes/bin/kubelet $i:/approot1/k8s/bin/; \
done
Start the kubelet service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kubelet"; \
ssh $i "systemctl restart kubelet --no-block"; \
ssh $i "systemctl is-active kubelet"; \
done
activating means kubelet is still starting; wait a moment and run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kubelet";done again.

active means kubelet started successfully.

Check whether the nodes are Ready:
kubectl get node
Expect output like the following; a STATUS of Ready means the node is healthy.

NAME            STATUS   ROLES    AGE   VERSION
192.168.91.19   Ready    <none>   20m   v1.23.3
192.168.91.20   Ready    <none>   20m   v1.23.3
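When scripting this check, a small helper that counts Ready rows in `kubectl get node --no-headers` output lets you poll until all nodes are up. A sketch; the live polling line is left commented since it needs a running cluster:

```shell
# Sketch: count nodes whose STATUS column reads exactly "Ready".
count_ready() { awk '$2 == "Ready" {n++} END {print n+0}'; }

# On a live control node you might poll like this (uncomment there):
# while [ "$(kubectl get node --no-headers | count_ready)" -lt 2 ]; do sleep 5; done

# Demo against captured output:
printf '192.168.91.19 Ready <none> 20m v1.23.3\n192.168.91.20 NotReady <none> 20m v1.23.3\n' | count_ready   # prints 1
```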
Deploy the kube-proxy component
Create the kube-proxy certificate

vim /approot1/k8s/tmp/ssl/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-proxy",
        "OU": "System"
      }
    ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
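Before building the kubeconfig it's worth confirming the generated certificate actually carries the CN that RBAC expects (system:kube-proxy). A sketch; the demo generates a throwaway self-signed cert so it runs anywhere, but on the real host you would inspect kube-proxy.pem:

```shell
# Sketch: inspect a client certificate's subject; kube-proxy authorization
# keys off CN=system:kube-proxy. The demo cert below is self-signed junk.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/CN=system:kube-proxy/O=system:kube-proxy" 2>/dev/null
# On the real host: openssl x509 -in kube-proxy.pem -noout -subject
openssl x509 -in /tmp/demo.pem -noout -subject
```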
Create the kubeconfig
Set the cluster parameters

--server is the apiserver address. Change it to your own IP and to the port given by the --secure-port parameter in the apiserver service file. Be sure to include the https:// scheme; without it, kubectl cannot reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-proxy.kubeconfig
Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
--kubeconfig=kube-proxy.kubeconfig
Configure the kube-proxy configuration file
vim /approot1/k8s/tmp/service/kube-proxy-config.yaml.192.168.91.19
Replace 192.168.91.19 with your own IP; do not copy-paste blindly. Create one config file per node, and set the IP inside each file to that worker node's IP. Do not reuse the same IP.

The clusterCIDR parameter must match the controller-manager's --cluster-cidr parameter.

hostnameOverride must match the kubelet's --hostname-override parameter, otherwise you will see "node not found" errors.
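The two values called out above can be cross-checked mechanically before distribution. A sketch using made-up demo files in /tmp; point the greps at your real kubelet.service.$i and kube-proxy-config.yaml.$i instead:

```shell
# Sketch: verify kubelet's --hostname-override matches kube-proxy's
# hostnameOverride (a mismatch produces "node not found"). Demo files only.
printf '  --hostname-override=192.168.91.19 \\\n' > /tmp/kubelet.service.demo
printf 'hostnameOverride: "192.168.91.19"\n' > /tmp/kube-proxy-config.demo
kubelet_ip=$(grep -o 'hostname-override=[0-9.]*' /tmp/kubelet.service.demo | cut -d= -f2)
proxy_ip=$(grep -o '[0-9][0-9.]*' /tmp/kube-proxy-config.demo)
[ "$kubelet_ip" = "$proxy_ip" ] && echo match
```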
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "172.20.0.0/16"
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "192.168.91.19"
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
Manage kube-proxy with systemd
vim /approot1/k8s/tmp/service/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic
## when --cluster-cidr or --masquerade-all is specified,
## kube-proxy SNATs requests that access Service IPs
WorkingDirectory=/approot1/k8s/data/kube-proxy
ExecStart=/approot1/k8s/bin/kube-proxy \
  --config=/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths
For more nodes, append their IPs after 192.168.91.19, separated by spaces. Replace 192.168.91.19 with your own IPs; do not copy blindly.

Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /approot1/k8s/data/kube-proxy"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
scp /approot1/k8s/tmp/ssl/kube-proxy.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-proxy.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/kube-proxy-config.yaml.$i $i:/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-proxy $i:/approot1/k8s/bin/; \
done
Start the kube-proxy service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-proxy"; \
ssh $i "systemctl restart kube-proxy --no-block"; \
ssh $i "systemctl is-active kube-proxy"; \
done
activating means kube-proxy is still starting; wait a moment and run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kube-proxy";done again.

active means kube-proxy started successfully.

Deploy the flannel component
flannel GitHub

Configure the flannel yaml file
vim /approot1/k8s/tmp/service/flannel.yaml
The Network parameter in net-conf.json must match the controller-manager's --cluster-cidr parameter.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.20.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Configure the flannel CNI network config file
vim /approot1/k8s/tmp/service/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
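A malformed conflist leaves nodes NotReady with little indication why, so it's cheap to syntax-check the JSON before distributing it. A sketch, writing a trimmed demo copy to /tmp (run it against the real file instead):

```shell
# Sketch: syntax-check a CNI conflist with python's stdlib JSON parser
# before copying it to the nodes. The file below is a trimmed demo copy.
cat > /tmp/10-flannel.conflist <<'EOF'
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [ { "type": "flannel" }, { "type": "portmap" } ]
}
EOF
python3 -m json.tool /tmp/10-flannel.conflist >/dev/null && echo "JSON OK"
```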
Import the flannel image
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/flannel-v0.15.1.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/flannel-v0.15.1.tar && rm -f /tmp/flannel-v0.15.1.tar"; \
done
List the images:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep flannel"; \
done
Distribute the flannel CNI network config file
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "rm -f /etc/cni/net.d/10-default.conf"; \
scp /approot1/k8s/tmp/service/10-flannel.conflist $i:/etc/cni/net.d/; \
done
After the CNI config file is distributed, the nodes will briefly show NotReady. Wait until all nodes are Ready again before running the flannel component.

Run the flannel component in k8s
kubectl apply -f /approot1/k8s/tmp/service/flannel.yaml
Check whether the flannel pods are running:
kubectl get pod -n kube-system | grep flannel
Expect output like the following.

flannel runs as a DaemonSet, so its pods live and die with the nodes: k8s runs one flannel pod per node, and when a node is deleted its flannel pod is deleted with it.

kube-flannel-ds-86rrv   1/1     Running       0          8m54s
kube-flannel-ds-bkgzx   1/1     Running       0          8m53s
On the SUSE 12 distribution the pods may show Init:CreateContainerError. Run kubectl describe pod -n kube-system <flannel_pod_name> to find the cause. If the error is Error: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH, locate the binary with which apparmor_parser, symlink it into the directory containing the kubelet command, and restart the pod. Note that this symlink must be created on every node running flannel.

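The SUSE 12 workaround above can be sketched as a small helper; the demo call below targets a scratch directory, but on a real node you would pass the kubelet bin directory (/approot1/k8s/bin in this guide):

```shell
# Sketch: symlink apparmor_parser into the directory that holds the kubelet
# binary, as described above. Safe to run where apparmor_parser is absent.
link_apparmor() {  # $1 = kubelet bin directory
  ap=$(command -v apparmor_parser) || return 0   # absent: nothing to link
  ln -sf "$ap" "$1/apparmor_parser"
}
# Demo against a scratch directory; on a real node use /approot1/k8s/bin:
mkdir -p /tmp/kbin && link_apparmor /tmp/kbin
```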
Deploy the coredns component
Configure the coredns yaml file
vim /approot1/k8s/tmp/service/coredns.yaml
The clusterIP parameter must match the clusterDNS parameter in the kubelet configuration file.
  t4 j1 u" L- l' E9 ?1 \. m& fapiVersion: v1
# I& c4 W# P. d( w, e: t1 Fkind: ServiceAccount
4 ^; U. u3 {: Imetadata:) {( h( {$ l! i! B. Z2 S7 T% b
  name: coredns1 ~6 L) N/ G$ F" P8 }
  namespace: kube-system
/ F6 d+ |( o) L  labels:
1 s- }. J- R* k1 l      kubernetes.io/cluster-service: "true". i0 s0 o- G4 z' J+ ~" i! w. H$ N
      addonmanager.kubernetes.io/mode: Reconcile6 y0 \. Z. W% v: |* U+ J. u; k
---
/ ^( c5 a) B% S% }+ C* p1 z* A7 J1 gapiVersion: rbac.authorization.k8s.io/v19 c: o+ Q; j9 l6 P2 c" Z1 r
kind: ClusterRole
0 s" Q, u5 B3 w! {metadata:
6 s" _( n7 z  G$ `; {% r  labels:; ^8 \/ E% F9 E4 j
    kubernetes.io/bootstrapping: rbac-defaults5 f3 {' R* }7 s( ?
    addonmanager.kubernetes.io/mode: Reconcile, B/ V2 d4 s3 @$ H2 `
  name: system:coredns
# }7 x1 Q- Q2 l' s4 w9 g  p' j' Grules:
# c& R; U( Z4 a/ z. r# `- apiGroups:
; Z" l# W; U* K& l- d3 y  - "", k  q8 Q0 Z& B1 ?9 }) Z
  resources:/ E9 E/ J6 m9 K, p6 Q
  - endpoints
0 q& _, l3 k7 _  - services1 O+ h  R6 f6 `
  - pods" l  G1 @* Y$ U3 J; R
  - namespaces$ [3 W: @) k4 N
  verbs:
+ V: E( }: A3 P$ g7 M. {5 d$ q4 m  - list
/ `6 C6 j/ G" G) J  - watch
) x( y, E& W! ~; k; d- apiGroups:
/ u1 s+ N  R8 g6 W7 ~+ b  `  - ""6 R/ \0 n) n2 ]9 d1 `
  resources:
, p- _' t3 L  W. u( i  - nodes% ]3 ^# l" A/ \( n9 @2 \% Z
  verbs:
- S# A- v( k3 P; O' s3 t7 t6 L  - get
% P. @9 m: v1 A  N# u- apiGroups:
; H3 b) q- j! s# w  - discovery.k8s.io% a% w, _, I+ Q7 F
  resources:' d/ y. w: W, P( \$ j
  - endpointslices) _, K8 e2 i# T8 `& p/ e
  verbs:
' J7 b* e! y5 y2 A; p  - list8 I+ D2 p. j+ d: S
  - watch
, a* h: q% s  S. U/ K---+ [. D: N9 Z- n
apiVersion: rbac.authorization.k8s.io/v1
$ E2 W# e/ k: {kind: ClusterRoleBinding# h8 P- f. U6 e9 n
metadata:/ e8 B% H" n) F. S: `: A
  annotations:! H- @$ d3 M! {
    rbac.authorization.kubernetes.io/autoupdate: "true"3 B9 ^; N9 h5 z" B# o8 Z! `
  labels:
5 o3 f9 z# T: K- K    kubernetes.io/bootstrapping: rbac-defaults
: e+ P- Z5 [% |5 _$ g    addonmanager.kubernetes.io/mode: EnsureExists; C& G3 }' r" @, v9 {
  name: system:coredns- j' M% K* e) f
roleRef:
/ J; Q/ Z6 D4 l+ Q" y  apiGroup: rbac.authorization.k8s.io
2 T0 X: p% I5 s! ?3 F8 \  kind: ClusterRole+ n$ b; a! i- r2 Q% `6 e
  name: system:coredns
- o5 D  C: Q3 K" psubjects:
% e) G; N$ \6 R: `9 ^' W- kind: ServiceAccount0 b8 z9 j. D' L0 u3 h
  name: coredns
9 b- A/ K& a) G2 X% [0 L  namespace: kube-system
+ p& Y6 y* \; e5 w' V( l! Y( I---
% q3 }: K  N# EapiVersion: v1
: i2 l+ a+ l" x$ Q" Mkind: ConfigMap: D& W: o$ \) k
metadata:
, u$ f; ]1 y6 c% w: W  name: coredns- |9 t) o5 C7 A8 V
  namespace: kube-system
$ J# h% N) B3 i; k8 u' E  labels:# ~2 k/ w5 v, c1 r" p) E
      addonmanager.kubernetes.io/mode: EnsureExists
# g1 }* o/ S: V( R  Gdata:4 ?4 G; ?0 @% H- k3 u
  Corefile: |
8 }. D- b2 F  n! X8 O$ \' ^    .:53 {
5 X' @" \% X& b( O9 i        errors- k3 f6 |" `+ Q  z3 ~& Z. X) J+ C
        health {, K0 \2 c, @* o: d3 O. O  Q! g
            lameduck 5s0 o! ~$ S& n/ [; {7 h1 D
        }; v" a7 B7 o/ Z' m; b4 p
        ready$ H# H" q  Y% V  T
        kubernetes cluster.local in-addr.arpa ip6.arpa {) J  l9 r  h% Z4 t; g6 {
            pods insecure. c9 m- E( q' ~0 m/ r, u/ S
            fallthrough in-addr.arpa ip6.arpa
5 \$ s7 b8 M/ q" F3 s- H            ttl 307 ^2 n5 k( @; r/ j- q
        }3 O9 `+ v; I. U5 R5 e& w- Q
        prometheus :9153
$ G! ~. [. Q: ]+ R8 M+ v        forward . /etc/resolv.conf {
4 N# }" ^6 v7 G, G% U' F; H) \7 m            max_concurrent 1000' V  ^  _' ^" w+ a- `: @9 R
        }
0 ^+ A. A. e% M  x, |+ e1 @        cache 30
9 a. z3 F/ c: |1 v: ?6 Z, t        reload
8 Q5 O8 c% J, J) V" b' z+ K% X        loadbalance$ Q# E/ @* P1 A% h
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: docker.io/coredns/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.88.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
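The clusterIP of the kube-dns Service is pinned to 10.88.0.2, so it must fall inside the cluster's service CIDR and match the clusterDNS address every kubelet is configured with. A quick offline sanity check, sketched in Python; the 10.88.0.0/16 CIDR is an assumption here, substitute your own --service-cluster-ip-range:

```python
import ipaddress

# Assumed values -- replace with your cluster's actual settings.
SERVICE_CIDR = "10.88.0.0/16"   # --service-cluster-ip-range on kube-apiserver
CLUSTER_DNS = "10.88.0.2"       # clusterIP of the kube-dns Service above

cidr = ipaddress.ip_network(SERVICE_CIDR)
dns_ip = ipaddress.ip_address(CLUSTER_DNS)

# The DNS Service IP must be an assignable host address inside the service CIDR.
assert dns_ip in cidr, f"{dns_ip} is outside {cidr}"
assert dns_ip != cidr.network_address, "the network address is not assignable"
print(f"{dns_ip} is valid inside {cidr}")
```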
Import the coredns image on each node:

for i in 192.168.91.19 192.168.91.20; do \
scp /approot1/k8s/images/coredns-v1.8.6.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/coredns-v1.8.6.tar && rm -f /tmp/coredns-v1.8.6.tar"; \
done

Verify the image is present:

for i in 192.168.91.19 192.168.91.20; do \
ssh $i "ctr -n=k8s.io image list | grep coredns"; \
done

Deploy the coredns component to the cluster:

kubectl apply -f /approot1/k8s/tmp/service/coredns.yaml

Check that the coredns pod is running:

kubectl get pod -n kube-system | grep coredns

Expected output looks like the following. Because the replicas field in the coredns yaml is 1, there is only one pod here; setting it to 2 would produce two pods.

coredns-5fd74ff788-cddqf   1/1     Running       0          10s
Deploy the metrics-server component
Configure the metrics-server yaml file:

vim /approot1/k8s/tmp/service/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
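The Deployment above requests cpu: 100m and memory: 200Mi. The same Kubernetes quantity notation shows up later in the kubectl top output, so it is worth knowing how it decodes: m is millicores, and Ki/Mi/Gi are binary multiples. A small Python sketch; the helper names are made up for illustration and are not part of any client library:

```python
# Hypothetical helpers for the quantity strings used above
# (cpu: 100m, memory: 200Mi).

def parse_cpu(q: str) -> float:
    """Parse a Kubernetes CPU quantity into cores ('100m' -> 0.1)."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000.0
    return float(q)

def parse_memory(q: str) -> int:
    """Parse a memory quantity into bytes ('200Mi' -> 209715200)."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)

print(parse_cpu("100m"))      # 0.1 cores
print(parse_memory("200Mi"))  # 209715200 bytes
```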
0 o" v( z/ T' q导入 metrics-server 镜像) {/ H! v# v8 v& U
for i in 192.168.91.19 192.168.91.20;do \0 `6 m* Q7 \/ D3 Y* y8 j" J' b
scp /approot1/k8s/images/metrics-server-v0.5.2.tar $i:/tmp/
& u1 |! I0 d; Q: kssh $i "ctr -n=k8s.io image import /tmp/metrics-server-v0.5.2.tar && rm -f /tmp/metrics-server-v0.5.2.tar"; \9 m3 ]- G' \  B( ]. u7 Y! M
done
! y3 l! {. o5 o2 s' M3 E* B查看镜像  F+ f( r4 t1 t# y  O

1 O9 P, e! Q. t8 \1 e! pfor i in 192.168.91.19 192.168.91.20;do \1 e, i9 Z/ i+ s& \$ ~2 S/ P* a9 R
ssh $i "ctr -n=k8s.io image list | grep metrics-server"; \
# n* i2 P0 x( x0 v. g9 idone
% g; f1 v, ?' \) ~3 q. w. p在 k8s 中运行 metrics-server 组件- r. E, v" z  S
kubectl apply -f /approot1/k8s/tmp/service/metrics-server.yaml  w9 \7 Z* ?& H) J' h* x2 R* Q
检查 metrics-server pod 是否运行成功8 v  w1 ]9 L  s
kubectl get pod -n kube-system | grep metrics-server3 J/ S3 S. N/ g3 c
预期输出类似如下结果9 E7 T" q1 |! v+ Q/ [3 f
9 S  b; W2 B! M6 Y
metrics-server-6c95598969-qnc76   1/1     Running       0          71s
* j! |7 c& l$ T验证 metrics-server 功能" g5 X# {9 E, T; s+ @

5 b" H2 ^, l) C查看节点资源使用情况
5 n' n% S7 N) z" x) d7 P" g
0 L. t, f: R9 m3 Gkubectl top node5 u8 ^, y' q) v2 L
预期输出类似如下结果
+ p$ @! ~4 o& \2 n: c3 q6 Z8 \  X: L! |6 w
metrics-server 启动会偏慢,速度取决于机器配置,如果输出 is not yet 或者 is not ready 就等一会再执行一次 kubectl top node
; P! n4 e5 k! O% K
+ f. u+ R6 [6 k$ A  A3 JNAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# o! M# ?% j5 z6 @+ O# w  a7 ~192.168.91.19   285m         4%     2513Mi          32%8 \+ n6 j, S8 v9 u$ p: g' Q
192.168.91.20   71m          3%     792Mi           21%" n6 K% u; J4 x. Q; z' d4 x2 e$ g
查看指定 namespace 的 pod 资源使用情况
. E2 J2 r9 ^6 V
* M" \0 f  J" }7 U, Qkubectl top pod -n kube-system; g  ^# {, B9 \( A$ G& v' y
预期输出类似如下结果8 G; C2 |6 x. W2 N8 s( N

: W8 ^; i, Z% _* VNAME                              CPU(cores)   MEMORY(bytes)
: E& A( l9 t4 G7 B6 h- X( Ocoredns-5fd74ff788-cddqf          11m          18Mi
! H3 }3 F7 u8 b! |3 Akube-flannel-ds-86rrv             4m           18Mi
+ e5 T/ ^1 j1 V$ G. @kube-flannel-ds-bkgzx             6m           22Mi" \1 o+ f' A# `! r& i! O
kube-flannel-ds-v25xc             6m           22Mi7 _7 N0 Z  c* V( a; {  Y
metrics-server-6c95598969-qnc76   6m           22Mi
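kubectl top is a thin wrapper over the aggregated metrics.k8s.io API that the APIService object above registers. The same data can be fetched directly; a Python sketch, assuming a working kubeconfig on the machine running it (the summarize helper is hypothetical, while /apis/metrics.k8s.io/v1beta1/nodes is the real aggregated endpoint):

```python
import json
import subprocess

def summarize(metrics_json: str):
    """Flatten a NodeMetricsList JSON document into (name, cpu, memory) rows."""
    return [
        (item["metadata"]["name"], item["usage"]["cpu"], item["usage"]["memory"])
        for item in json.loads(metrics_json)["items"]
    ]

if __name__ == "__main__":
    try:
        # Same API that `kubectl top node` queries under the hood.
        raw = subprocess.run(
            ["kubectl", "get", "--raw", "/apis/metrics.k8s.io/v1beta1/nodes"],
            capture_output=True, text=True, check=True,
        ).stdout
        for name, cpu, mem in summarize(raw):
            print(name, cpu, mem)
    except Exception as exc:  # kubectl missing or no reachable cluster
        print(f"kubectl query skipped: {exc}")
```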
Deploy the dashboard component
Configure the dashboard yaml file:

vim /approot1/k8s/tmp/service/dashboard.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-read-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-read-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-read-clusterrole
subjects:
- kind: ServiceAccount
  name: dashboard-read-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-read-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - persistentvolumeclaims/status
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - services/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - controllerrevisions
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - statefulsets
  - statefulsets/scale
  - statefulsets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  - horizontalpodautoscalers/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - cronjobs/status
  - jobs
  - jobs/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - ingresses
  - ingresses/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  - poddisruptionbudgets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - ingresses/status
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-system
            - --token-ttl=1800
            - --sidecar-host=http://dashboard-metrics-scraper:8000
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Import the dashboard images:

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/dashboard-*.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-v2.4.0.tar && rm -f /tmp/dashboard-v2.4.0.tar"; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-metrics-scraper-v1.0.7.tar && rm -f /tmp/dashboard-metrics-scraper-v1.0.7.tar"; \
done

Check the images:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | egrep 'dashboard|metrics-scraper'"; \
done

Run the dashboard components in k8s:

kubectl apply -f /approot1/k8s/tmp/service/dashboard.yaml

Check whether the dashboard pods are running:

kubectl get pod -n kube-system | grep dashboard
Expected output looks like:

dashboard-metrics-scraper-799d786dbf-v28pm   1/1     Running       0          2m55s
kubernetes-dashboard-9f8c8b989-rhb7z         1/1     Running       0          2m55s

Check the dashboard access port.
The service does not pin the dashboard's access port, so you have to look it up yourself; alternatively, edit the yaml file to specify the port.

kubectl get svc -n kube-system | grep dashboard

Expected output looks like (here node port 30210 is mapped to the dashboard's port 443):

kubernetes-dashboard        NodePort    10.88.127.68    <none>        443:30210/TCP            5m30s

Access the dashboard page via that port, for example: https://192.168.91.19:30210
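If a fixed port is preferred over a randomly assigned NodePort, the dashboard Service (not part of the manifest excerpt above) can pin it. A hypothetical spec fragment, with port numbers matching the Deployment and the output above (the nodePort must fall inside the default 30000-32767 NodePort range):

```yaml
# Hypothetical Service fragment; 30210 is the example port used above.
spec:
  type: NodePort
  ports:
    - port: 443          # service port shown in the "kubectl get svc" output
      targetPort: 8443   # dashboard container port from the Deployment
      nodePort: 30210    # pinned NodePort instead of a random one
  selector:
    k8s-app: kubernetes-dashboard
```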

Check the dashboard login token.
Get the token secret name:

kubectl get secrets -n kube-system | grep admin

Expected output looks like:

admin-user-token-zvrst                           kubernetes.io/service-account-token   3      9m2s

Get the token value:

kubectl get secrets -n kube-system admin-user-token-zvrst -o jsonpath={.data.token} | base64 -d

Expected output looks like:

eyJhbGciOiJSUzI1NiIsImtpZCI6InA4M1lhZVgwNkJtekhUd3Vqdm9vTE1ma1JYQ1ZuZ3c3ZE1WZmJhUXR4bUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2cnN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYTE3NTg1ZC1hM2JiLTQ0YWYtOWNhZS0yNjQ5YzA0YThmZWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.K2o9p5St9tvIbXk7mCQCwsZQV11zICwN-JXhRv1hAnc9KFcAcDOiO4NxIeicvC2H9tHQBIJsREowVwY3yGWHj_MQa57EdBNWMrN1hJ5u-XzpzJ6JbQxns8ZBrCpIR8Fxt468rpTyMyqsO2UBo-oXQ0_ZXKss6X6jjxtGLCQFkz1ZfFTQW3n49L4ENzW40sSj4dnaX-PsmosVOpsKRHa8TPndusAT-58aujcqt31Z77C4M13X_vAdjyDLK9r5ZXwV2ryOdONwJye_VtXXrExBt9FWYtLGCQjKn41pwXqEfidT8cY6xbA7XgUVTr9miAmZ-jf1UeEw-nm8FOw9Bb5v6A
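The jsonpath plus base64 -d pipeline above can be sketched without a live cluster: a Secret stores the token base64-encoded, and base64 -d recovers the plain JWT. A minimal stand-in, using a dummy string rather than a real token:

```shell
# Encode a dummy token the way a Secret stores it, then decode it back.
TOKEN_B64=$(printf 'dummy.jwt.token' | base64)
printf '%s' "$TOKEN_B64" | base64 -d
# prints: dummy.jwt.token
```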

This completes the containerd-based binary deployment of k8s v1.23.3.
OP | Posted on 2025-1-1 20:38:40
Key production configuration

Modify the Docker configuration (not needed when containerd is used as the Runtime):

vim /etc/docker/daemon.json

{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,   # number of concurrent download threads
  "max-concurrent-uploads": 5,      # number of concurrent upload threads
  "log-opts": {
    "max-size": "300m",   # rotate log files once they reach this size
    "max-file": "2"       # number of rotated log files to keep; adjust as needed
  },
  "live-restore": true    # keep containers running across docker daemon restarts
}

Note that JSON has no comment syntax, so strip the # annotations before saving the file.
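For copy-pasting, here is the same configuration as a directly usable /etc/docker/daemon.json, with the explanatory annotations removed so the file is valid JSON:

```json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "live-restore": true
}
```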
Modify the certificate validity period

Certificates requested through bootstrapping are signed by the controller-manager with a default validity of one year; in an internal environment this can be set much longer.

vim /usr/lib/systemd/system/kube-controller-manager.service

# Set the certificate validity. The longest effective validity is likely
# about five years (setting more may still yield five years), and kubelet
# re-requests a certificate shortly before expiry anyway.
--cluster-signing-duration=876000h0m0s \

# Automatically approve the certificate when it is requested; this already
# defaults to true in newer versions, so it no longer needs to be configured:
# --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
Modify the kubelet configuration file

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
If your company's security team runs vulnerability scans: the default k8s TLS cipher suites are relatively weak, so change them by adding:

--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

Set a longer image-pull deadline (pulling images from the public internet can be slow, and the default deadline is short) by adding:

--image-pull-progress-deadline=30m

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
Newer k8s versions recommend keeping kubelet settings in /etc/kubernetes/kubelet-conf.yml; over time the command-line flags, including the ones above, are moving into this config file.

[root@k8s-master01 ~]# vim /etc/kubernetes/kubelet-conf.yml

Append at the end:

rotateServerCertificates: true
allowedUnsafeSysctls:       # by default pods may not modify kernel parameters (connection counts, open-file limits, etc.)
  - "net.core*"             # allowing this may have security implications; configure as needed
  - "net.ipv4.*"
kubeReserved:               # reserve resources for k8s components
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:             # reserve resources for the system
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
Modify host ROLES and labels

The ROLES column currently shows <none>; set the ROLES of k8s-master01 to master.

k8s itself has no notion of which role a node plays (a master node merely runs a few more components than a worker node), so the ROLES value has to be assigned manually:

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   19h   v1.23.8
k8s-master02   Ready    <none>   19h   v1.23.8
k8s-master03   Ready    <none>   19h   v1.23.8
k8s-node01     Ready    <none>   19h   v1.23.8
k8s-node02     Ready    <none>   19h   v1.23.8
k8s-node03     Ready    <none>   19h   v1.23.8
[root@k8s-master01 ~]# kubectl get node --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
k8s-master01   Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master02   Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master03   Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node01     Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node02     Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node03     Ready    <none>   19h   v1.23.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node03,kubernetes.io/os=linux,node.kubernetes.io/node=
[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master=''
node/k8s-master01 labeled
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   19h   v1.23.8
k8s-master02   Ready    <none>   19h   v1.23.8
k8s-master03   Ready    <none>   19h   v1.23.8
k8s-node01     Ready    <none>   19h   v1.23.8
k8s-node02     Ready    <none>   19h   v1.23.8
k8s-node03     Ready    <none>   19h   v1.23.8
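The role name in the ROLES column comes from a label with the node-role.kubernetes.io/ prefix, which is why the kubectl label command above makes the role appear. A small stand-alone sketch that filters that label out of a comma-separated LABELS value (string abbreviated from the output above):

```shell
# The ROLES column is derived from a "node-role.kubernetes.io/<role>" label;
# split the LABELS string on commas and pick it out.
LABELS="kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/master="
printf '%s\n' "$LABELS" | tr ',' '\n' | grep 'node-role'
# prints: node-role.kubernetes.io/master=
```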
Production recommendations

1. In production, always install from binary components.
2. Put etcd on a disk separate from the system disk; an SSD is mandatory.
3. Keep the Docker data disk separate from the system disk as well, preferably also on SSD.

OP | Posted on 2025-1-1 21:14:06
cd ~/kubernetes/manual-installation-v1.23.x/calico/

Change the calico network range: replace the placeholder with your own Pod CIDR.

sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
grep "IPV4POOL_CIDR" calico.yaml -A 1
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/12"
kubectl apply -f calico.yaml

Also edit calico.yaml as follows:

# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Add the following below
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens192"    # ens192 is the name of the local NIC

Apply the update with kubectl apply -f calico.yaml, then check again.
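A side note on the sed command above: using # as the delimiter means the / characters inside the CIDR value need no escaping. A minimal stand-alone sketch, substituting into a stand-in line rather than the real calico.yaml:

```shell
# Replace the POD_CIDR placeholder exactly as the calico edit does;
# '#' as the sed delimiter avoids escaping the '/' in "172.16.0.0/12".
printf 'value: "POD_CIDR"\n' | sed 's#POD_CIDR#172.16.0.0/12#g'
# prints: value: "172.16.0.0/12"
```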
Install CoreDNS (the officially recommended version):

[root@k8s-master01 calico]# cd ~/kubernetes/manual-installation-v1.23.x/CoreDNS/

If you changed the k8s service CIDR, the CoreDNS service IP must be changed to the tenth IP of that service range:

[root@k8s-master01 CoreDNS]# COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
[root@k8s-master01 CoreDNS]# sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" coredns.yaml

Install coredns:

[root@k8s-master01 CoreDNS]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
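The trailing 0 in the COREDNS_SERVICE_IP assignment above is what turns the kubernetes service IP (the first address of the service CIDR, ending in .1) into the tenth address. A stand-alone sketch with an assumed service IP, since the real one would come from kubectl:

```shell
# The cluster "kubernetes" service takes the first IP of the service CIDR,
# e.g. 10.96.0.1 (assumed here); appending "0" yields the tenth IP, which
# is the address conventionally given to CoreDNS.
SVC_IP="10.96.0.1"
COREDNS_SERVICE_IP="${SVC_IP}0"
echo "$COREDNS_SERVICE_IP"
# prints: 10.96.0.10
```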