Deploying a Kubernetes Cluster: kubeadm Implementation Steps

OP | Posted 2024-9-6 17:37:32

The yum repository for the Kubernetes packages (gpgcheck is disabled because this points at an internal mirror):

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=http://172.24.21.35/centos/kubernetes/
gpgcheck=0
EOF
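Since gpgcheck is disabled and the baseurl is an internal mirror, it is worth sanity-checking the repo file before yum relies on it. A minimal sketch (the mirror URL is the one from this post; substitute your own), rendering to a temp path first:

```shell
# Write the repo file to a temp location and check the fields yum needs;
# copy it into /etc/yum.repos.d/ only once it looks right.
tmprepo=$(mktemp)
cat > "$tmprepo" <<'EOF'
[kubernetes]
name=kubernetes
baseurl=http://172.24.21.35/centos/kubernetes/
gpgcheck=0
EOF
grep -q '^\[kubernetes\]' "$tmprepo" \
  && grep -q '^baseurl=http' "$tmprepo" \
  && echo "repo file looks valid"
```

After copying it into place, `yum repolist` should list the kubernetes repo.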
OP | Posted 2024-9-9 10:37:01

kubeadm init --apiserver-advertise-address=172.24.21.55 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0 \
  --service-cidr=10.177.100.0/12 \
  --pod-network-cidr=10.233.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING HTTPProxy]: Connection to "https://172.24.21.55" uses proxy "http://172.24.118.199:3128". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.177.100.0/12" uses proxy "http://172.24.118.199:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING HTTPProxyCIDR]: connection to "10.233.0.0/16" uses proxy "http://172.24.118.199:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: read udp 172.24.21.55:51870->114.114.114.114:53: i/o timeout
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
OP | Posted 2024-9-9 10:42:18

--apiserver-advertise-address                               # address the API server advertises and listens on
--image-repository registry.aliyuncs.com/google_containers  # registry to pull the control-plane images from
--kubernetes-version                                        # Kubernetes version to deploy
--service-cidr=10.177.100.0/12                              # Service network CIDR
--pod-network-cidr=10.233.0.0/16                            # Pod network CIDR
--cri-socket                                                # socket of the CRI shim that sits between kubelet and Docker (cri-dockerd)
OP | Posted 2024-9-10 15:57:57

--kubernetes-version=v1.17.2
    The version number; change it to match your environment. It should normally match the version of kubeadm itself, which you can get with:
        kubeadm version
    The GitVersion field in the output (e.g. "v1.20.4") is the version number.

--pod-network-cidr=10.244.0.0/16
    The network used by Pods. It can be customized for your environment, and leaving it unchanged is also fine; some setups appear to expect this particular default.

--apiserver-advertise-address=192.168.1.200
    A valid IP of the master node, or a DNS name that resolves to it. It must be the address of one of the master's actual network interfaces, e.g. ens33 or eth0.

--ignore-preflight-errors=Swap
    Ignore the preflight error reported when swap is enabled.

--control-plane-endpoint
    The load balancer address, as a DNS name or an IP. Adding this option enables high availability; if a DNS name is used, make sure it actually resolves.

--upload-certs
    Used together with high availability; uploads the control-plane certificates automatically.
OP | Posted 2024-9-10 17:12:32

vim deploy-kubeadm.yml
---
- name: Deploy kubeadm, kubelet and kubectl
  hosts: k8s
  gather_facts: no
  vars:
    pkg_dir: /kubeadm-pkg
    pkg_names: ["kubelet", "kubeadm", "kubectl"]

    # The variable download_host must be set manually.
    # Its value must be one of this playbook's target hosts,
    # written exactly as it is named in the inventory file.
    download_host: "master"
    local_pkg_dir: "{{ playbook_dir }}/{{ download_host }}"

  tasks:
    - name: Test whether -e sets and overrides the variables
      debug:
        msg: "{{ local_pkg_dir }} {{ download_host }}"
      tags:
        - deploy
        - test

    - name: "Install the repo file only on {{ download_host }}"
      when: inventory_hostname == download_host
      copy:
        src: file/kubernetes.repo
        dest: /etc/yum.repos.d/kubernetes.repo
      tags:
        - deploy

    - name: Create the directory for the rpm packages
      when: inventory_hostname == download_host
      file:
        path: "{{ pkg_dir }}"
        state: directory
      tags:
        - deploy

    - name: Download the packages
      when: inventory_hostname == download_host
      yum:
        name: "{{ pkg_names }}"
        download_only: yes
        download_dir: "{{ pkg_dir }}"
      tags:
        - deploy

    - name: Get the list of files in the download directory "{{ pkg_dir }}"
      when: inventory_hostname == download_host
      shell: ls -1 "{{ pkg_dir }}"
      register: files
      tags:
        - deploy

    - name: Fetch the downloaded packages back to the ansible control node
      when: inventory_hostname == download_host
      fetch:
        src: "{{ pkg_dir }}/{{ item }}"
        dest: ./
      loop: "{{ files.stdout_lines }}"
      tags:
        - deploy

    - name: Push the rpm packages to the remaining nodes
      when: inventory_hostname != download_host
      copy:
        src: "{{ local_pkg_dir }}{{ pkg_dir }}"
        dest: "/"
      tags:
        - deploy

    - name: Install the packages from the local rpm files
      shell:
        cmd: yum -y localinstall *
        chdir: "{{ pkg_dir }}"
        warn: no
      async: 600
      poll: 0
      register: yum_info
      tags:
        - deploy

    - name: Print the install job id
      debug: var=yum_info.ansible_job_id
      tags:
        - deploy

" R, X9 n. e0 \# 查看kubernetes依赖的镜像: I1 \, l3 h: i. _- ]' o
kubeadm config images list% ~( B' l; [3 t5 E3 `

5 i$ h2 O6 g  D' b# 不支持高可用的集群初始化
: A* j8 Y$ C( ?+ l' o) Jkubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.9.29.112 --ignore-preflight-errors=Swap: o6 ~5 T: L" k# O* L1 p

& ~9 \/ v0 D" r5 O+ j2 i9 {' j7 x# F# 支持高可用的集群初始化9 B9 B7 \; u5 q$ H7 Y# z( e
kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=masterIP --control-plane-endpoint=kube-lab  --ignore-preflight-errors=Swap --upload-certs7 e% x% b# s5 n8 W5 b; x

2 R; o% n# W4 h6 @' d/ A
6 i, L* }5 ]# ^6 M# 初始化成功后,会有以下信息,复制后直接在node节点使用即可加入集群
# e- Y" ~- {4 J/ Z5 Zkubeadm join 10.9.29.112:6443 --token en6s67.08rnsg20dc5t8z4n \  k% j0 T$ [  Z6 o# B5 @
    --discovery-token-ca-cert-hash sha256:7d034842b9ee7a6b17d9ce7088839f4570da1c61b29922f28e72b855c10003cc
  O* W% l" h+ H$ h! U* V
; ~1 T% j3 u; {  ~. C/ s% }# 如果是高可用,还会有一条,这个使用后会添加一个master进入集群
2 F0 A. T$ w. d! `% S7 Wkubeadm join kub-lab:6443 --token s2ccws.tzb7v4olicidp032 \
: _: V9 W2 m! _4 C$ h" O    --discovery-token-ca-cert-hash sha256:29a2b437f79c5e4958c3d73e6c64fe0a4df24f0f3bcabd5ced28392d7a882e10 \
4 a( Z" N) t. t2 \4 Y    --control-plane --certificate-key c0a9a1c4a067b20dca95447f809d95c973220244c740a47f71d5302e0a759ea7- l# A- W" @- b  J9 d. X0 \/ @6 @

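One practical note on the join commands above: the bootstrap token expires (by default after 24 hours), but the --discovery-token-ca-cert-hash can always be recomputed from the cluster CA at /etc/kubernetes/pki/ca.crt, and a fresh token plus full join line comes from `kubeadm token create --print-join-command`. Below is a sketch of the standard openssl pipeline for the hash; it is exercised here against a throwaway self-signed certificate so the commands run anywhere:

```shell
# Stand-in CA certificate (on a real master, use /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null
# SHA-256 of the DER-encoded public key, the form kubeadm expects
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The printed value is what goes after --discovery-token-ca-cert-hash.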
Posted 2024-9-14 11:01:25

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://huecher.io",
    "https://dockerhub.timeweb.cloud",
    "https://noohub.ru",
    "https://docker.aws19527.cn"
  ]
}
EOF
Posted 2024-9-14 17:07:25

kubeadm init --apiserver-advertise-address=192.168.8.190 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0 \
  --service-cidr=10.177.100.0/12 \
  --pod-network-cidr=10.233.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0914 17:05:50.073955    7690 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.176.0.1 192.168.8.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
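The sandbox-image warning in this log is a frequent cause of exactly this hang when Docker is used through cri-dockerd: the kubelet starts, but cri-dockerd tries to pull registry.k8s.io/pause:3.6, cannot reach it, and the control-plane pods never come up. One hedged fix is to point cri-dockerd at the mirror's pause image via its --pod-infra-container-image flag. The sketch below edits a temp copy of the unit; the unit path and ExecStart line are assumptions, so check your own cri-docker.service, then run `systemctl daemon-reload && systemctl restart cri-docker`, `kubeadm reset`, and retry the init:

```shell
# Stand-in for /usr/lib/systemd/system/cri-docker.service
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
EOF
# Append the sandbox image that the kubeadm warning recommended
sed -i 's|cri-dockerd |cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 |' "$unit"
grep -q 'pause:3.9' "$unit" && echo "unit updated"
```

The successful run in the next post suggests a fix along these lines (or a working image source) resolved the timeout.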
Posted 2024-9-15 10:54:27

[root@kubernetes-master net]# kubeadm init --apiserver-advertise-address=192.168.8.190 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0 \
  --service-cidr=10.177.100.0/12 \
  --pod-network-cidr=10.233.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.176.0.1 192.168.8.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.005335 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ajiqtj.xwpscuol7csse0d9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.190:6443 --token ajiqtj.xwpscuol7csse0d9 \
        --discovery-token-ca-cert-hash sha256:87ab51d4f77f290e00c0060990eb5efa886752e39b2e74721d96d2c41bb92699
[root@kubernetes-master net]#
OP | Posted 2024-9-15 15:03:28

# Install ipset and ipvsadm
yum install ipset ipvsadm -y
# Write the modules that need to be loaded into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
# Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
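One caveat about the module list above: nf_conntrack_ipv4 exists only on older kernels (such as CentOS 7's 3.10); from kernel 4.19 onward it was merged into nf_conntrack, and that modprobe line fails. A small helper, as a sketch, picks the right module name for a given kernel version:

```shell
# Choose the conntrack module name for a kernel version string (e.g. "3.10").
conntrack_module() {
  # version-sort the argument against 4.19; if 4.19 sorts first (or equal),
  # the kernel is new enough for the merged module
  if [ "$(printf '%s\n' "$1" 4.19 | sort -V | head -n1)" = "4.19" ]; then
    echo nf_conntrack        # kernel >= 4.19
  else
    echo nf_conntrack_ipv4   # older kernels, e.g. CentOS 7's 3.10
  fi
}
conntrack_module "$(uname -r | cut -d- -f1)"
```

Substitute the printed name into the modprobe script and the lsmod check on newer kernels.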