易陆发现 Internet Technology Forum

Deploying a Kubernetes cluster with kubeadm (implementation steps)

Posted 2024-9-2 15:00:03
OP | Posted 2024-9-6 17:37:32
Kubernetes yum repository:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=http://172.24.21.35/centos/kubernetes/
gpgcheck=0
EOF
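After writing the repo file, it is worth confirming that yum can actually see the packages before going further. A quick check, assuming the internal mirror at 172.24.21.35 above is reachable from the host:

```shell
# Rebuild the yum cache against the new repo and list the k8s packages
yum clean all && yum makecache
yum list kubelet kubeadm kubectl --showduplicates | tail
```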
OP | Posted 2024-9-9 10:37:01
kubeadm init --apiserver-advertise-address=172.24.21.55 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.0 --service-cidr=10.177.100.0/12 --pod-network-cidr=10.233.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING HTTPProxy]: Connection to "https://172.24.21.55" uses proxy "http://172.24.118.199:3128". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.177.100.0/12" uses proxy "http://172.24.118.199:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING HTTPProxyCIDR]: connection to "10.233.0.0/16" uses proxy "http://172.24.118.199:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: read udp 172.24.21.55:51870->114.114.114.114:53: i/o timeout
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
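Each of the preflight warnings above can be cleared before re-running init. A sketch of the usual remedies, reusing the addresses from this run (adjust them to your own environment):

```shell
# Make the hostname resolvable locally instead of via the 114.114.114.114 resolver
echo "172.24.21.55 k8s-master" >> /etc/hosts

# Exclude the apiserver address and the Service/Pod ranges from the HTTP proxy
export NO_PROXY="127.0.0.1,localhost,172.24.21.55,10.177.100.0/12,10.233.0.0/16"
export no_proxy="$NO_PROXY"

# Open the ports firewalld warned about (or stop firewalld entirely in a lab)
firewall-cmd --permanent --add-port=6443/tcp --add-port=10250/tcp
firewall-cmd --reload
```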
OP | Posted 2024-9-9 10:42:18
--apiserver-advertise-address    # IP address the API server advertises and listens on
--image-repository registry.aliyuncs.com/google_containers    # image registry to pull from
--kubernetes-version    # Kubernetes version to install
--service-cidr=10.177.100.0/12    # Service network CIDR
--pod-network-cidr=10.233.0.0/16    # Pod network CIDR
--cri-socket    # socket of the CRI shim that sits between kubelet and Docker (cri-dockerd)
OP | Posted 2024-9-10 15:57:57
--kubernetes-version=v1.17.2

Version number; change it to suit your environment. It should normally match the version of kubeadm itself, which can be obtained with:

kubeadm version

The GitVersion field in the output, e.g. GitVersion:"v1.20.4", is the version number.
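The version string can also be pulled out mechanically rather than by eye. A small sketch against a hypothetical sample of the long-form output (newer kubeadm additionally supports `kubeadm version -o short`, which prints just the version):

```shell
# A hypothetical sample of kubeadm's long-form output:
out='kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4"}'
# Pull the GitVersion value out of it
ver=$(echo "$out" | grep -o 'GitVersion:"[^"]*"' | cut -d'"' -f2)
echo "$ver"   # v1.20.4
```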
--pod-network-cidr=10.244.0.0/16

Network used by the Pods. It can be customized to suit your environment, and leaving the default is also fine; some notes treat it as fixed, since many CNI manifests assume this value.

--apiserver-advertise-address=192.168.1.200

A valid IP of the master node, or a DNS name that resolves to it. It must be the address of a real NIC on the master, e.g. ens33 or eth0.

--ignore-preflight-errors=Swap

Ignore the preflight error raised when swap is enabled.

--control-plane-endpoint

Address of the load balancer, either a DNS name or an IP. Adding this option enables high availability; if you use a DNS name, make sure it can actually be resolved.

--upload-certs

Used together with the HA setup; automatically uploads the control-plane certificates.
OP | Posted 2024-9-10 17:12:32
vim deploy-kubeadm.yml
---
- name: Deploy kubeadm, kubelet and kubectl
  hosts: k8s
  gather_facts: no
  vars:
    pkg_dir: /kubeadm-pkg
    pkg_names: ["kubelet", "kubeadm", "kubectl"]

    # The variable download_host must be set manually.
    # Its value must be one of this playbook's target hosts,
    # using the name written in the inventory file.
    download_host: "master"
    local_pkg_dir: "{{ playbook_dir }}/{{ download_host }}"

  tasks:
    - name: Test whether -e was used to set and override the variables
      debug:
        msg: "{{ local_pkg_dir }} {{ download_host }}"
      tags:
        - deploy
        - test

    - name: "Install the repo file only on {{ download_host }}"
      when: inventory_hostname == download_host
      copy:
        src: file/kubernetes.repo
        dest: /etc/yum.repos.d/kubernetes.repo
      tags:
        - deploy

    - name: Create the directory that holds the rpm packages
      when: inventory_hostname == download_host
      file:
        path: "{{ pkg_dir }}"
        state: directory
      tags:
        - deploy

    - name: Download the packages
      when: inventory_hostname == download_host
      yum:
        name: "{{ pkg_names }}"
        download_only: yes
        download_dir: "{{ pkg_dir }}"
      tags:
        - deploy

    - name: Get the list of files in the download directory "{{ pkg_dir }}"
      when: inventory_hostname == download_host
      shell: ls -1 "{{ pkg_dir }}"
      register: files
      tags:
        - deploy

    - name: Fetch the downloaded packages from the remote host to the ansible controller
      when: inventory_hostname == download_host
      fetch:
        src: "{{ pkg_dir }}/{{ item }}"
        dest: ./
      loop: "{{ files.stdout_lines }}"
      tags:
        - deploy

    - name: Push the rpm packages to the remaining nodes
      when: inventory_hostname != download_host
      copy:
        src: "{{ local_pkg_dir }}{{ pkg_dir }}"
        dest: "/"
      tags:
        - deploy

    - name: Install the packages from the local rpm files
      shell:
        cmd: yum -y localinstall *
        chdir: "{{ pkg_dir }}"
        warn: no
      async: 600
      poll: 0
      register: yum_info
      tags:
        - deploy

    - name: Print the install result
      debug: var=yum_info.ansible_job_id
      tags:
        - deploy

# List the images that kubernetes depends on
kubeadm config images list
# Cluster initialization without high availability
kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.9.29.112 --ignore-preflight-errors=Swap

# Cluster initialization with high availability
kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=masterIP --control-plane-endpoint=kube-lab --ignore-preflight-errors=Swap --upload-certs
# After a successful init, the following is printed; copy it and run it on the
# node machines to have them join the cluster
kubeadm join 10.9.29.112:6443 --token en6s67.08rnsg20dc5t8z4n \
    --discovery-token-ca-cert-hash sha256:7d034842b9ee7a6b17d9ce7088839f4570da1c61b29922f28e72b855c10003cc

# With high availability there is one more join command; running it adds
# another master to the cluster
kubeadm join kube-lab:6443 --token s2ccws.tzb7v4olicidp032 \
    --discovery-token-ca-cert-hash sha256:29a2b437f79c5e4958c3d73e6c64fe0a4df24f0f3bcabd5ced28392d7a882e10 \
    --control-plane --certificate-key c0a9a1c4a067b20dca95447f809d95c973220244c740a47f71d5302e0a759ea7
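After the join commands complete, membership can be verified from the master. Note that join tokens expire (24 hours by default), so older notes like these need a fresh token:

```shell
# From the master: joined nodes appear here, going Ready once the CNI is up
kubectl get nodes -o wide

# If the printed token has expired, mint a fresh join command
kubeadm token create --print-join-command
```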
Posted 2024-9-14 11:01:25
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors":[
"https://docker.m.daocloud.io",
"https://huecher.io",
"https://dockerhub.timeweb.cloud",
"https://noohub.ru",
"https://docker.aws19527.cn"
]
}
EOF
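The mirror list only takes effect after Docker reloads its configuration, and a stray comma in daemon.json will stop Docker from starting at all. A cautious variant: write to a scratch copy, validate it as JSON, then install it (the mirror URLs above are third-party and may stop working at any time):

```shell
# Write the mirror list to a scratch copy first
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://huecher.io",
    "https://dockerhub.timeweb.cloud",
    "https://noohub.ru",
    "https://docker.aws19527.cn"
  ]
}
EOF
# Fail fast on malformed JSON before touching the service (python3 assumed present)
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"

# Then install it and restart Docker so the mirrors take effect:
# cp /tmp/daemon.json /etc/docker/daemon.json
# systemctl daemon-reload && systemctl restart docker
# docker info | grep -A5 'Registry Mirrors'
```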
Posted 2024-9-14 17:07:25
kubeadm init --apiserver-advertise-address=192.168.8.190 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.0 --service-cidr=10.177.100.0/12 --pod-network-cidr=10.233.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0914 17:05:50.073955    7690 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.176.0.1 192.168.8.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
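When init stalls at the kubelet check like this, the kubelet logs usually name the cause, and the W0914 line above points at a likely one in this run: cri-dockerd defaults to registry.k8s.io/pause:3.6 as its sandbox image, which may be unreachable behind this network. A hedged troubleshooting sketch:

```shell
# See why the kubelet did not come up within the 40s check
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 50

# Work around an unreachable sandbox image by pre-pulling the mirror's
# copy and retagging it to the name cri-dockerd expects
docker pull registry.aliyuncs.com/google_containers/pause:3.6
docker tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
```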
Posted 2024-9-15 10:54:27
[root@kubernetes-master net]# kubeadm init --apiserver-advertise-address=192.168.8.190 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.0 --service-cidr=10.177.100.0/12 --pod-network-cidr=10.233.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.176.0.1 192.168.8.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.8.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.005335 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ajiqtj.xwpscuol7csse0d9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.190:6443 --token ajiqtj.xwpscuol7csse0d9 \
        --discovery-token-ca-cert-hash sha256:87ab51d4f77f290e00c0060990eb5efa886752e39b2e74721d96d2c41bb92699
[root@kubernetes-master net]#
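This cluster was initialized with --pod-network-cidr=10.233.0.0/16, so whichever pod network is deployed next must use the same range. A sketch with flannel, whose manifest defaults to 10.244.0.0/16 and therefore needs editing first (the manifest URL is the upstream flannel one and may move):

```shell
curl -LO https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
sed -i 's|10.244.0.0/16|10.233.0.0/16|' kube-flannel.yml
kubectl apply -f kube-flannel.yml

# The node should report Ready once the flannel pods are running
kubectl get nodes
```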
OP | Posted 2024-9-15 15:03:28
# Install ipset and ipvsadm (the package name is ipvsadm, not ipvsadmin)
yum install ipset ipvsadm -y

# Write the modules that need to be loaded into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
# On kernels >= 4.19 this module is named nf_conntrack instead
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules

# Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules

# Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
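Loading the modules alone does not switch kube-proxy to IPVS; that is set in kube-proxy's own configuration. A sketch of the usual follow-up on the master once the cluster is up:

```shell
# Set mode: "ipvs" in the kube-proxy ConfigMap, then restart its pods to apply
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# Confirm IPVS rules are now being programmed
ipvsadm -Ln
```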