Deploying a Ceph Cluster Manually and Adding OSDs
Posted 2022-07-19 11:20
1. Preparation
The previous post deployed only a single-node cluster. In this post we manually deploy a multi-node cluster named mycluster. We have three machines: node1, node2 and node3; node1 can ssh/scp to the other two machines without a password. All of our work is done on node1.
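If passwordless SSH from node1 is not set up yet, a minimal sketch (assuming root on all three machines and default key paths):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # on node1, once
ssh-copy-id root@node2
ssh-copy-id root@node3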
Preparation consists of installing the ceph RPM packages on every machine (see section 1 of the previous post) and modifying the following files on each machine:
/usr/lib/systemd/system/ceph-mon@.service
/usr/lib/systemd/system/ceph-osd@.service
/usr/lib/systemd/system/ceph-mds@.service
/usr/lib/systemd/system/ceph-mgr@.service
/usr/lib/systemd/system/ceph-radosgw@.service
Change:

Environment=CLUSTER=ceph                                         <--- change to CLUSTER=mycluster
ExecStart=/usr/bin/... --id %i --setuser ceph --setgroup ceph    <--- delete --setuser ceph --setgroup ceph
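A sketch that applies both edits to all five unit files in one pass (assumes the stock unit files; -i.bak keeps backups):

for f in /usr/lib/systemd/system/ceph-{mon,osd,mds,mgr,radosgw}@.service; do
    sed -i.bak -e 's/CLUSTER=ceph/CLUSTER=mycluster/' \
               -e 's/ --setuser ceph --setgroup ceph//' "$f"
done
systemctl daemon-reload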
2. Create a working directory
Create a working directory on node1; all subsequent work is done in this directory on node1:
mkdir /tmp/mk-ceph-cluster
cd /tmp/mk-ceph-cluster

3. Create the configuration file
vim mycluster.conf

[global]
cluster = mycluster
fsid = 116d4de8-fd14-491f-811f-c1bdd8fac141
public network = 192.168.100.0/24
cluster network = 192.168.73.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
admin socket = /var/run/ceph/$cluster-$name.asock
pid file = /var/run/ceph/$cluster-$name.pid
log file = /var/log/ceph/$cluster-$name.log
log to syslog = false
max open files = 131072
ms bind ipv6 = false

[mon]
mon initial members = node1,node2,node3
mon host = 192.168.100.131:6789,192.168.100.132:6789,192.168.100.133:6789
;Yuanguo: the default value of {mon data} is /var/lib/ceph/mon/$cluster-$id,
;         we overwrite it.
mon data = /var/lib/ceph/mon/$cluster-$name
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
debug ms = 20
debug mon = 20
debug paxos = 20
debug auth = 20

[mon.node1]
host = node1
mon addr = 192.168.100.131:6789

[mon.node2]
host = node2
mon addr = 192.168.100.132:6789

[mon.node3]
host = node3
mon addr = 192.168.100.133:6789

[mgr]
;Yuanguo: the default value of {mgr data} is /var/lib/ceph/mgr/$cluster-$id,
;         we overwrite it.
mgr data = /var/lib/ceph/mgr/$cluster-$name

[osd]
;Yuanguo: we wish to overwrite {osd data}, but it seems that 'ceph-disk' forces
;         to use the default value, so keep the default now; maybe in later versions
;         of ceph the limitation will be eliminated.
osd data = /var/lib/ceph/osd/$cluster-$id
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2
debug ms = 100
debug osd = 100
Note that in this configuration file we override some defaults, e.g. {mon data} and {mgr data}, but not {osd data}, because ceph-disk appears to force the default value. Also, the pid and admin-socket files are placed in /var/run/ceph/ and named $cluster-$name; log files are placed in /var/log/ceph/, also named $cluster-$name. All of these can be overridden.
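To see how such a setting expands for a particular daemon, ceph-conf can resolve values from the cluster's config file (a quick check, assuming mycluster.conf has already been copied to /etc/ceph as in section 6):

ceph-conf --cluster mycluster --name mon.node1 --lookup 'mon data'
# should print: /var/lib/ceph/mon/mycluster-mon.node1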
4. Generate keyrings
In the single-node deployment post we noted that there are two ways to manage the cluster's users and their capabilities. Here we use the first: generate the keyring files up front, then bring them in when the cluster is created so that they take effect.
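(For reference, the second way is to create users at runtime against the live cluster with ceph auth get-or-create; a sketch, with a hypothetical user client.foo and pool somepool:)

ceph --cluster mycluster auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=somepool' -o /etc/ceph/mycluster.client.foo.keyring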
ceph-authtool --create-keyring mycluster.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring mycluster.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring mycluster.client.bootstrap-osd.keyring --gen-key -n client.bootstrap-osd --cap mon 'allow profile bootstrap-osd'
ceph-authtool --create-keyring mycluster.mgr.node1.keyring --gen-key -n mgr.node1 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool --create-keyring mycluster.mgr.node2.keyring --gen-key -n mgr.node2 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool --create-keyring mycluster.mgr.node3.keyring --gen-key -n mgr.node3 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool mycluster.keyring --import-keyring mycluster.client.admin.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.client.bootstrap-osd.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node1.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node2.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node3.keyring
cat mycluster.keyring
[mon.]
        key = AQA525NZsY73ERAAIM1J6wSxglBNma3XAdEcVg==
        caps mon = "allow *"
[client.admin]
        key = AQBJ25NZznIpEBAAlCdCy+OyUIvxtNq+1DSLqg==
        auid = 0
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[client.bootstrap-osd]
        key = AQBW25NZtl/RBxAACGWafYy1gPWEmx9geCLi6w==
        caps mon = "allow profile bootstrap-osd"
[mgr.node1]
        key = AQBb25NZ1mIeFhAA/PmRHFY6OgnAMXL1/8pSxw==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
[mgr.node2]
        key = AQBg25NZJ6jyHxAAf2GfBAG5tuNwf9YjkhhEWA==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
[mgr.node3]
        key = AQBl25NZ7h6CJRAAaFiea7hiTrQNVoZysA7n/g==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
5. Generate the monmap
Generate the monmap and add the 3 monitors:
monmaptool --create --add node1 192.168.100.131:6789 --add node2 192.168.100.132:6789 --add node3 192.168.100.133:6789 --fsid 116d4de8-fd14-491f-811f-c1bdd8fac141 monmap

monmaptool --print monmap
monmaptool: monmap file monmap
epoch 0
fsid 116d4de8-fd14-491f-811f-c1bdd8fac141
last_changed 2017-08-16 05:45:37.851899
created 2017-08-16 05:45:37.851899
0: 192.168.100.131:6789/0 mon.node1
1: 192.168.100.132:6789/0 mon.node2
2: 192.168.100.133:6789/0 mon.node3

6. Distribute the config file, keyrings and monmap
Distribute the config file, keyrings and monmap generated in sections 3, 4 and 5 to every machine. The mycluster.mgr.nodeX.keyring files are not needed yet, so we don't distribute them for now (see section 9).
cp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap /etc/ceph
scp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap node2:/etc/ceph
scp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap node3:/etc/ceph

7. Create the cluster
Step 1: Create the {mon data} directories
mkdir /var/lib/ceph/mon/mycluster-mon.node1
ssh node2 mkdir /var/lib/ceph/mon/mycluster-mon.node2
ssh node3 mkdir /var/lib/ceph/mon/mycluster-mon.node3
Note: in mycluster.conf we set {mon data} to /var/lib/ceph/mon/$cluster-$name rather than the default /var/lib/ceph/mon/$cluster-$id:
$cluster-$name expands to mycluster-mon.node1 (node2, node3);
the default $cluster-$id would expand to mycluster-node1 (node2, node3).
Step 2: Initialize the monitors
ceph-mon --cluster mycluster --mkfs -i node1 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
ssh node2 ceph-mon --cluster mycluster --mkfs -i node2 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
ssh node3 ceph-mon --cluster mycluster --mkfs -i node3 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
Note: in mycluster.conf we set {mon data} to /var/lib/ceph/mon/$cluster-$name, which expands to /var/lib/ceph/mon/mycluster-mon.node1 (node2, node3). From --cluster mycluster, ceph-mon locates the config file mycluster.conf, parses out {mon data}, and initializes that directory.
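A quick sanity check that each mon store was created (exact contents vary by Ceph version; a keyring and a store.db directory are the important parts):

ls /var/lib/ceph/mon/mycluster-mon.node1
# e.g.: keyring  kv_backend  store.db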
Step 3: Create the done marker files
touch /var/lib/ceph/mon/mycluster-mon.node1/done
ssh node2 touch /var/lib/ceph/mon/mycluster-mon.node2/done
ssh node3 touch /var/lib/ceph/mon/mycluster-mon.node3/done

Step 4: Start the monitors
systemctl start ceph-mon@node1
ssh node2 systemctl start ceph-mon@node2
ssh node3 systemctl start ceph-mon@node3
Step 5: Check cluster status
ceph --cluster mycluster -s

  cluster:
    id:     116d4de8-fd14-491f-811f-c1bdd8fac141
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:

8. Add OSDs
Each machine has a /dev/sdb; we will use them as OSDs.
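Step 1 below wipes the disks; a minimal sketch using sgdisk (destructive: assumes /dev/sdb holds no data you care about):

sgdisk --zap-all /dev/sdb
ssh node2 sgdisk --zap-all /dev/sdb
ssh node3 sgdisk --zap-all /dev/sdb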
Step 1: Delete their existing partitions (see the sketch above).

Step 2: Prepare
ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdb
ssh node2 ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdb
ssh node3 ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 /dev/sdb

Note: when preparing node3:/dev/sdb we did not pass --bluestore --block.db /dev/sdb --block.wal /dev/sdb; later we will see how it differs from the other two.

Step 3: Activate
ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
ssh node2 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
ssh node3 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
Note: ceph-disk appears to have two problems:
  • As noted earlier, it does not honor a custom {osd data}; it forces the default /var/lib/ceph/osd/$cluster-$id.
  • It apparently cannot assign a chosen osd id to a disk and can only auto-generate one. ceph-disk prepare does have an --osd-id option, but ceph-disk activate ignores it and generates its own id. When the two don't match, you get an error like this:
# ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
command_with_stdin: Error EEXIST: entity osd.0 exists but key does not match
mount_activate: Failed to activate '['ceph', '--cluster', 'mycluster', '--name', 'client.bootstrap-osd', '--keyring', '/etc/ceph/mycluster.client.bootstrap-osd.keyring', '-i', '-', 'osd', 'new', u'ca8aac6a-b442-4b07-8fa6-62ac93b7cd29']' failed with status code 17
The '-i', '-' arguments show that it always generates the osd id itself.
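One way to recover from this EEXIST state, assuming osd.0 is a stale entry you want to discard (verify first!), is to remove its auth and OSD-map entries and re-run activate; a sketch:

ceph --cluster mycluster osd tree        # confirm osd.0 is the stale entry
ceph --cluster mycluster auth del osd.0  # drop the mismatched key
ceph --cluster mycluster osd rm 0        # remove it from the OSD map
ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring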
Step 4: Inspect the OSDs
During ceph-disk prepare, node1:/dev/sdb and node2:/dev/sdb both got the --bluestore --block.db /dev/sdb --block.wal /dev/sdb options; node3:/dev/sdb did not. Let's see how they differ.
4.1 node1
mount | grep sdb
/dev/sdb1 on /var/lib/ceph/osd/mycluster-0 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

ls /var/lib/ceph/osd/mycluster-0/
activate.monmap  block     block.db_uuid  block.wal       bluefs     fsid     kv_backend  mkfs_done  systemd  whoami
active           block.db  block_uuid     block.wal_uuid  ceph_fsid  keyring  magic       ready      type

ls -l /var/lib/ceph/osd/mycluster-0/block
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block -> /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9
ls -l /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9 -> ../../sdb2
blkid /dev/sdb2
/dev/sdb2: PARTLABEL="ceph block" PARTUU
cat /var/lib/ceph/osd/mycluster-0/block_uuid
a12dd642-b64c-4fef-b9e6-0b45cff40fa9

ls -l /var/lib/ceph/osd/mycluster-0/block.db
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block.db -> /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2
ls -l /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2 -> ../../sdb3
blkid /dev/sdb3
/dev/sdb3: PARTLABEL="ceph block.db" PARTUU
cat /var/lib/ceph/osd/mycluster-0/block.db_uuid
1c107775-45e6-4b79-8a2f-1592f5cb03f2

ls -l /var/lib/ceph/osd/mycluster-0/block.wal
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block.wal -> /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f
ls -l /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f -> ../../sdb4
blkid /dev/sdb4
/dev/sdb4: PARTLABEL="ceph block.wal" PARTUU
cat /var/lib/ceph/osd/mycluster-0/block.wal_uuid
76055101-b892-4da9-b80a-c1920f24183f
As you can see, on node1 (and node2) /dev/sdb is split into 4 partitions:
  • /dev/sdb1: metadata
  • /dev/sdb2: the main block device
  • /dev/sdb3: db
  • /dev/sdb4: wal
For details, see: ceph-disk prepare --help
4.2 node3
mount | grep sdb
/dev/sdb1 on /var/lib/ceph/osd/mycluster-2 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

ls /var/lib/ceph/osd/mycluster-2
activate.monmap  active  block  block_uuid  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  systemd  type  whoami

ls -l /var/lib/ceph/osd/mycluster-2/block
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:54 /var/lib/ceph/osd/mycluster-2/block -> /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb
ls -l /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb
lrwxrwxrwx. 1 root root 10 Aug 16 05:56 /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb -> ../../sdb2
blkid /dev/sdb2
/dev/sdb2: PARTLABEL="ceph block" PARTUU
cat /var/lib/ceph/osd/mycluster-2/block_uuid
0a70b661-43f5-4562-83e0-cbe6bdbd31fb
As you can see, on node3 /dev/sdb is split into 2 partitions:
  • /dev/sdb1: metadata
  • /dev/sdb2: the main block device; the db and wal also live on this partition.

For details, see: ceph-disk prepare --help
Step 5: Check cluster status
ceph --cluster mycluster -s

  cluster:
    id:     116d4de8-fd14-491f-811f-c1bdd8fac141
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
Since no mgr has been added yet, the cluster is in WARN state.
9. Add the mgrs
Step 1: Create the {mgr data} directories
mkdir /var/lib/ceph/mgr/mycluster-mgr.node1
ssh node2 mkdir /var/lib/ceph/mgr/mycluster-mgr.node2
ssh node3 mkdir /var/lib/ceph/mgr/mycluster-mgr.node3
Note: as with {mon data}, in mycluster.conf we set {mgr data} to /var/lib/ceph/mgr/$cluster-$name rather than the default /var/lib/ceph/mgr/$cluster-$id.
Step 2: Distribute the mgr keyrings
cp mycluster.mgr.node1.keyring /var/lib/ceph/mgr/mycluster-mgr.node1/keyring
scp mycluster.mgr.node2.keyring node2:/var/lib/ceph/mgr/mycluster-mgr.node2/keyring
scp mycluster.mgr.node3.keyring node3:/var/lib/ceph/mgr/mycluster-mgr.node3/keyring

Step 3: Start the mgrs
systemctl start ceph-mgr@node1
ssh node2 systemctl start ceph-mgr@node2
ssh node3 systemctl start ceph-mgr@node3

Step 4: Check cluster status
ceph --cluster mycluster -s

  cluster:
    id:     116d4de8-fd14-491f-811f-c1bdd8fac141
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node3, node2
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   5158 MB used, 113 GB / 118 GB avail
    pgs:
As you can see, with the mgrs added the cluster is in OK state.
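As an optional smoke test (the pool name test is arbitrary), create a pool and store one object through the new cluster:

ceph --cluster mycluster osd pool create test 128
rados --cluster mycluster -p test put obj1 /etc/hosts
rados --cluster mycluster -p test ls
# expect: obj1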
Reply from the OP, posted 2022-07-20 13:41:38
Deploy the Ceph mon service

Install the ceph-mon package (run on all nodes):
yum install -y ceph-mon

Initialize the mon service (run on ceph01)

Generate a uuid:
uuidgen
> 9bf24809-220b-4910-b384-c1f06ea80728
Create the Ceph configuration file:
cat >> /etc/ceph/ceph.conf <<EOF
[global]
fsid = 9bf24809-220b-4910-b384-c1f06ea80728
mon_initial_members = ceph01,ceph02,ceph03
mon_host = 10.40.65.156,10.40.65.175,10.40.65.129
public_network = 10.40.65.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 64
osd_pool_default_pgp_num = 64
osd_crush_chooseleaf_type = 1
EOF
Create the cluster monitor key:
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Create the client.admin and client.bootstrap-osd user keys and add them to the cluster key:
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
Generate the monitor map from the hostnames, host IP addresses and FSID:
monmaptool --create --add ceph01 10.40.65.156 --add ceph02 10.40.65.175 --add ceph03 10.40.65.129 --fsid 9bf24809-220b-4910-b384-c1f06ea80728 /tmp/monmap
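To confirm the generated map (the same check used in the first post):

monmaptool --print /tmp/monmap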
Initialize and start the monitor service:
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph01/

systemctl start ceph-mon@ceph01
systemctl enable ceph-mon@ceph01
systemctl status ceph-mon@ceph01
Sync the config file, keys and monmap to the other nodes (run on ceph01)

Copy ceph.client.admin.keyring, the client.bootstrap-osd key, ceph.mon.keyring, the monitor map and ceph.conf to the other 2 nodes:
scp /etc/ceph/ceph.client.admin.keyring root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring root@ceph03:/etc/ceph/

scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph02:/var/lib/ceph/bootstrap-osd/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph03:/var/lib/ceph/bootstrap-osd/

scp /tmp/ceph.mon.keyring root@ceph02:/tmp/
scp /tmp/ceph.mon.keyring root@ceph03:/tmp/

scp /tmp/monmap root@ceph02:/tmp/
scp /tmp/monmap root@ceph03:/tmp/

scp /etc/ceph/ceph.conf root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.conf root@ceph03:/etc/ceph/
Start the monitor service on the other nodes (run on ceph02):
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph02/

systemctl start ceph-mon@ceph02
systemctl enable ceph-mon@ceph02
systemctl status ceph-mon@ceph02
Start the monitor service on the other nodes (run on ceph03):
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph03/

systemctl start ceph-mon@ceph03
systemctl enable ceph-mon@ceph03
systemctl status ceph-mon@ceph03
Check cluster status (run on any node)
Query the cluster with ceph -s; the services section shows the 3 mon daemons started:
ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr:
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Deploy the Ceph osd service (automated with ceph-volume)

Install the ceph-osd package (run on all nodes):
yum install -y ceph-osd

Initialize the osd service (run on all nodes)
Identify the disk device names with fdisk or a similar tool, then use ceph-volume to create the osds automatically:
ceph-volume lvm create --data /dev/sda
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
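To inspect what ceph-volume created on a node (the LV path, osd id and osd fsid per device), you can run:

ceph-volume lvm list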
Check cluster status (run on any node)
Query with ceph osd tree; all osd services are up:
ceph osd tree
> ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
> -1       16.36908 root default
> -3        5.45636     host ceph01
>  0   hdd  1.81879         osd.0       up  1.00000 1.00000
>  1   hdd  1.81879         osd.1       up  1.00000 1.00000
>  2   hdd  1.81879         osd.2       up  1.00000 1.00000
> -5        5.45636     host ceph02
>  3   hdd  1.81879         osd.3       up  1.00000 1.00000
>  4   hdd  1.81879         osd.4       up  1.00000 1.00000
>  5   hdd  1.81879         osd.5       up  1.00000 1.00000
> -7        5.45636     host ceph03
>  6   hdd  1.81879         osd.6       up  1.00000 1.00000
>  7   hdd  1.81879         osd.7       up  1.00000 1.00000
>  8   hdd  1.81879         osd.8       up  1.00000 1.00000
Deploy the Ceph mgr service and enable the Dashboard

Install the ceph-mgr package (run on all nodes):
yum install -y ceph-mgr

Initialize and start the active MGR service (run on ceph01):
mkdir -p /var/lib/ceph/mgr/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph01.keyring --gen-key -n mgr.ceph01 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph01.keyring
ceph auth get-or-create mgr.ceph01 -o /var/lib/ceph/mgr/ceph-ceph01/keyring

systemctl start ceph-mgr@ceph01
systemctl enable ceph-mgr@ceph01
systemctl status ceph-mgr@ceph01
# ^7 n" b6 h5 A- C9 c* F8 `初始化并启动从MGR服务(Ceph02执行): [- K: ]0 j& K5 Q6 @3 B
mkdir -p /var/lib/ceph/mgr/ceph-ceph02
( _$ }0 q6 A6 v8 k, `: x) cchown ceph.ceph -R /var/lib/ceph
* _7 I- ~* g' Q7 W/ z) X) aceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph02.keyring --gen-key -n mgr.ceph02 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
9 d/ D2 [8 D4 I8 ]' jceph auth import -i /etc/ceph/ceph.mgr.ceph02.keyring' p# c1 Q7 ^5 }: J
ceph auth get-or-create mgr.ceph02 -o /var/lib/ceph/mgr/ceph-ceph02/keyring! O' O  f2 N1 r. F' ^( f
19 p: ]/ Y0 ?- r% x: m8 n6 P
28 R. m) p+ V) i; _6 D+ F# C4 j
3- g: u4 [1 I) {, j1 `& o) Q
4/ s4 r0 M+ l3 R) b
5
) M2 }6 V# W* A, ?: A4 Dsystemctl start ceph-mgr@ceph023 r$ \. h$ k/ T( Y+ |- i: J
systemctl enable ceph-mgr@ceph02
; i4 z, ?6 |- H" t1 q4 G6 Vsystemctl status ceph-mgr@ceph02$ ~0 O$ m2 R" \8 u# E
1
; G" _. X; \3 ?* m2
4 g; d, h( t. `3
Initialize and start a standby MGR service (run on ceph03):
mkdir -p /var/lib/ceph/mgr/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph03.keyring --gen-key -n mgr.ceph03 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph03.keyring
ceph auth get-or-create mgr.ceph03 -o /var/lib/ceph/mgr/ceph-ceph03/keyring

systemctl start ceph-mgr@ceph03
systemctl enable ceph-mgr@ceph03
systemctl status ceph-mgr@ceph03
Check cluster status (run on any node)
Query with ceph -s; the services section shows the 3 mgr daemons started (one active, two standbys):
ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr: ceph01(active, since 3w), standbys: ceph02, ceph03
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Enable Dashboard access (run on any node)

Enable the mgr dashboard module:
ceph mgr module enable dashboard

Generate and install a self-signed certificate:
ceph dashboard create-self-signed-cert

Configure the dashboard:
ceph config set mgr mgr/dashboard/server_addr 10.40.65.148
ceph config set mgr mgr/dashboard/server_port 8080
ceph config set mgr mgr/dashboard/ssl_server_port 8443

Create a dashboard login user and password:
echo '123456' > password.txt
ceph dashboard ac-user-create admin administrator -i password.txt

Check how the service is exposed:
ceph mgr services

Access the Ceph Dashboard over the web; the username/password is admin/123456:
https://10.40.65.148:8443
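ceph mgr services prints the exposed endpoints as JSON; with the settings above the output should look roughly like this (illustrative):

{
    "dashboard": "https://10.40.65.148:8443/"
}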