易陆发现论坛
Deploying a Ceph cluster manually and adding OSDs

Posted on 2022-7-19 11:20:00
1. Preparation

The previous post deployed only a single-node cluster. In this post we manually deploy a multi-node cluster, mycluster. We have three machines: node1, node2 and node3; node1 can ssh/scp to the other two without a password. All of our work is done on node1.

Preparation consists of installing the ceph rpm packages on every machine (see section 1 of the previous post) and modifying the following files on each machine:
/usr/lib/systemd/system/ceph-mon@.service
/usr/lib/systemd/system/ceph-osd@.service
/usr/lib/systemd/system/ceph-mds@.service
/usr/lib/systemd/system/ceph-mgr@.service
/usr/lib/systemd/system/ceph-radosgw@.service

Changes:

Environment=CLUSTER=ceph                                       <--- change to CLUSTER=mycluster
ExecStart=/usr/bin/... --id %i --setuser ceph --setgroup ceph  <--- delete --setuser ceph --setgroup ceph

2. Create the working directory
Create a working directory on node1; all subsequent work is done in this directory on node1:

mkdir /tmp/mk-ceph-cluster
cd /tmp/mk-ceph-cluster

3. Create the configuration file

vim mycluster.conf

[global]
cluster = mycluster
fsid = 116d4de8-fd14-491f-811f-c1bdd8fac141
public network = 192.168.100.0/24
cluster network = 192.168.73.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
admin socket = /var/run/ceph/$cluster-$name.asock
pid file = /var/run/ceph/$cluster-$name.pid
log file = /var/log/ceph/$cluster-$name.log
log to syslog = false
max open files = 131072
ms bind ipv6 = false

[mon]
mon initial members = node1,node2,node3
mon host = 192.168.100.131:6789,192.168.100.132:6789,192.168.100.133:6789
;Yuanguo: the default value of {mon data} is /var/lib/ceph/mon/$cluster-$id,
;         we overwrite it.
mon data = /var/lib/ceph/mon/$cluster-$name
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
debug ms = 20
debug mon = 20
debug paxos = 20
debug auth = 20

[mon.node1]
host = node1
mon addr = 192.168.100.131:6789

[mon.node2]
host = node2
mon addr = 192.168.100.132:6789

[mon.node3]
host = node3
mon addr = 192.168.100.133:6789

[mgr]
;Yuanguo: the default value of {mgr data} is /var/lib/ceph/mgr/$cluster-$id,
;         we overwrite it.
mgr data = /var/lib/ceph/mgr/$cluster-$name

[osd]
;Yuanguo: we wish to overwrite {osd data}, but it seems that 'ceph-disk' forces
;         to use the default value, so keep the default now; maybe in later versions
;         of ceph the limitation will be eliminated.
osd data = /var/lib/ceph/osd/$cluster-$id
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2
debug ms = 100
debug osd = 100
A few notes on this configuration file. We override some defaults, such as {mon data} and {mgr data}, but not {osd data}, because ceph-disk appears to force the default value. Also, the pid and admin-socket files are placed in /var/run/ceph/ and named $cluster-$name; log files go to /var/log/ceph/, also named $cluster-$name. All of these can be overridden.
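To make the $cluster/$name/$id metavariables in the paths above concrete, here is a small sketch of how they expand; `expand()` is a hypothetical helper for illustration, not part of Ceph:

```python
# Sketch of how Ceph-style metavariables in the config above expand.
# expand() is a hypothetical helper, not part of Ceph itself.
def expand(template: str, cluster: str, daemon_type: str, daemon_id: str) -> str:
    name = f"{daemon_type}.{daemon_id}"  # $name expands to $type.$id
    return (template.replace("$cluster", cluster)
                    .replace("$name", name)
                    .replace("$type", daemon_type)
                    .replace("$id", daemon_id))

# {mon data} = /var/lib/ceph/mon/$cluster-$name
print(expand("/var/lib/ceph/mon/$cluster-$name", "mycluster", "mon", "node1"))
# -> /var/lib/ceph/mon/mycluster-mon.node1

# the default /var/lib/ceph/mon/$cluster-$id would give:
print(expand("/var/lib/ceph/mon/$cluster-$id", "mycluster", "mon", "node1"))
# -> /var/lib/ceph/mon/mycluster-node1
```

This is why the mkdir commands later in this post create directories named mycluster-mon.node1 rather than mycluster-node1.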
4. Generate the keyrings

As mentioned in the single-node post, there are two ways to manage cluster users and their capabilities. Here we use the first: generate the keyring files up front, then bring them in when creating the cluster so they take effect.
ceph-authtool --create-keyring mycluster.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring mycluster.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring mycluster.client.bootstrap-osd.keyring --gen-key -n client.bootstrap-osd --cap mon 'allow profile bootstrap-osd'
ceph-authtool --create-keyring mycluster.mgr.node1.keyring --gen-key -n mgr.node1 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool --create-keyring mycluster.mgr.node2.keyring --gen-key -n mgr.node2 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool --create-keyring mycluster.mgr.node3.keyring --gen-key -n mgr.node3 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'

ceph-authtool mycluster.keyring --import-keyring mycluster.client.admin.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.client.bootstrap-osd.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node1.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node2.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node3.keyring
cat mycluster.keyring
[mon.]
        key = AQA525NZsY73ERAAIM1J6wSxglBNma3XAdEcVg==
        caps mon = "allow *"
[client.admin]
        key = AQBJ25NZznIpEBAAlCdCy+OyUIvxtNq+1DSLqg==
        auid = 0
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[client.bootstrap-osd]
        key = AQBW25NZtl/RBxAACGWafYy1gPWEmx9geCLi6w==
        caps mon = "allow profile bootstrap-osd"
[mgr.node1]
        key = AQBb25NZ1mIeFhAA/PmRHFY6OgnAMXL1/8pSxw==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
[mgr.node2]
        key = AQBg25NZJ6jyHxAAf2GfBAG5tuNwf9YjkhhEWA==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
[mgr.node3]
        key = AQBl25NZ7h6CJRAAaFiea7hiTrQNVoZysA7n/g==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"

5. Generate the monmap
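The keyring above is an INI-like text format, so it can be inspected programmatically. A sketch (real tooling should use ceph-authtool; the sample below is dedented because Python's configparser treats indented lines as continuations):

```python
import configparser

# A dedented fragment in the same shape as the mycluster.keyring dump above.
sample = """[client.admin]
key = AQBJ25NZznIpEBAAlCdCy+OyUIvxtNq+1DSLqg==
caps mon = "allow *"
caps osd = "allow *"
"""

cp = configparser.ConfigParser()
cp.read_string(sample)
for entity in cp.sections():
    # collect the "caps <service>" options, stripping the surrounding quotes
    caps = {k[len("caps "):]: v.strip('"')
            for k, v in cp[entity].items() if k.startswith("caps ")}
    print(entity, caps)
# client.admin {'mon': 'allow *', 'osd': 'allow *'}
```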
Generate the monmap and add the 3 monitors:
monmaptool --create --add node1 192.168.100.131:6789 --add node2 192.168.100.132:6789 --add node3 192.168.100.133:6789 --fsid 116d4de8-fd14-491f-811f-c1bdd8fac141 monmap

monmaptool --print monmap
monmaptool: monmap file monmap
epoch 0
fsid 116d4de8-fd14-491f-811f-c1bdd8fac141
last_changed 2017-08-16 05:45:37.851899
created 2017-08-16 05:45:37.851899
0: 192.168.100.131:6789/0 mon.node1
1: 192.168.100.132:6789/0 mon.node2
2: 192.168.100.133:6789/0 mon.node3

6. Distribute the configuration file, keyrings and monmap

Distribute the configuration file, keyrings and monmap generated in steps 2, 3 and 4 to every machine. Since the mycluster.mgr.nodeX.keyring files are not needed yet, we do not distribute them for now (see section 9, where the mgr daemons are added).
cp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap /etc/ceph
scp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap node2:/etc/ceph
scp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap node3:/etc/ceph

7. Create the cluster

Step 1: create the {mon data} directories

mkdir /var/lib/ceph/mon/mycluster-mon.node1
ssh node2 mkdir /var/lib/ceph/mon/mycluster-mon.node2
ssh node3 mkdir /var/lib/ceph/mon/mycluster-mon.node3
Note: in mycluster.conf we set {mon data} to /var/lib/ceph/mon/$cluster-$name rather than the default /var/lib/ceph/mon/$cluster-$id. Here $cluster-$name expands to mycluster-mon.node1 (node2, node3), whereas the default $cluster-$id would expand to mycluster-node1 (node2, node3).
Step 2: initialize the monitors

ceph-mon --cluster mycluster --mkfs -i node1 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
ssh node2 ceph-mon --cluster mycluster --mkfs -i node2 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
ssh node3 ceph-mon --cluster mycluster --mkfs -i node3 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
Note: since mycluster.conf sets {mon data} to /var/lib/ceph/mon/$cluster-$name, which expands to /var/lib/ceph/mon/mycluster-mon.node1 (node2, node3), ceph-mon uses --cluster mycluster to locate the configuration file mycluster.conf, parses {mon data} from it, and performs the initialization in that directory.
Step 3: touch done

touch /var/lib/ceph/mon/mycluster-mon.node1/done
ssh node2 touch /var/lib/ceph/mon/mycluster-mon.node2/done
ssh node3 touch /var/lib/ceph/mon/mycluster-mon.node3/done

Step 4: start the monitors

systemctl start ceph-mon@node1
ssh node2 systemctl start ceph-mon@node2
ssh node3 systemctl start ceph-mon@node3

Step 5: check the cluster status

ceph --cluster mycluster -s
cluster:
  id:     116d4de8-fd14-491f-811f-c1bdd8fac141
  health: HEALTH_OK
services:
  mon: 3 daemons, quorum node1,node2,node3
  mgr: no daemons active
  osd: 0 osds: 0 up, 0 in
data:
  pools:   0 pools, 0 pgs
  objects: 0 objects, 0 bytes
  usage:   0 kB used, 0 kB / 0 kB avail
  pgs:

8. Add OSDs
Each machine in the cluster has a /dev/sdb; we will use them as OSDs.
Step 1: delete any existing partitions on them

Step 2: prepare

ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdb
ssh node2 ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdb
ssh node3 ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 /dev/sdb

Note: when preparing node3:/dev/sdb we did not pass the options --bluestore --block.db /dev/sdb --block.wal /dev/sdb; later we will see how it differs from the other two.

Step 3: activate

ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
ssh node2 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
ssh node3 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
Note: ceph-disk appears to have two problems:

  • As mentioned earlier, it ignores a custom {osd data} and forces the default /var/lib/ceph/osd/$cluster-$id.
  • It does not seem to let you choose the osd id for a disk; it can only generate one automatically. ceph-disk prepare does have an --osd-id option, but ceph-disk activate ignores it and generates its own id. When the two don't match, you get an error like:

# ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
command_with_stdin: Error EEXIST: entity osd.0 exists but key does not match
mount_activate: Failed to activate '['ceph', '--cluster', 'mycluster', '--name', 'client.bootstrap-osd', '--keyring', '/etc/ceph/mycluster.client.bootstrap-osd.keyring', '-i', '-', 'osd', 'new', u'ca8aac6a-b442-4b07-8fa6-62ac93b7cd29']' failed with status code 17

The '-i', '-' shows that it always generates the osd id itself.
Step 4: inspect the OSDs

During ceph-disk prepare, node1:/dev/sdb and node2:/dev/sdb were both given the --bluestore --block.db /dev/sdb --block.wal /dev/sdb options; node3:/dev/sdb was not. Let's see how they differ.
4.1 node1

mount | grep sdb
/dev/sdb1 on /var/lib/ceph/osd/mycluster-0 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

ls /var/lib/ceph/osd/mycluster-0/
activate.monmap  block     block.db_uuid  block.wal       bluefs     fsid     kv_backend  mkfs_done  systemd  whoami
active           block.db  block_uuid     block.wal_uuid  ceph_fsid  keyring  magic       ready      type

ls -l /var/lib/ceph/osd/mycluster-0/block
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block -> /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9
ls -l /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9 -> ../../sdb2
blkid /dev/sdb2
/dev/sdb2: PARTLABEL="ceph block" PARTUU
cat /var/lib/ceph/osd/mycluster-0/block_uuid
a12dd642-b64c-4fef-b9e6-0b45cff40fa9

ls -l /var/lib/ceph/osd/mycluster-0/block.db
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block.db -> /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2
ls -l /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2 -> ../../sdb3
blkid /dev/sdb3
/dev/sdb3: PARTLABEL="ceph block.db" PARTUU
cat /var/lib/ceph/osd/mycluster-0/block.db_uuid
1c107775-45e6-4b79-8a2f-1592f5cb03f2

ls -l /var/lib/ceph/osd/mycluster-0/block.wal
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block.wal -> /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f
ls -l /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f -> ../../sdb4
blkid /dev/sdb4
/dev/sdb4: PARTLABEL="ceph block.wal" PARTUU
cat /var/lib/ceph/osd/mycluster-0/block.wal_uuid
76055101-b892-4da9-b80a-c1920f24183f
As shown, on node1 (and node2) /dev/sdb was split into 4 partitions:
  • /dev/sdb1: metadata
  • /dev/sdb2: the main block device
  • /dev/sdb3: db
  • /dev/sdb4: wal
For details, see: ceph-disk prepare --help
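Summarizing the two layouts observed above as data (roles per partition, taken from the listings; the dictionaries are illustrative, not produced by any Ceph tool):

```python
# BlueStore partition layouts produced by the two ceph-disk prepare
# invocations above, as observed in the ls/blkid listings.
with_db_wal = {        # node1/node2: --bluestore --block.db --block.wal
    "/dev/sdb1": "xfs metadata partition (mounted at {osd data})",
    "/dev/sdb2": "main block device",
    "/dev/sdb3": "RocksDB metadata (block.db)",
    "/dev/sdb4": "RocksDB write-ahead log (block.wal)",
}
plain_bluestore = {    # node3: no --block.db / --block.wal options
    "/dev/sdb1": "xfs metadata partition (mounted at {osd data})",
    "/dev/sdb2": "main block device (db and wal colocated here)",
}

for part, role in with_db_wal.items():
    print(part, "->", role)
```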
4.2 node3

mount | grep sdb
/dev/sdb1 on /var/lib/ceph/osd/mycluster-2 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

ls /var/lib/ceph/osd/mycluster-2
activate.monmap  active  block  block_uuid  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  systemd  type  whoami

ls -l /var/lib/ceph/osd/mycluster-2/block
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:54 /var/lib/ceph/osd/mycluster-2/block -> /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb
ls -l /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb
lrwxrwxrwx. 1 root root 10 Aug 16 05:56 /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb -> ../../sdb2
blkid /dev/sdb2
/dev/sdb2: PARTLABEL="ceph block" PARTUU
cat /var/lib/ceph/osd/mycluster-2/block_uuid
0a70b661-43f5-4562-83e0-cbe6bdbd31fb
As shown, on node3 /dev/sdb was split into only 2 partitions:
  • /dev/sdb1: metadata
  • /dev/sdb2: the main block device; the db and wal also live on this partition.
For details, see: ceph-disk prepare --help
Step 5: check the cluster status

ceph --cluster mycluster -s
cluster:
  id:     116d4de8-fd14-491f-811f-c1bdd8fac141
  health: HEALTH_WARN
          no active mgr
services:
  mon: 3 daemons, quorum node1,node2,node3
  mgr: no daemons active
  osd: 3 osds: 3 up, 3 in
data:
  pools:   0 pools, 0 pgs
  objects: 0 objects, 0 bytes
  usage:   0 kB used, 0 kB / 0 kB avail
  pgs:

Since no mgr has been added yet, the cluster is in the WARN state.
9. Add mgr

Step 1: create the {mgr data} directories

mkdir /var/lib/ceph/mgr/mycluster-mgr.node1
ssh node2 mkdir /var/lib/ceph/mgr/mycluster-mgr.node2
ssh node3 mkdir /var/lib/ceph/mgr/mycluster-mgr.node3

Note: as with {mon data}, in mycluster.conf we set {mgr data} to /var/lib/ceph/mgr/$cluster-$name instead of the default /var/lib/ceph/mgr/$cluster-$id.

Step 2: distribute the mgr keyrings

cp mycluster.mgr.node1.keyring /var/lib/ceph/mgr/mycluster-mgr.node1/keyring
scp mycluster.mgr.node2.keyring node2:/var/lib/ceph/mgr/mycluster-mgr.node2/keyring
scp mycluster.mgr.node3.keyring node3:/var/lib/ceph/mgr/mycluster-mgr.node3/keyring

Step 3: start the mgr daemons

systemctl start ceph-mgr@node1
ssh node2 systemctl start ceph-mgr@node2
ssh node3 systemctl start ceph-mgr@node3

Step 4: check the cluster status

ceph --cluster mycluster -s
cluster:
  id:     116d4de8-fd14-491f-811f-c1bdd8fac141
  health: HEALTH_OK
services:
  mon: 3 daemons, quorum node1,node2,node3
  mgr: node1(active), standbys: node3, node2
  osd: 3 osds: 3 up, 3 in
data:
  pools:   0 pools, 0 pgs
  objects: 0 objects, 0 bytes
  usage:   5158 MB used, 113 GB / 118 GB avail
  pgs:
As shown, once the mgr daemons are added, the cluster reaches the OK state.
Posted by OP on 2022-7-20 13:41:38
Deploy the Ceph mon service

Install the ceph-mon package (run on all nodes):

yum install -y ceph-mon

Initialize the mon service (run on ceph01).

Generate a uuid:

uuidgen
> 9bf24809-220b-4910-b384-c1f06ea80728

Create the Ceph configuration file:

cat >> /etc/ceph/ceph.conf <<EOF
[global]
fsid = 9bf24809-220b-4910-b384-c1f06ea80728
mon_initial_members = ceph01,ceph02,ceph03
mon_host = 10.40.65.156,10.40.65.175,10.40.65.129
public_network = 10.40.65.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 64
osd_pool_default_pgp_num = 64
osd_crush_chooseleaf_type = 1
EOF
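The fsid is just a random UUID (what uuidgen produces), and the heredoc above is plain INI, so both can be generated programmatically. A sketch using only the Python standard library (illustrative; the real file is usually written by hand or by deployment tooling):

```python
import configparser
import io
import uuid

# Generate an fsid, equivalent to running uuidgen.
fsid = str(uuid.uuid4())  # e.g. 9bf24809-220b-4910-b384-c1f06ea80728

# Render a minimal ceph.conf like the heredoc above.
conf = configparser.ConfigParser()
conf["global"] = {
    "fsid": fsid,
    "mon_initial_members": "ceph01,ceph02,ceph03",
    "mon_host": "10.40.65.156,10.40.65.175,10.40.65.129",
    "public_network": "10.40.65.0/24",
    "osd_pool_default_size": "3",
    "osd_pool_default_min_size": "2",
}

buf = io.StringIO()
conf.write(buf)
print(buf.getvalue())
```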
Create the cluster monitor key:

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Create the client.admin and client.bootstrap-osd keys and import them into the cluster keyring:

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# z. _5 s7 Y* T& l$ h8 A+ N! ^使用主机名、主机IP地址、FSID生成monitor map。
/ ]9 N. s7 U# ^8 q/ Hmonmaptool --create --add ceph01 10.40.65.156 --add ceph02 10.40.65.175 --add ceph03 10.40.65.129 --fsid 9bf24809-220b-4910-b384-c1f06ea80728 /tmp/monmap
" p1 b5 b# g5 ~1, u% ]4 k+ G6 a! S0 i9 h# z& u
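The monmaptool invocation above can be generated from a host table, so adding a monitor later is a one-line change. A sketch with a hypothetical helper (the command is built as a string, not executed):

```python
# Build the monmaptool argument list above from a host table.
# monmaptool_args() is a hypothetical helper for illustration.
def monmaptool_args(mons: dict, fsid: str, out: str) -> list:
    args = ["monmaptool", "--create"]
    for name, addr in mons.items():
        args += ["--add", name, addr]
    return args + ["--fsid", fsid, out]

mons = {
    "ceph01": "10.40.65.156",
    "ceph02": "10.40.65.175",
    "ceph03": "10.40.65.129",
}
cmd = " ".join(monmaptool_args(mons, "9bf24809-220b-4910-b384-c1f06ea80728", "/tmp/monmap"))
print(cmd)
```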
Initialize and start the monitor service:

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph01/

systemctl start ceph-mon@ceph01
systemctl enable ceph-mon@ceph01
systemctl status ceph-mon@ceph01
Sync the configuration file, keys and monmap to the other nodes (run on ceph01).

Copy ceph.client.admin.keyring, the client.bootstrap-osd key, ceph.mon.keyring, the monitor map and ceph.conf to the other 2 nodes:

scp /etc/ceph/ceph.client.admin.keyring root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring root@ceph03:/etc/ceph/

scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph02:/var/lib/ceph/bootstrap-osd/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph03:/var/lib/ceph/bootstrap-osd/

scp /tmp/ceph.mon.keyring root@ceph02:/tmp/
scp /tmp/ceph.mon.keyring root@ceph03:/tmp/

scp /tmp/monmap root@ceph02:/tmp/
scp /tmp/monmap root@ceph03:/tmp/

scp /etc/ceph/ceph.conf root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.conf root@ceph03:/etc/ceph/
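The scp fan-out above is the same command over a cross product of files and nodes, so it can be expressed as data. A sketch that only builds and prints the commands (nothing is executed; file paths are taken from the commands above):

```python
import itertools
import os.path

# Files to distribute (same paths as the scp commands above) and targets.
files = [
    "/etc/ceph/ceph.client.admin.keyring",
    "/var/lib/ceph/bootstrap-osd/ceph.keyring",
    "/tmp/ceph.mon.keyring",
    "/tmp/monmap",
    "/etc/ceph/ceph.conf",
]
nodes = ["ceph02", "ceph03"]

# Each file goes to the same directory on every target node.
cmds = [f"scp {f} root@{n}:{os.path.dirname(f)}/"
        for f, n in itertools.product(files, nodes)]
for c in cmds:
    print(c)
```

Adding a fourth node or another file is then a one-line change to the lists.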
Start the monitor service on the remaining nodes (run on ceph02):

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph02/

systemctl start ceph-mon@ceph02
systemctl enable ceph-mon@ceph02
systemctl status ceph-mon@ceph02
Start the monitor service on the remaining nodes (run on ceph03):

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph03/

systemctl start ceph-mon@ceph03
systemctl enable ceph-mon@ceph03
systemctl status ceph-mon@ceph03
Check the cluster status (run on any node)

Query the cluster with ceph -s; the 3 mon daemons show up under services:

ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr:
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Deploy the Ceph osd service (automated with ceph-volume)

Install the ceph-osd package (run on all nodes):

yum install -y ceph-osd

Initialize the osd service (run on all nodes).

Identify the disk devices with fdisk or a similar tool, then use ceph-volume to create the OSDs automatically:

ceph-volume lvm create --data /dev/sda
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
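The pool defaults in the config above (pg_num = 64, size = 3) follow the common rule of thumb of roughly 100 PGs per OSD, divided by the replica count and rounded to a power of two. A sketch of that guideline (an informal heuristic, not an official Ceph formula):

```python
# Rule-of-thumb PG sizing: total_pgs ~= osds * 100 / replica_size,
# rounded down to a power of two, then split across pools.
# This is the common community guideline, not an official formula.
def suggest_pg_num(num_osds: int, replica_size: int = 3) -> int:
    target = num_osds * 100 // replica_size
    pg = 1
    while pg * 2 <= target:  # largest power of two <= target
        pg *= 2
    return pg

print(suggest_pg_num(9, 3))  # 9 OSDs (3 per node), size 3 -> 256
print(suggest_pg_num(3, 3))  # 3 OSDs, size 3 -> 64
```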
Check the cluster status (run on any node)

Query the cluster with ceph osd tree; all OSDs show as up:

ceph osd tree
> ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
> -1       16.36908 root default
> -3        5.45636     host ceph01
>  0   hdd  1.81879         osd.0       up  1.00000 1.00000
>  1   hdd  1.81879         osd.1       up  1.00000 1.00000
>  2   hdd  1.81879         osd.2       up  1.00000 1.00000
> -5        5.45636     host ceph02
>  3   hdd  1.81879         osd.3       up  1.00000 1.00000
>  4   hdd  1.81879         osd.4       up  1.00000 1.00000
>  5   hdd  1.81879         osd.5       up  1.00000 1.00000
> -7        5.45636     host ceph03
>  6   hdd  1.81879         osd.6       up  1.00000 1.00000
>  7   hdd  1.81879         osd.7       up  1.00000 1.00000
>  8   hdd  1.81879         osd.8       up  1.00000 1.00000
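The WEIGHT column above defaults to the device capacity in TiB, which is why each ~1.82 TiB OSD shows 1.81879 and each host of three sums to roughly the 5.456 host weight. A quick conversion sketch:

```python
# CRUSH weight defaults to device capacity in TiB (bytes / 2**40).
def crush_weight_tib(size_bytes: int) -> float:
    return round(size_bytes / 2**40, 5)

print(crush_weight_tib(2 * 2**40))  # a 2 TiB device -> 2.0
print(round(3 * 1.81879, 5))        # per-host weight for 3 such OSDs
```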
Deploy the Ceph mgr service and enable the Dashboard

Install the ceph-mgr package (run on all nodes):

yum install -y ceph-mgr

Initialize and start the primary MGR service (run on ceph01):

mkdir -p /var/lib/ceph/mgr/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph01.keyring --gen-key -n mgr.ceph01 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph01.keyring
ceph auth get-or-create mgr.ceph01 -o /var/lib/ceph/mgr/ceph-ceph01/keyring

systemctl start ceph-mgr@ceph01
systemctl enable ceph-mgr@ceph01
systemctl status ceph-mgr@ceph01
Initialize and start a standby MGR service (run on ceph02):

mkdir -p /var/lib/ceph/mgr/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph02.keyring --gen-key -n mgr.ceph02 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph02.keyring
ceph auth get-or-create mgr.ceph02 -o /var/lib/ceph/mgr/ceph-ceph02/keyring

systemctl start ceph-mgr@ceph02
systemctl enable ceph-mgr@ceph02
systemctl status ceph-mgr@ceph02
Initialize and start a standby MGR service (run on ceph03):

mkdir -p /var/lib/ceph/mgr/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph03.keyring --gen-key -n mgr.ceph03 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph03.keyring
ceph auth get-or-create mgr.ceph03 -o /var/lib/ceph/mgr/ceph-ceph03/keyring

systemctl start ceph-mgr@ceph03
systemctl enable ceph-mgr@ceph03
systemctl status ceph-mgr@ceph03
Check the cluster status (run on any node)

Query the cluster with ceph -s; the 3 mgr daemons show up under services:

ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr: ceph01(active, since 3w), standbys: ceph02, ceph03
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Enable Dashboard access (run on any node)

Enable the mgr dashboard module:

ceph mgr module enable dashboard

Generate and install a self-signed certificate:

ceph dashboard create-self-signed-cert

Configure the dashboard:

ceph config set mgr mgr/dashboard/server_addr 10.40.65.148
ceph config set mgr mgr/dashboard/server_port 8080
ceph config set mgr mgr/dashboard/ssl_server_port 8443

Create a dashboard login user and password:

echo '123456' > password.txt
ceph dashboard ac-user-create admin administrator -i password.txt

Check how the service is exposed:

ceph mgr services

Access the Ceph Dashboard in a browser with username/password admin/123456:

https://10.40.65.148:8443/
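The URL reported by ceph mgr services is derived from the config values set above (with SSL enabled, the ssl_server_port is used and the scheme is https). A small sketch of that derivation; `dashboard_url()` is a hypothetical helper, not a Ceph API:

```python
# Derive the dashboard URL from the mgr/dashboard config values above.
# dashboard_url() is a hypothetical helper for illustration.
def dashboard_url(server_addr: str, ssl_port: int, ssl: bool = True) -> str:
    scheme = "https" if ssl else "http"
    return f"{scheme}://{server_addr}:{ssl_port}/"

print(dashboard_url("10.40.65.148", 8443))  # https://10.40.65.148:8443/
```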
 楼主| 发表于 2022-7-20 13:47:56 | 显示全部楼层
部署Ceph mon服务
, A3 L5 F2 v0 ~0 E( S9 f, D& C* K安装Ceph-mon服务程序(所有设备执行)3 `3 K7 q1 S4 m9 \6 J3 s
# ?' O6 X- _# H# q
yum install -y ceph-mon
% z: \( M1 a7 u0 h+ j# H/ o1
* \$ e  x* I" e) f- S: Y. |初始化Mon服务(Ceph 01执行)
) R( x5 D- R' j" R& l
/ J2 b! V# c( I% p* W& E8 i  H生成uuid
$ G$ H9 _1 y: x
2 x& g5 k- l. L* I/ }  quuidgen" H' q& B8 d  L
> 9bf24809-220b-4910-b384-c1f06ea80728
: w" j7 R" y% _* l9 f1
$ L' {* P: e- M3 j2) f; Y+ W% [& E/ @: B
创建Ceph配置文件
+ u; [# g" A9 c! u; Z6 b9 f  ?: G7 z# j. Y& W% M) {
cat >> /etc/ceph/ceph.conf <<EOF/ y2 g2 `; O2 r( d, G5 G  n2 O( x
[global]
8 t' ^1 H; q6 Q% ~9 Afsid = 9bf24809-220b-4910-b384-c1f06ea80728
: F. D9 V" G8 Z4 O9 R% jmon_initial_members = ceph01,ceph02,ceph03' w. ]4 S! [2 b
mon_host = 10.40.65.156,10.40.65.175,10.40.65.1296 L5 v3 L$ z+ l' V
public_network = 10.40.65.0/24
5 B1 Z. }$ X2 ~) m& cauth_cluster_required = cephx
7 j7 `& ^+ ]4 Y7 B; }auth_service_required = cephx% ~5 j0 }' G5 H8 O/ T
auth_client_required = cephx
4 f9 B$ R; o6 A: j; t# Iosd_journal_size = 1024
8 d; o/ x7 r  d/ y: W9 rosd_pool_default_size = 3
: M" w4 |) \. t9 H, |* d; p% dosd_pool_default_min_size = 2
0 D) k1 _5 s$ x4 n' {osd_pool_default_pg_num = 642 @# T4 y( N) A! c
osd_pool_default_pgp_num = 64
" y: @* K5 L/ t, F4 ~osd_crush_chooseleaf_type = 1( J9 [7 S  E- ]( O8 b, L
EOF
6 P1 |" L+ u5 p0 A7 D& j1
% Y! R3 c$ ]. y5 j! L5 k29 A9 N3 I+ e4 U7 C5 W) j
3& \, [9 n% a1 R) O0 ?, X: g6 l
4
% g% F; J9 C" K  E# ~5% L2 t- H" q6 w' t& D6 @
6
, P0 k" p. }$ O5 f7
/ L; B& X; O6 M$ s5 g9 D. M# U# j8
$ D+ m- v" x5 K% W9$ X+ X- B9 D6 P% m
10
" X' E7 ]  R4 _& j8 s+ w$ Q11
$ ^. p7 u3 U( `7 C129 M2 y1 f" g, E" b) b+ r/ n
13, |* ^* C7 M! d  m  L, K( L0 {
14
+ G1 U3 a' U" }15
; B9 t' Q! r9 g* Z# C+ W3 Q. M16
; H) C2 h6 x3 A2 a1 t创建集群Monitor密钥。
8 k; U/ Z; I# @3 {7 M# g5 m4 k$ Y, e: o2 E4 j9 l% ?
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
2 ^0 U% B$ S) L( x& t: N$ S1 z4 C: J10 b( A2 ^) V$ }
创建client.admin用户、client.bootstrap-osd用户密钥,添加到集群密钥中。
, D% {1 l4 S& r. T* A* ~! b' Z9 Y( A  s
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
) |$ D# \7 n6 ^0 a$ p) cceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
6 k2 {! t! R4 z) g3 i& }% t; O1
* f1 O8 l9 R) b$ g2+ Z; \; t) ^4 S+ y7 u' j! P4 J4 `; G
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'& P7 `' d! z+ `, u
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
: B, D7 L. ?6 L5 U' ]1
+ w9 K, f7 Z" k9 y8 q2
Generate the monitor map from the hostnames, host IP addresses, and FSID:
monmaptool --create --add ceph01 10.40.65.156 --add ceph02 10.40.65.175 --add ceph03 10.40.65.129 --fsid 9bf24809-220b-4910-b384-c1f06ea80728 /tmp/monmap
Initialize and start the monitor service (run on ceph01):
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph01/
systemctl start ceph-mon@ceph01
systemctl enable ceph-mon@ceph01
systemctl status ceph-mon@ceph01
Sync the configuration file, keyrings, and monmap to the other nodes (run on ceph01).
Copy ceph.client.admin.keyring, the client.bootstrap-osd key, ceph.mon.keyring, the monitor map, and ceph.conf to the other two nodes:
scp /etc/ceph/ceph.client.admin.keyring root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring root@ceph03:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph02:/var/lib/ceph/bootstrap-osd/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph03:/var/lib/ceph/bootstrap-osd/
scp /tmp/ceph.mon.keyring root@ceph02:/tmp/
scp /tmp/ceph.mon.keyring root@ceph03:/tmp/
scp /tmp/monmap root@ceph02:/tmp/
scp /tmp/monmap root@ceph03:/tmp/
scp /etc/ceph/ceph.conf root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.conf root@ceph03:/etc/ceph/
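Copying each file to each node by hand is error-prone. The dry-run sketch below prints the same scp commands from the node and file lists used above; drop the leading `echo` to actually execute them:

```shell
# Dry run: print the scp commands that sync cluster files to the
# other monitor nodes. Remove the leading `echo` to run for real.
NODES="ceph02 ceph03"
FILES="/etc/ceph/ceph.client.admin.keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring
/tmp/ceph.mon.keyring
/tmp/monmap
/etc/ceph/ceph.conf"

for node in $NODES; do
  for f in $FILES; do
    # Destination directory mirrors the source path on each node.
    echo scp "$f" "root@${node}:$(dirname "$f")/"
  done
done
```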
Start the monitor service on the other nodes (run on ceph02):
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph02/
systemctl start ceph-mon@ceph02
systemctl enable ceph-mon@ceph02
systemctl status ceph-mon@ceph02
Start the monitor service on the other nodes (run on ceph03):
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph03/
systemctl start ceph-mon@ceph03
systemctl enable ceph-mon@ceph03
systemctl status ceph-mon@ceph03
Check the cluster status (run on any node).
Query the cluster status with ceph -s; the services section shows that the three mon daemons have started.

ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr:
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
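When scripting a check like the one above, it helps to wait until the cluster actually reports HEALTH_OK rather than sampling once. A hedged sketch of such a wait loop; `get_health` is a stub standing in for a real `ceph health` call so the example runs without a live cluster:

```shell
# Wait until the cluster reports HEALTH_OK, with a retry limit.
# get_health is a stub; on a real node replace its body with:
#   ceph health
get_health() { echo "HEALTH_OK"; }

wait_for_health_ok() {
  tries=$1
  while [ "$tries" -gt 0 ]; do
    status=$(get_health)
    [ "$status" = "HEALTH_OK" ] && { echo "cluster healthy"; return 0; }
    tries=$((tries - 1))
    sleep 1
  done
  echo "gave up: last status $status" >&2
  return 1
}

wait_for_health_ok 5
```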
Deploy the Ceph OSD service (automated creation with ceph-volume)
Install the ceph-osd package (run on all nodes):
yum install -y ceph-osd
Initialize the OSD service (run on all nodes).
Check the disk device names with fdisk or a similar tool, then create the OSDs automatically with ceph-volume:
ceph-volume lvm create --data /dev/sda
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
Check the cluster status (run on any node).
Query with ceph osd tree; all OSD services in the cluster are up:
ceph osd tree
> ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
> -1       16.36908 root default
> -3        5.45636     host ceph01
>  0   hdd  1.81879         osd.0       up  1.00000 1.00000
>  1   hdd  1.81879         osd.1       up  1.00000 1.00000
>  2   hdd  1.81879         osd.2       up  1.00000 1.00000
> -5        5.45636     host ceph02
>  3   hdd  1.81879         osd.3       up  1.00000 1.00000
>  4   hdd  1.81879         osd.4       up  1.00000 1.00000
>  5   hdd  1.81879         osd.5       up  1.00000 1.00000
> -7        5.45636     host ceph03
>  6   hdd  1.81879         osd.6       up  1.00000 1.00000
>  7   hdd  1.81879         osd.7       up  1.00000 1.00000
>  8   hdd  1.81879         osd.8       up  1.00000 1.00000
Deploy the Ceph MGR service and enable the Dashboard
Install the ceph-mgr package (run on all nodes):
yum install -y ceph-mgr
Initialize and start the active MGR service (run on ceph01):
mkdir -p /var/lib/ceph/mgr/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph01.keyring --gen-key -n mgr.ceph01 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph01.keyring
ceph auth get-or-create mgr.ceph01 -o /var/lib/ceph/mgr/ceph-ceph01/keyring
" \% D9 _4 H3 M+ @8 _+ x1: E/ u; ?% K1 ~% {. X" h
2
8 v0 M7 U/ D, B3
; O% s$ S' G2 A7 B" w# P44 }% r# U: s- z$ {3 ?3 G" `0 z
5' h: ]  S" ^9 v3 y
systemctl start ceph-mgr@ceph01
systemctl enable ceph-mgr@ceph01
systemctl status ceph-mgr@ceph01
Initialize and start a standby MGR service (run on ceph02):
mkdir -p /var/lib/ceph/mgr/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph02.keyring --gen-key -n mgr.ceph02 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph02.keyring
ceph auth get-or-create mgr.ceph02 -o /var/lib/ceph/mgr/ceph-ceph02/keyring
" Z2 e: s( ?' m( ^+ q( l, Fsystemctl start ceph-mgr@ceph02
+ |+ N/ _; `9 u' k- z6 P/ L& msystemctl enable ceph-mgr@ceph02
! T' C+ P( u( O: ]* e6 p2 ~systemctl status ceph-mgr@ceph02
Initialize and start a standby MGR service (run on ceph03):
mkdir -p /var/lib/ceph/mgr/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph03.keyring --gen-key -n mgr.ceph03 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph03.keyring
ceph auth get-or-create mgr.ceph03 -o /var/lib/ceph/mgr/ceph-ceph03/keyring
systemctl start ceph-mgr@ceph03
systemctl enable ceph-mgr@ceph03
systemctl status ceph-mgr@ceph03
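The three per-node MGR blocks above differ only in the node name, so they can be generated from one function. A dry-run sketch that prints the per-node commands (node names are the ones used in this post; remove the `echo` wrappers to run the commands for real on each host):

```shell
# Dry-run helper: print the MGR bootstrap commands for one node.
print_mgr_setup() {
  node="$1"
  echo "mkdir -p /var/lib/ceph/mgr/ceph-${node}"
  echo "ceph-authtool --create-keyring /etc/ceph/ceph.mgr.${node}.keyring --gen-key -n mgr.${node} --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'"
  echo "ceph auth import -i /etc/ceph/ceph.mgr.${node}.keyring"
  echo "ceph auth get-or-create mgr.${node} -o /var/lib/ceph/mgr/ceph-${node}/keyring"
  echo "systemctl start ceph-mgr@${node}"
}

for n in ceph01 ceph02 ceph03; do
  print_mgr_setup "$n"
done
```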
Check the cluster status (run on any node).
Query with ceph -s; the services section now shows the mgr daemons, one active and two standbys.
ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr: ceph01(active, since 3w), standbys: ceph02, ceph03
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Enable Dashboard access (run on any node).
Enable the mgr dashboard module:
ceph mgr module enable dashboard
Generate and install a self-signed certificate:
ceph dashboard create-self-signed-cert
配置dashboard
  L1 e! v- H+ q. b  h! {; eceph config set mgr mgr/dashboard/server_addr 10.40.65.148
5 _/ `! ~  }6 \5 A) q- o* C9 k) yceph config set mgr mgr/dashboard/server_port 80808 e! }7 z5 ?1 x; S  S
ceph config set mgr mgr/dashboard/ssl_server_port 8443
4 g# `  y" u/ r1 F& D7 x/ X11 a2 B9 t; d1 \5 I3 K' j
2) [( N9 q- ?7 P3 I
3
Create a dashboard login user and password:
echo '123456' > password.txt
ceph dashboard ac-user-create admin administrator -i password.txt
Check how the service is accessed:
ceph mgr services
Access the Ceph Dashboard in a browser with username/password admin/123456:
https://10.40.65.148:8443