admin posted on 2023-7-2 08:26:45

A walkthrough of repairing a "1 pgs inconsistent; 1 scrub errors" error.

First, check cluster health to identify the problem PG:

# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 1.20b3 is active+clean+inconsistent, acting
1 scrub errors
Instruct Ceph to repair the inconsistent PG (note the command is `ceph pg repair`, with a space):

# ceph pg repair 1.20b3
instructing pg 1.20b3 on osd.117 to repair
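Before (or instead of) repairing blindly, it is worth seeing exactly which objects the scrub flagged. On Jewel and later (this cluster has the `require_jewel_osds` flag set), `rados list-inconsistent-obj` reports the per-shard errors; a fresh deep scrub can be run first if the report may be stale. A sketch against the same PG:

```shell
# Re-run a deep scrub so the inconsistency report is current
ceph pg deep-scrub 1.20b3

# List the objects the scrub flagged, with per-shard error details
rados list-inconsistent-obj 1.20b3 --format=json-pretty
```

On Jewel-era clusters, `ceph pg repair` can overwrite replica shards with the primary's copy, so it is worth confirming from this output that the bad shard is not on the primary before repairing.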
While the repair runs, the PG reports the `repair` state:

# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 pgs repair; 1 scrub errors
pg 1.20b3 is active+clean+scrubbing+deep+inconsistent+repair, acting
1 scrub errors
A full status shows the repair in flight:

# ceph -s
    cluster 70d27aec-742e-4a95-b000-cf37ebba35d0
   health HEALTH_ERR
            1 pgs inconsistent
            1 pgs repair
            1 scrub errors
    monmap e3: 3 mons at {compute1=12.24.14.5:6789/0,compute2=12.24.14.6:6789/0,compute3=12.24.14.7:6789/0}
            election epoch 310, quorum 0,1,2 compute1,compute2,compute3
   osdmap e26007: 131 osds: 128 up, 128 in
            flags sortbitwise,require_jewel_osds
      pgmap v157378710: 10240 pgs, 1 pools, 102 TB data, 26971 kobjects
            306 TB used, 159 TB / 465 TB avail
               10234 active+clean
                   3 active+clean+scrubbing+deep
                   2 active+clean+scrubbing
                   1 active+clean+scrubbing+deep+inconsistent+repair
client io 1661 kB/s rd, 51656 kB/s wr, 1616 op/s rd, 4889 op/s wr
Polling the status while the repair runs:

# ceph -s
    cluster 70d27aec-742e-4a95-b000-cf37ebba35d0
   health HEALTH_ERR
            1 pgs inconsistent
            1 pgs repair
            1 scrub errors
    monmap e3: 3 mons at {compute1=12.24.14.5:6789/0,compute2=12.24.14.6:6789/0,compute3=12.24.14.7:6789/0}
            election epoch 310, quorum 0,1,2 compute1,compute2,compute3
   osdmap e26007: 131 osds: 128 up, 128 in
            flags sortbitwise,require_jewel_osds
      pgmap v157378813: 10240 pgs, 1 pools, 102 TB data, 26971 kobjects
            306 TB used, 159 TB / 465 TB avail
               10233 active+clean
                   5 active+clean+scrubbing
                   1 active+clean+scrubbing+deep+inconsistent+repair
                   1 active+clean+scrubbing+deep
client io 143 kB/s rd, 160 MB/s wr, 1207 op/s rd, 3519 op/s wr
# ceph -s
    cluster 70d27aec-742e-4a95-b000-cf37ebba35d0
   health HEALTH_ERR
            1 pgs inconsistent
            1 pgs repair
            1 scrub errors
    monmap e3: 3 mons at {compute1=12.24.14.5:6789/0,compute2=12.24.14.6:6789/0,compute3=12.24.14.7:6789/0}
            election epoch 310, quorum 0,1,2 compute1,compute2,compute3
   osdmap e26007: 131 osds: 128 up, 128 in
            flags sortbitwise,require_jewel_osds
      pgmap v157378815: 10240 pgs, 1 pools, 102 TB data, 26971 kobjects
            306 TB used, 159 TB / 465 TB avail
               10234 active+clean
                   4 active+clean+scrubbing
                   1 active+clean+scrubbing+deep+inconsistent+repair
                   1 active+clean+scrubbing+deep
client io 5790 kB/s rd, 189 MB/s wr, 2338 op/s rd, 4565 op/s wr
# ceph -s
    cluster 70d27aec-742e-4a95-b000-cf37ebba35d0
   health HEALTH_ERR
            1 pgs inconsistent
            1 pgs repair
            1 scrub errors
   monmap e3: 3 mons at {compute1=12.24.14.5:6789/0,compute2=12.24.14.6:6789/0,compute3=12.24.14.7:6789/0}
            election epoch 310, quorum 0,1,2 compute1,compute2,compute3
   osdmap e26007: 131 osds: 128 up, 128 in
            flags sortbitwise,require_jewel_osds
      pgmap v157378817: 10240 pgs, 1 pools, 102 TB data, 26971 kobjects
            306 TB used, 159 TB / 465 TB avail
               10234 active+clean
                   4 active+clean+scrubbing
                   1 active+clean+scrubbing+deep+inconsistent+repair
                   1 active+clean+scrubbing+deep
client io 1274 kB/s rd, 117 MB/s wr, 1951 op/s rd, 4588 op/s wr
Eventually the repair completes and the cluster returns to HEALTH_OK:

# ceph -s
    cluster 70d27aec-742e-4a95-b000-cf37ebba35d0
   health HEALTH_OK
   monmap e3: 3 mons at {compute1=12.24.14.5:6789/0,compute2=12.24.14.6:6789/0,compute3=12.24.14.7:6789/0}
            election epoch 310, quorum 0,1,2 compute1,compute2,compute3
   osdmap e26007: 131 osds: 128 up, 128 in
            flags sortbitwise,require_jewel_osds
      pgmap v157378917: 10240 pgs, 1 pools, 102 TB data, 26971 kobjects
            306 TB used, 159 TB / 465 TB avail
               10236 active+clean
                   3 active+clean+scrubbing
                   1 active+clean+scrubbing+deep
client io 1746 kB/s rd, 127 MB/s wr, 2117 op/s rd, 5036 op/s wr
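Repairing the PG clears the error, but the root cause is often a failing sector on one of the acting OSDs' disks. The log of the acting primary (osd.117 in this case) usually records what the deep scrub found. Assuming the default log location (adjust for your deployment), a sketch:

```shell
# Search the primary OSD's log for scrub/repair error details
# (default log path assumed; adjust for your deployment)
grep -Hn 'ERR' /var/log/ceph/ceph-osd.117.log

# Check the kernel log on that host for underlying disk read errors
dmesg | grep -i 'error' | tail
```

If the disk is throwing read errors, plan to replace it; otherwise the same PG is likely to turn up inconsistent again on a future deep scrub.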

