- Ceph Nautilus: exporting PGs for backup and importing them for recovery with ceph-objectstore-tool (0 replies)
- Querying the PG IDs on an OSD with ceph pg ls-by-osd (0 replies)
- Methods and steps for handling Ceph reporting pg incomplete (1 reply)
- Exporting/backing up and importing PGs on an OSD; finding the OSD a historical PG lived on and the OSDs holding its other replica PGs (0 replies)
- The maximum number of recovery operations each PG can start, plus an introduction to Ceph recovery rates (1 reply)
- Adding SSL protocol support to zkServer (1 reply)
- A summary of zkServer startup failures (0 replies)
- Resolving the ceph health warning HEALTH_WARN 15 daemons have recently crashed (1 reply)
- Resolving the Ceph status-check warning HEALTH_WARN 1/5 mons down (2 replies)
- HEALTH_WARN Reduced data availability: 23 pgs inactive; 23 pgs not deep-scrubbed (2 replies)
- A complete walkthrough of resolving Reduced data availability: * pgs inactive; 24 pgs not deep-scrubbed in time (2 replies)
- Reduced data availability: 20 pgs inactive; Degraded data redundancy: 256 pgs (1 reply)
- Setting the Ceph backfills value and recovery speed (3 replies)
- Resolving the Ceph error HEALTH_ERR 2 scrub errors; Possible data damage: 2 pgs inconsistent (0 replies)
- Manually recovering an accidentally deleted mon, completing the restore with mon create-initial (0 replies)
- Changing the mon IP address in Ceph (1 reply)
- The Ceph CRUSH algorithm and a roundup of related operations tasks (0 replies)
- HEALTH_WARN Reduced data availability: 100.000% pgs unknown (0 replies)
- A summary of Ceph operations and maintenance (0 replies)
- Removing the LVM logical volume mappings created for OSD disks with dmsetup (Ceph) (1 reply)
- health: HEALTH_WARN Reduced data availability 100.000% pgs unknown (4 replies)
- SSD TRIM: principles and practice (0 replies)
- Handling and recovering from HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent (0 replies)
- ERROR cinder.service [req-5a2a750d-da1e-49f2-9067-a67db8a9d953 - - - - -] Except (1 reply)
- ceph-volume lvm list can pinpoint a disk's details, even on clusters deployed with ceph-deploy (2 replies)
- ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or directory (2 replies)
- ceph health: Module 'volumes' has failed dependency: No module named enum (2 replies)
- ceph -s showing health: HEALTH_ERR full ratio(s) out of order (0 replies)
- A summary of the manual deployment steps for Ceph 14.2 (2 replies)
- ceph health detail reporting 1 large omap objects (2 replies)
- Ceph error: log_channel(cluster) log [ERR] bad backtrace on directory inode (1 reply)
- Checking CephFS client mounts and evictions with ceph tell mds.monxx client ls (0 replies)
- Errors installing RGW with ceph-deploy (0 replies)
- Install and Configure a Simple Ceph Object Gateway (0 replies)
- The Ceph RGW gateway and the distributed file system (0 replies)
- Telling which disk is the data disk and which is the journal disk with ceph device ls (2 replies)
- Controlling Ceph recovery speed (4 replies)
- With few OSDs and few PGs the default PG count is very low; mon_pg_warn_max_per_osd can be raised with the following command (0 replies)
- Recovering deleted metadata after Ceph reports 1 filesystem is degraded (1 reply)
- Disaster recovery for Ceph file storage (0 replies)
- Notes from a simulated test: using cephfs reset to recover from 1 filesystem is degraded (1 reply)
- Handling stale PGs in a Ceph cluster (0 replies)
- Notes on handling PG unfound objects in a Ceph cluster (0 replies)
- Creating a CephFS file system, and the recovery process for 1 filesystem is degraded (0 replies)
- An OpenStack image already in use needed configuration changes but could not be deleted: exporting it with rbd export, editing the image contents, and restoring it with rbd import... (3 replies)
- Resolving HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent (0 replies)
- Notes from a simulated test of cephfs reset recovery (0 replies)
- Simulated Ceph MDS data recovery (1 reply)
- Adjusting PG settings (1 reply)
- Notes on the Ceph PG repair process (0 replies)