- slow ops, oldest one blocked for 15468 sec, ... has slow ops: how to fix (0 replies)
- 2 scrub errors; Possible data damage: resolving this Ceph distributed storage error (1 reply)
- Ceph distributed storage: the bucket object lifecycle reclamation mechanism (2 replies)
- HEALTH_WARN OSD count 2 < osd_pool_default_size 3 (0 replies)
- ceph osd crush rule: creating a rule role (2 replies)
- ceph osd operations: deleting root buckets and default values (0 replies)
- ceph osd rule: deletion and inspection workflow (0 replies)
- ceph osd crush rm ssd-smalldata fails with Error EBUSY: (16) Device or resource busy (0 replies)
- Notes on root and bucket operations in the Ceph osd tree (0 replies)
- HEALTH_WARN 1 daemons have recently crashed: resolution walkthrough (0 replies)
- Ceph basics: using object storage (1 reply)
- Common commands for the Ceph object storage RGW (0 replies)
- How to deploy Ceph object storage (1 reply)
- Configuring, installing, and using object storage with ceph-deploy (1 reply)
- ceph -s: [errno 2] error connecting to the cluster (0 replies)
- The purpose of rbd_flatten_volume_from_snapshot in Ceph (2 replies)
- guestmount: no operating system was found on this disk (1 reply)
- Adding the OpenStack apt-get repository on Ubuntu (1 reply)
- Troubleshooting a failed rbd snap image deletion: SnapshotUnprotectReq rbd: listing children failed uest (0 replies)
- Disabling exclusive-lock on newly created volumes while keeping layering on Ceph-created images (5 replies)
- Images in OpenStack with Ceph having multiple features (rbd feature disable/enable) (0 replies)
- Troubleshooting a Ceph storage sync issue: noout,nobackfill,norecover flag(s) set, Degraded data r (1 reply)
- Error: Package: python2-qpid-proton-0.26.0-2.el7.x86_64 (centos-openstack-train) (2 replies)
- How to resolve a snapshot that cannot be deleted (0 replies)
- Installing and configuring the Ceph iSCSI gateway (2 replies)
- Updating the yum repository for OpenStack Train (0 replies)
- Viewing the osdmap in Ceph (0 replies)
- ceph rados commands for querying objects (0 replies)
- Differences in data placement between Ceph BlueStore and FileStore (2 replies)
- Ceph's ten-year retrospective: are file systems a suitable backend for a distributed file system? (1 reply)
- Creating and starting a BlueStore OSD in Ceph (4 replies)
- Converting a Ceph distributed storage OSD from FileStore to BlueStore (1 reply)
- Disaster-recovery data synchronization with Ceph rbd-mirror (0 replies)
- Ceph advanced topics: RBD block device trash, snapshots, and clones (1 reply)
- Reduced data availability: 2 pgs inactive (1 reply)
- Ceph storage benchmarking tools explained (2 replies)
- ceph rados commands and wiping all data from a pool (1 reply)
- Scanning PG volumes (0 replies)
- Ceph tuning and operations notes (0 replies)
- Testing Ceph availability and stress testing Ceph (1 reply)
- Error EINVAL: Please specify the file containing the password/secret when adding a Ceph user (1 reply)
- Checking the default value with ceph config help mon_max_pg_per_osd (2 replies)
- Disabling and enabling OSD data-balancing mode in Ceph (0 replies)
- Ceph monitoring: adding monitoring to a Ceph cluster (1 reply)
- Commands for removing a mon node from Ceph storage (2 replies)
- Handling OSDs going down while services stay normal: HEALTH_WARN 2 osds down; Reduced data availability: 29 pgs (3 replies)
- HEALTH_WARN Degraded data redundancy ... has slow ops (1 reply)
- Notes on a ceph rbd deletion: Removing image: 0% complete...failed. (2 replies)
- Calculating the total PG count in a Ceph cluster (0 replies)
- ceph health detail HEALTH_WARN 1 pools have many more objects per pg than averag (0 replies)