1. Ceph Cluster Operational Status

Cluster health states: HEALTH_OK, HEALTH_WARN, HEALTH_ERR

1.1 Common status query commands

[root@ceph2 ~]#    ceph health detail

HEALTH_OK

[root@ceph2 ~]# ceph -s

cluster:
  id: 35a91e48--4e96-a7ee-980ab989d20d
  health: HEALTH_OK

services:
  mon: daemons, quorum ceph2,ceph3,ceph4
  mgr: ceph4(active), standbys: ceph2, ceph3
  mds: cephfs-// up {=ceph2=up:active}, up:standby
  osd: osds: up, in; remapped pgs
  rbd-mirror: daemon active

data:
  pools: pools, pgs
  objects: objects, MB
  usage: MB used, GB / GB avail
  pgs: active+clean
       active+clean+remapped

ceph -w shows the same output, but it stays in interactive (watch) mode and keeps printing updates as the cluster state changes.
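
Besides ceph health and ceph -s / ceph -w, a few other read-only queries are handy when checking cluster state (supplementary examples, not part of the original transcript; output omitted):

[root@ceph2 ~]# ceph df          # cluster-wide and per-pool capacity usage
[root@ceph2 ~]# ceph osd stat    # one-line summary of OSDs that are up/in
[root@ceph2 ~]# ceph osd tree    # OSDs laid out in the CRUSH hierarchy
[root@ceph2 ~]# ceph mon stat    # monitor quorum summary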

1.2 Cluster flags

noup: when an OSD starts, it normally marks itself up on the MONs. With this flag set, a starting OSD is not automatically marked up.

nodown: when an OSD stops, the MONs normally mark it down. With this flag set, the MONs will not mark stopped OSDs down. Setting noup and nodown together helps ride out network flapping.

noout: with this flag set, the MONs will not mark any OSD out of the CRUSH map. Set it before doing OSD maintenance so that CRUSH does not automatically rebalance data while the OSD is stopped; clear the flag once the OSD has been restarted.

noin: with this flag set, OSDs are not automatically marked in, which keeps data from being assigned to them.

norecover: with this flag set, all cluster recovery operations are disabled. Useful during maintenance and planned downtime (a combined maintenance example follows this list).

nobackfill: disables data backfill.

noscrub: disables scrubbing. Scrubbing a PG briefly affects OSD performance; on a low-bandwidth cluster, an OSD that responds too slowly while scrubbing may get marked down, and this flag can be used to prevent that.

nodeep-scrub: disables deep scrubbing.

norebalance: disables data rebalancing. Useful during cluster maintenance or downtime.

pause: with this flag set, the cluster stops serving client reads and writes; OSD self-checks are not affected.

full: marks the cluster as full, so every further write is rejected while reads still succeed.
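
As a sketch of how several of these flags are often combined around planned maintenance (the exact combination is a judgment call and is not prescribed above):

[root@ceph2 ~]# ceph osd set noout        # keep stopped OSDs from being marked out
[root@ceph2 ~]# ceph osd set norecover    # suspend recovery
[root@ceph2 ~]# ceph osd set nobackfill   # suspend backfill
[root@ceph2 ~]# ceph osd set norebalance  # suspend rebalancing
# ... perform the maintenance, then clear the flags ...
[root@ceph2 ~]# ceph osd unset norebalance
[root@ceph2 ~]# ceph osd unset nobackfill
[root@ceph2 ~]# ceph osd unset norecover
[root@ceph2 ~]# ceph osd unset noout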

1.3 Working with cluster flags

These flags can only be set for the cluster as a whole, not for an individual OSD.

Set the noout flag:

[root@ceph2 ~]# ceph osd set noout

noout is set

[root@ceph2 ~]# ceph -s

cluster:
  id: 35a91e48--4e96-a7ee-980ab989d20d
  health: HEALTH_WARN
          noout flag(s) set

services:
  mon: daemons, quorum ceph2,ceph3,ceph4
  mgr: ceph4(active), standbys: ceph2, ceph3
  mds: cephfs-// up {=ceph2=up:active}, up:standby
  osd: osds: up, in; remapped pgs
       flags noout
  rbd-mirror: daemon active

data:
  pools: pools, pgs
  objects: objects, MB
  usage: MB used, GB / GB avail
  pgs: active+clean
       active+clean+remapped

io:
  client: B/s rd, op/s rd, op/s wr

[root@ceph2 ~]# ceph osd unset noout

noout is unset

[root@ceph2 ~]# ceph -s

cluster:
  id: 35a91e48--4e96-a7ee-980ab989d20d
  health: HEALTH_OK

services:
  mon: daemons, quorum ceph2,ceph3,ceph4
  mgr: ceph4(active), standbys: ceph2, ceph3
  mds: cephfs-// up {=ceph2=up:active}, up:standby
  osd: osds: up, in; remapped pgs
  rbd-mirror: daemon active

data:
  pools: pools, pgs
  objects: objects, MB
  usage: MB used, GB / GB avail
  pgs: active+clean
       active+clean+remapped

io:
  client: B/s rd, B/s wr, op/s rd, op/s wr

[root@ceph2 ~]# ceph osd set full

full is set

[root@ceph2 ~]# ceph -s

cluster:
  id: 35a91e48--4e96-a7ee-980ab989d20d
  health: HEALTH_WARN
          full flag(s) set

services:
  mon: daemons, quorum ceph2,ceph3,ceph4
  mgr: ceph4(active), standbys: ceph2, ceph3
  mds: cephfs-// up {=ceph2=up:active}, up:standby
  osd: osds: up, in; remapped pgs
       flags full
  rbd-mirror: daemon active

data:
  pools: pools, pgs
  objects: objects, MB
  usage: MB used, GB / GB avail
  pgs: active+clean
       active+clean+remapped

io:
  client: B/s rd, B/s wr, op/s rd, op/s wr

[root@ceph2 ~]# rados -p ssdpool put testfull /etc/ceph/ceph.conf

-- ::14.250208 7f6500913e40  client..objecter FULL, paused modify 0x55d690a412b0 tid 

[root@ceph2 ~]# ceph osd unset full

full is unset

[root@ceph2 ~]# ceph -s

cluster:
  id: 35a91e48--4e96-a7ee-980ab989d20d
  health: HEALTH_OK

services:
  mon: daemons, quorum ceph2,ceph3,ceph4
  mgr: ceph4(active), standbys: ceph2, ceph3
  mds: cephfs-// up {=ceph2=up:active}, up:standby
  osd: osds: up, in; remapped pgs
  rbd-mirror: daemon active

data:
  pools: pools, pgs
  objects: objects, MB
  usage: MB used, GB / GB avail
  pgs: active+clean
       active+clean+remapped

io:
  client: B/s rd, op/s rd, op/s wr

[root@ceph2 ~]# rados -p ssdpool put testfull /etc/ceph/ceph.conf

[root@ceph2 ~]# rados -p ssdpool ls

testfull
test
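
To double-check which cluster-wide flags are currently set, the flags line of the OSD map can also be inspected (a supplementary example, not from the original transcript):

[root@ceph2 ~]# ceph osd dump | grep flags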

2. Restricting Pool Configuration Changes

2.1 Main settings

Prevent pools from being deleted:

osd_pool_default_flag_nodelete

Prevent a pool's pg_num and pgp_num from being changed:

osd_pool_default_flag_nopgchange

Prevent a pool's size and min_size from being changed:

osd_pool_default_flag_nosizechange

2.2 Hands-on test

[root@ceph2 ~]# ceph daemon osd.0  config show|grep osd_pool_default_flag

  "osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "false",
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
"osd_pool_default_flags": "",

[root@ceph2 ~]# ceph tell osd.* injectargs --osd_pool_default_flag_nodelete true

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

  "osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "true",
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
"osd_pool_default_flags": "",

[root@ceph2 ~]# ceph osd pool delete ssdpool  ssdpool yes-i-really-really-mean-it

Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool ssdpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.   # the pool cannot be deleted

Change it back to false:

[root@ceph2 ~]# ceph tell osd.* injectargs --osd_pool_default_flag_nodelete false

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

"osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "true", #依然显示为ture
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
"osd_pool_default_flags": ""

2.3 Changing it via the configuration file

On ceph1, set:

osd_pool_default_flag_nodelete false
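
For reference, a minimal sketch of the resulting edit on ceph1; placing the option in the [global] section of /etc/ceph/ceph.conf is my assumption, not stated in the original:

# /etc/ceph/ceph.conf on ceph1 (section placement is an assumption)
[global]
osd_pool_default_flag_nodelete = false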

[root@ceph1 ~]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

[root@ceph1 ~]# ansible mons -m shell -a ' systemctl restart ceph-mon.target'

[root@ceph1 ~]# ansible mons -m shell -a ' systemctl restart ceph-osd.target'

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

"osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "false",
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
"osd_pool_default_flags": "",

Delete ssdpool:

[root@ceph2 ~]# ceph osd pool delete ssdpool ssdpool --yes-i-really-really-mean-it

Deleted successfully!

3. Understanding PGs

3.1 PG states

Creating: the PG is being created. This usually appears when a pool is created or a pool's PG count is changed.

Active: the PG is active and can serve reads and writes normally.

Clean: every object in the PG has been replicated the required number of times.

Down: the PG is offline.

Replay: after an OSD failure, the PG is waiting for clients to replay their operations.

Splitting: the PG is being split, which normally happens after a pool's PG count is increased: existing PGs are split and some of their objects move to the new PGs.

Scrubbing: the PG is being checked for inconsistencies.

Degraded: some objects in the PG do not yet have the required number of replicas.

Inconsistent: the PG's replicas are inconsistent with each other; ceph pg repair can be used to fix the inconsistency.

Peering: the process, driven by the primary OSD, of getting all OSDs that store copies of the PG to agree on the state of every object and all metadata in the PG. Only after peering completes will the primary OSD accept client writes.

Repair: the PG is being checked, and any inconsistencies that are found will be repaired where possible.

Recovering: the PG is migrating or synchronizing objects and replicas, typically as part of the rebalancing that follows an OSD going down.

Backfill: when a new OSD joins the cluster, CRUSH reassigns part of the existing PGs to it; copying that data onto the new OSD is called backfilling.

Backfill-wait: the PG is waiting for its backfill to start.

Incomplete: the PG log is missing data for a critical interval; this happens when an OSD that holds information the PG needs is unavailable.

Stale: the PG is in an unknown state: the monitors have not received an update for it since the PG map changed. This state also appears right after cluster startup, until peering finishes.

Remapped: when a PG's acting set changes, data has to migrate from the old acting set to the new one. The new primary OSD needs some time before it can serve requests, so the old primary keeps serving until the migration is complete; during that period the PG is shown as Remapped. (A couple of commands for inspecting PG states are sketched below.)
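
To see which states the cluster's PGs are currently in (supplementary examples, not taken from the original output):

[root@ceph2 ~]# ceph pg stat                             # one-line summary of PG states
[root@ceph2 ~]# ceph pg dump pgs_brief | grep degraded   # list PGs whose state includes a given keyword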

3.2 Mapping objects to PGs

[root@ceph2 ~]# ceph osd map test test

osdmap e288 pool 'test' () object 'test' -> pg .40e8aab5 (16.15) -> up ([,], p5) acting ([,,], p5)
The test object lives in PG 16.15 and is stored on three OSDs, with osd.5 as the primary OSD.

An OSD that is up stays in the PG's up set and acting set. As soon as the primary OSD goes down, it is removed first from the up set and then from the acting set, and a secondary OSD is promoted to primary. Ceph then recovers the failed OSD's PGs onto a new OSD and adds that OSD to the up and acting sets to keep the cluster highly available.
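
To dig further into a single PG's placement and peering state, the PG id reported above (16.15) can be queried directly (a supplementary sketch, not part of the original transcript):

[root@ceph2 ~]# ceph pg map 16.15     # osdmap epoch plus the PG's up and acting sets
[root@ceph2 ~]# ceph pg 16.15 query   # detailed peering/recovery state of the PG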

3.3 Managing PGs stuck in a bad state

If a PG stays in one of the following states for a long time (mon_pg_stuck_threshold, 300 s by default), the MONs mark that PG as stuck:

inactive: the PG has a peering problem

unclean: the PG ran into problems while recovering from a failure

stale: no OSD is reporting for the PG; most likely all of its OSDs are down and out

undersized: the PG does not have enough OSDs to hold the number of replicas it should have

By default Ceph carries out recovery automatically, but if automatic recovery does not succeed, the cluster stays in HEALTH_WARN or HEALTH_ERR.

If every OSD of a particular PG is down and out, the PG is marked stale. To resolve this, one of those OSDs has to come back with a usable copy of the PG; otherwise the PG remains unavailable.

Ceph can declare an OSD or a PG lost, which effectively means accepting data loss.

Note that an OSD cannot run without its journal; if the journal is lost, the OSD stops.

3.4 Operating on stuck PGs

Check which PGs are in a stuck state:

[root@ceph2 ceph]# ceph pg dump_stuck

ok
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
17.5 stale+peering [,] [,]
17.4 stale+peering [,] [,]
17.3 stale+peering [,] [,]
17.2 stale+peering [,] [,]
17.1 stale+peering [,] [,]
17.0 stale+peering [,] [,]
17.1f stale+peering [,] [,]
17.1e stale+peering [,] [,]
17.1d stale+peering [,] [,]
17.1c stale+peering [,] [,]
17.6 stale+peering [,] [,]
17.11 stale+peering [,] [,]
17.7 stale+peering [,] [,]
17.8 stale+peering [,] [,]
17.13 stale+peering [,] [,]
17.9 stale+peering [,] [,]
17.10 stale+peering [,] [,]
17.a stale+peering [,] [,]
17.15 stale+peering [,] [,]
17.b stale+peering [,] [,]
17.12 stale+peering [,] [,]
17.c stale+peering [,] [,]
17.17 stale+peering [,] [,]
17.d stale+peering [,] [,]
17.14 stale+peering [,] [,]
17.e stale+peering [,] [,]
17.19 stale+peering [,] [,]
17.f stale+peering [,] [,]
17.16 stale+peering [,] [,]
17.18 stale+peering [,] [,]
17.1a stale+peering [,] [,]
17.1b stale+peering [,] [,]
[root@ceph2 ceph]# ceph osd blocked-by
osd num_blocked

Check which OSDs are blocking PGs that are stuck in the peering state:

ceph osd blocked-by

Check the status of a specific PG:

ceph pg dump | grep <pgid>

Declare a PG (its unfound objects) lost:

ceph pg <pgid> mark_unfound_lost revert|delete

Declare an OSD lost (the OSD must be down and out):

ceph osd lost <osdid> --yes-i-really-mean-it
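
Putting the commands above together, a hedged sketch of dealing with a stale PG whose OSDs are gone for good; the PG id 17.5 and the OSD id 0 are illustrative only, and the last two commands discard data that cannot be recovered:

[root@ceph2 ~]# ceph pg dump_stuck stale                  # find PGs stuck in the stale state
[root@ceph2 ~]# ceph pg dump | grep 17.5                  # inspect one of them
[root@ceph2 ~]# ceph osd lost 0 --yes-i-really-mean-it    # declare the failed OSD lost (must be down and out)
[root@ceph2 ~]# ceph pg 17.5 mark_unfound_lost revert     # give up on its unfound objects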


Author's note: the content of this article comes mainly from teacher Yan Wei of Yutian Education, and I verified all of the operations in my own lab. Readers who wish to repost it should first obtain permission from Yutian Education (http://www.yutianedu.com/) or from the teacher himself (https://www.cnblogs.com/breezey/). Thank you!
