
Distributed Storage Systems: Enabling Ceph Cluster Access Interfaces

In the previous post we used the ceph-deploy tool to bring up the underlying Ceph storage cluster, RADOS; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/16724473.html. Today let's talk about the access interfaces to a Ceph cluster.
We know that RADOS is Ceph's underlying storage cluster. After a RADOS cluster is deployed, only the RBD (RADOS Block Device) interface exists by default, and even it is not immediately usable. This is because when object data is read from or written to a RADOS cluster, the client locates the placement group (PG) for the object through the storage pool, the PG maps to a set of OSDs, and the data is then written through the librados API to the disk devices backing those OSDs.
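The object-to-PG step described above can be sketched as hash-then-modulo. This is only a conceptual stand-in: Ceph actually uses the rjenkins hash and the CRUSH algorithm, not cksum, and the object name and pg_num below are illustrative.

```shell
# Conceptual sketch of object -> PG mapping (NOT the real CRUSH code):
# the object name is hashed, and the hash modulo pg_num selects the PG;
# CRUSH then maps that PG onto a set of OSDs.
obj="rbd_data.d4466b8b4567.0000000000000000"   # illustrative object name
pg_num=64                                      # pg count of the pool
hash=$(printf '%s' "$obj" | cksum | awk '{print $1}')  # stand-in for rjenkins
pg=$(( hash % pg_num ))
echo "object $obj -> PG $pg of $pg_num"
```

Whatever the hash, the modulo guarantees the PG id falls in the range 0 to pg_num-1, which is why resizing pg_num reshuffles object placement.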
Enabling the Ceph block device interface (RBD)
對(duì)于RBD接口來說,客戶端基于librbd即可將RADOS存儲(chǔ)集群用作塊設(shè)備,不過,用于rbd的存儲(chǔ)池需要事先啟用rbd功能并進(jìn)行初始化;
1、創(chuàng)建RBD存儲(chǔ)池
[root@ceph-admin ~]# ceph osd pool create rbdpool 64 64
pool 'rbdpool' created
[root@ceph-admin ~]# ceph osd pool ls
testpool
rbdpool
[root@ceph-admin ~]#
2. Enable the rbd application on the pool
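The two 64s above are pg_num and pgp_num. A common rule of thumb for sizing them is (number of OSDs × 100) / replica count, rounded up to the next power of two; the OSD and replica counts below are assumptions for illustration, not taken from this cluster.

```shell
# Rule-of-thumb PG sizing: (OSDs * 100) / replicas, rounded up to the
# next power of two. The counts here are illustrative.
osds=9
replicas=3
raw=$(( osds * 100 / replicas ))   # 300 for these counts
pg_num=1
while [ "$pg_num" -lt "$raw" ]; do pg_num=$(( pg_num * 2 )); done
echo "suggested pg_num: $pg_num"
```

For 9 OSDs at 3 replicas this suggests 512; the 64 used in this small lab cluster is simply a modest default.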
View the help for the ceph osd pool application commands:
Monitor commands:
=================
osd pool application disable <poolname> <app> {--yes-i-really-mean-it}
        disables use of an application <app> on pool <poolname>
osd pool application enable <poolname> <app> {--yes-i-really-mean-it}
        enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
Tip: ceph osd pool application disable disables the corresponding interface on a pool, and enable turns it on; the <app> argument only accepts cephfs, rbd, and rgw.
[root@ceph-admin ~]# ceph osd pool application enable rbdpool rbd
enabled application 'rbd' on pool 'rbdpool'
[root@ceph-admin ~]#
3. Initialize the RBD storage pool
[root@ceph-admin ~]# rbd pool init -p rbdpool
[root@ceph-admin ~]#
Verify that rbdpool was initialized successfully and that the rbd application is enabled:


[Figure: output of ceph osd pool ls detail showing rbdpool with application rbd enabled]
Tip: the ceph osd pool ls detail command shows detailed information for each storage pool. At this point the rbd pool is fully initialized. Note, however, that an rbd pool cannot be used as a block device directly; you must first create images in it as needed, and it is the image that is used as the block device.
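The verification can also be scripted by checking the pool's detail line for the application tag. On a live cluster the line would come from ceph osd pool ls detail; since we can't reach the cluster here, the sample line below is illustrative of what that command prints for rbdpool.

```shell
# On a live cluster:  line=$(ceph osd pool ls detail | grep "'rbdpool'")
# Here a captured sample line stands in for the real output.
line="pool 2 'rbdpool' replicated size 3 min_size 2 crush_rule 0 pg_num 64 pgp_num 64 application rbd"
case "$line" in
  *"application rbd"*) status="rbd enabled" ;;
  *)                   status="rbd NOT enabled" ;;
esac
echo "$status"
```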
4. Create an image in the rbd pool
[root@ceph-admin ~]# rbd create --size 5G rbdpool/rbd-img01
[root@ceph-admin ~]# rbd ls rbdpool
rbd-img01
[root@ceph-admin ~]#
View information about the created image:
[root@ceph-admin ~]# rbd info rbdpool/rbd-img01
rbd image 'rbd-img01':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        id: d4466b8b4567
        block_name_prefix: rbd_data.d4466b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sun Sep 25 11:25:01 2022
[root@ceph-admin ~]#
Tip: you can see that the image supports layering (cloning), exclusive locking, object maps, and other features. A 5G disk image has now been created; a client can use the kernel's RBD driver to expose this image as a disk device and use it.
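The "size 5 GiB in 1280 objects" line follows directly from the order field: RBD stripes an image into RADOS objects of 2^order bytes each, so the numbers in the output above can be checked with a little arithmetic.

```shell
# With order 22, each backing RADOS object is 2^22 bytes = 4 MiB,
# so a 5 GiB image is striped across 5 GiB / 4 MiB = 1280 objects.
order=22
obj_size=$(( 1 << order ))                 # 4194304 bytes (4 MiB)
size_bytes=$(( 5 * 1024 * 1024 * 1024 ))   # 5 GiB
objects=$(( size_bytes / obj_size ))
echo "$objects objects of $(( obj_size / 1024 / 1024 )) MiB each"
```

The rbd_data.d4466b8b4567 prefix in the output is what each of those backing objects' names begins with.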
Enabling the radosgw interface
RGW is not a mandatory interface; RGW instances need to be deployed only when the S3- and Swift-compatible RESTful interface is required. The radosgw interface is served by the radosgw process (the ceph-radosgw service), so to enable it we must run that process on the RADOS cluster.
1. Deploy ceph-radosgw
[root@ceph-admin ~]# ceph-deploy rgw --help
usage: ceph-deploy rgw [-h] {create} ...

Ceph RGW daemon management

positional arguments:
  {create}
    create      Create an RGW instance

optional arguments:
  -h, --help    show this help message and exit
[root@ceph-admin ~]#
Tip: ceph-deploy rgw has only one subcommand, create, which is used to create an RGW instance.
[root@ceph-admin ~]# su - cephadm
Last login: Sat Sep 24 23:16:00 CST 2022 on pts/0
[cephadm@ceph-admin ~]$ cd ceph-cluster/
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy rgw create ceph-mon01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy rgw create ceph-mon01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph-mon01', 'rgw.ceph-mon01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa658caff80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7fa6592f5140>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-mon01:rgw.ceph-mon01
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-mon01
[ceph-mon01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon01][WARNIN] rgw keyring does not exist yet, creating one
[ceph-mon01][DEBUG ] create a keyring file
[ceph-mon01][DEBUG ] create path recursively if it doesn't exist
[ceph-mon01][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-mon01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-mon01/keyring
[ceph-mon01][INFO  ] Running command: sudo systemctl enable ceph-radosgw@rgw.ceph-mon01
[ceph-mon01][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-mon01.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph-mon01][INFO  ] Running command: sudo systemctl start ceph-radosgw@rgw.ceph-mon01
[ceph-mon01][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph-mon01 and default port 7480
[cephadm@ceph-admin ceph-cluster]$
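With the gateway running on port 7480, an anonymous HTTP GET makes a quick smoke test. The live command would be curl http://ceph-mon01:7480/; since the cluster is not reachable from here, the response body below is an illustrative sample of the S3 ListAllMyBuckets XML that radosgw returns to unauthenticated requests.

```shell
# Live check (requires the cluster):  curl http://ceph-mon01:7480/
# radosgw answers anonymous GETs with an S3 ListAllMyBucketsResult
# document; this sample stands in for the real response body.
resp='<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>'
case "$resp" in
  *ListAllMyBucketsResult*) msg="rgw is answering S3 requests" ;;
  *)                        msg="unexpected response" ;;
esac
echo "$msg"
```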

經(jīng)驗(yàn)總結(jié)擴(kuò)展閱讀