Deploying a Ceph 14.2.22 (Nautilus) binary-package cluster with ceph-deploy


In many production environments, Ceph is not run inside containers. Instead, dedicated servers are set aside to form a Ceph cluster that runs nothing but Ceph itself.

This post uses ceph-deploy to quickly build a Ceph cluster and then create block storage, a file system, and object storage on top of it.

Installing the 15.x (Octopus) releases on CentOS 7 is problematic, so 14.x is installed here.


Environment preparation

Cluster information

ceph-deploy: 2.0.1
Ceph: Nautilus (14.2.22)
System: CentOS Linux release 7.9.2009

IP              Hostname   Services        Extra disk (OSD)
192.168.77.41   ceph01     mon, mgr, osd   100 GB data disk
192.168.77.42   ceph02     mon, mgr, osd   100 GB data disk
192.168.77.43   ceph03     mon, mgr, osd   100 GB data disk

If resources allow, a dedicated ceph-admin node can host components such as mon, mgr, and mds, with the OSDs placed on other nodes; this makes the cluster easier to manage.

Disable the firewall (on every node)

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Set the hostname

hostnamectl set-hostname ceph01   # run on ceph01; set ceph02/ceph03 accordingly on the other nodes
bash

Configure /etc/hosts name resolution

cat >> /etc/hosts <<EOF
192.168.77.41 ceph01
192.168.77.42 ceph02
192.168.77.43 ceph03
EOF
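
Optionally, verify that all three names resolve before moving on (a quick check; run from any node):

# confirm each hostname resolves via /etc/hosts
getent hosts ceph01 ceph02 ceph03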

Configure accelerated yum mirrors

Ceph needs the EPEL repository.

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

Configure passwordless SSH login

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export HOSTS="ceph01 ceph02 ceph03"
export SSHPASS=123456 # root password of the nodes
for HOST in $HOSTS;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST;done
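
To confirm passwordless login works, a quick loop like the following can be run (optional; reuses the HOSTS variable from above):

for HOST in $HOSTS;do ssh -o BatchMode=yes $HOST hostname;done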

Time synchronization

yum -y install chrony

ceph01 acts as the chrony server

# vim /etc/chrony.conf   (comment out the default pool servers, add the Aliyun servers and the allow line)
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
allow 192.168.77.0/24

systemctl restart chronyd
systemctl enable chronyd
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0

On ceph02 and ceph03, point chrony at ceph01's IP

# vim /etc/chrony.conf   (comment out the default pool servers and point at ceph01)
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.77.41 iburst

systemctl restart chronyd
systemctl enable chronyd
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0

Check the synchronization status

chronyc -a makestep
chronyc sourcestats
chronyc sources -v
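
To confirm every node is actually synchronized against ceph01, a small loop can be run from ceph01 (a sketch, assuming the passwordless SSH set up earlier):

for HOST in ceph01 ceph02 ceph03;do echo "== $HOST =="; ssh $HOST "chronyc tracking | grep -E 'Reference ID|System time'";done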

Ceph deployment

Configure the Ceph repository

cat > /etc/yum.repos.d/ceph.repo << EOF
[noarch]
name=noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0

[x86_64]
name=x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
EOF
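
The manual package installation later is run on every node, so ceph02 and ceph03 need the same repo file; one way to distribute it from ceph01 (a sketch, assuming passwordless SSH):

for HOST in ceph02 ceph03;do scp /etc/yum.repos.d/ceph.repo $HOST:/etc/yum.repos.d/; ssh $HOST "yum clean all && yum makecache";done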

Point ceph-deploy at the repository for the version being installed

export CEPH_DEPLOY_REPO_URL=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7
export CEPH_DEPLOY_GPG_URL=https://mirrors.aliyun.com/ceph/keys/release.asc

Install required utility packages

yum install -y vim wget curl net-tools bash-completion 

Install the ceph-deploy tool

Install on ceph01

yum install -y python-setuptools ceph-deploy
[root@ceph01 ~]# ceph-deploy --version
2.0.1

Deploy Mon

The monitors maintain the cluster maps (including OSD metadata), so they also need to be highly available. An odd number of monitor nodes is recommended; three nodes are used here.

mkdir /etc/ceph && cd /etc/ceph

Create the Mons

# HA cluster, no separate storage network
ceph-deploy new --public-network 192.168.77.0/24 ceph0{1,2,3}

# single node, with a separately planned storage network
# ceph-deploy new --public-network 192.168.77.0/24 --cluster-network 192.168.13.0/24 ceph1

# single node, no separate storage network; you can also start with one node and expand the cluster later
# ceph-deploy new --public-network 192.168.77.0/24 ceph1
# ceph-deploy mon add ceph02

# --cluster-network: network used for internal cluster traffic
# --public-network: network the cluster exposes to clients

Files generated after running the command

[root@ceph01 ceph]# ls -l
total 12
-rw-r--r-- 1 root root  231 Apr  9 13:29 ceph.conf # Ceph configuration file
-rw-r--r-- 1 root root 2992 Apr  9 13:29 ceph-deploy-ceph.log # ceph-deploy log
-rw------- 1 root root   73 Apr  9 13:29 ceph.mon.keyring # keyring used for authentication

[root@ceph01 ceph]# cat ceph.conf 

[global]
fsid = ed040fb0-fa20-456a-a9f0-c9a96cdf089e
public_network = 192.168.77.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.77.41,192.168.77.42,192.168.77.43
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Add configuration parameters

Allow a small amount of clock drift

echo "mon clock drift allowed = 2" >> /etc/ceph/ceph.conf
echo "mon clock drift warn backoff = 30" >> /etc/ceph/ceph.conf

Install packages on all nodes

yum install -y ceph ceph-mon ceph-mgr ceph-mds ceph-radosgw
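
The yum command above has to be run on ceph02 and ceph03 as well; either log in to each node or drive it from ceph01 with a loop such as this (a sketch, assuming passwordless SSH and the repo file already present on each node):

for HOST in ceph02 ceph03;do ssh $HOST "yum install -y ceph ceph-mon ceph-mgr ceph-mds ceph-radosgw";done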

Run on ceph01

export CEPH_DEPLOY_REPO_URL=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7
export CEPH_DEPLOY_GPG_URL=https://mirrors.aliyun.com/ceph/keys/release.asc
ceph-deploy install --release nautilus ceph01 ceph02 ceph03

Initialize the Mons

ceph-deploy mon create-initial
[root@ceph01 ceph]# ls -l
total 224
-rw------- 1 root root    113 Apr 12 13:43 ceph.bootstrap-mds.keyring
-rw------- 1 root root    113 Apr 12 13:43 ceph.bootstrap-mgr.keyring
-rw------- 1 root root    113 Apr 12 13:43 ceph.bootstrap-osd.keyring
-rw------- 1 root root    113 Apr 12 13:43 ceph.bootstrap-rgw.keyring
-rw------- 1 root root    151 Apr 12 13:43 ceph.client.admin.keyring
-rw-r--r-- 1 root root    292 Apr 12 13:43 ceph.conf
-rw-r--r-- 1 root root 157631 Apr 12 13:43 ceph-deploy-ceph.log
-rw------- 1 root root     73 Apr  9 13:53 ceph.mon.keyring
-rw-r--r-- 1 root root     92 Jun 30  2021 rbdmap

Push the generated files to every node

ceph-deploy admin ceph01 ceph02 ceph03
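
After the push, every node should be able to query the cluster with its local ceph.conf and admin keyring; a quick optional check from ceph01:

for HOST in ceph02 ceph03;do ssh $HOST ceph health;done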

Disable insecure mode

ceph config set mon auth_allow_insecure_global_id_reclaim false

You can now see three mon daemons running.
Once the three monitor nodes have been added, they automatically hold an election and provide high availability.

[root@ceph01 ceph]# ceph -s
  cluster:
    id:     ed040fb0-fa20-456a-a9f0-c9a96cdf089e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 117s)  # three mons in quorum
    mgr: no daemons active # mgr not created yet
    osd: 0 osds: 0 up, 0 in # OSDs not created yet

  data: # no pools created yet
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:    

Check the monitor election result and the cluster's quorum status

[root@ceph01 ceph]# ceph quorum_status --format json-pretty

{
    "election_epoch": 6,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph01",
        "ceph02",
        "ceph03"
    ],
    "quorum_leader_name": "ceph01", # 当前leader
    "quorum_age": 684,
    "monmap": {
        "epoch": 1,
        "fsid": "ed040fb0-fa20-456a-a9f0-c9a96cdf089e",
        "modified": "2024-04-15 11:48:15.353706",
        "created": "2024-04-15 11:48:15.353706",
        "min_mon_release": 14,
        "min_mon_release_name": "nautilus",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.77.41:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.77.41:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.77.41:6789/0",
                "public_addr": "192.168.77.41:6789/0"
            },
            {
                "rank": 1,
                "name": "ceph02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.77.42:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.77.42:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.77.42:6789/0",
                "public_addr": "192.168.77.42:6789/0"
            },
            {
                "rank": 2,
                "name": "ceph03",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.77.43:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.77.43:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.77.43:6789/0",
                "public_addr": "192.168.77.43:6789/0"
            }
        ]
    }
}

Use the dump subcommand for more detailed monitor information

[root@ceph01 ceph]# ceph mon dump
epoch 1
fsid ed040fb0-fa20-456a-a9f0-c9a96cdf089e
last_changed 2024-04-15 11:48:15.353706
created 2024-04-15 11:48:15.353706
min_mon_release 14 (nautilus)
0: [v2:192.168.77.41:3300/0,v1:192.168.77.41:6789/0] mon.ceph01
1: [v2:192.168.77.42:3300/0,v1:192.168.77.42:6789/0] mon.ceph02
2: [v2:192.168.77.43:3300/0,v1:192.168.77.43:6789/0] mon.ceph03
dumped monmap epoch 1

Deploy Mgr

The main job of ceph-mgr at this point is to expose cluster metrics to external consumers.

Only one mgr node is active at a time; the others stay in standby. A standby takes over, and changes its state to active, only when the active node fails (a quick failover check follows the status output below).

# cluster
ceph-deploy mgr create ceph0{1,2,3}

# single node + later expansion
ceph-deploy mgr create ceph01
ceph-deploy mgr create ceph02 ceph03

You can see the newly added mgr running:

[root@ceph01 ceph]# ceph -s
  cluster:
    id:     ed040fb0-fa20-456a-a9f0-c9a96cdf089e
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 6m)
    mgr: ceph01(active, since 10s), standbys: ceph02, ceph03 # ceph01 is active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:  
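
To see which mgr is currently active and to exercise the standby takeover described above, something like the following can be used (a sketch; ceph mgr fail restarts the active daemon so a standby takes over, which briefly interrupts dashboard/metrics access):

# show the currently active mgr
ceph mgr dump | grep active_name
# force a failover; ceph01 is the active mgr in this example
ceph mgr fail ceph01
# confirm that one of the standbys has become active
ceph -s | grep mgr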

Deploy OSD

Here /dev/sdb is used as the OSD disk.

[root@ceph01 ~]# fdisk -l

Disk /dev/sda: 36.5 GB, 36507222016 bytes, 71303168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000eb3ad

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1050623      524288   83  Linux
/dev/sda2         1050624    71303167    35126272   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

List the disks on each node with ceph-deploy

ceph-deploy disk list ceph01 ceph02 ceph03

Wipe (zap) the disks

ceph-deploy disk zap ceph01 /dev/sdb
ceph-deploy disk zap ceph02 /dev/sdb
ceph-deploy disk zap ceph03 /dev/sdb
[root@ceph01 ceph]# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   34G  0 disk 
|-sda1   8:1    0  512M  0 part /boot
`-sda2   8:2    0 33.5G  0 part /
sdb      8:16   0  100G  0 disk 
sr0     11:0    1 1024M  0 rom

Create the OSDs

cd /etc/ceph
ceph-deploy osd create ceph01 --data /dev/sdb
ceph-deploy osd create ceph02 --data /dev/sdb
ceph-deploy osd create ceph03 --data /dev/sdb

health: HEALTH_OK shows the cluster is in a healthy state:

[root@ceph01 ceph]# ceph -s
  cluster:
    id:     ed040fb0-fa20-456a-a9f0-c9a96cdf089e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 10m)
    mgr: ceph01(active, since 4m), standbys: ceph02, ceph03
    osd: 3 osds: 3 up (since 5s), 3 in (since 5s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:  

[root@ceph01 ceph]# ceph osd status

+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host  |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph01 | 1025M | 98.9G |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph02 | 1025M | 98.9G |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph03 | 1025M | 98.9G |    0   |     0   |    0   |     0   | exists,up |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+

[root@ceph01 ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.29306 root default                            
-3       0.09769     host ceph01                         
 0   hdd 0.09769         osd.0       up  1.00000 1.00000 
-5       0.09769     host ceph02                         
 1   hdd 0.09769         osd.1       up  1.00000 1.00000 
-7       0.09769     host ceph03                         
 2   hdd 0.09769         osd.2       up  1.00000 1.00000 

The three OSDs are now deployed.

Sync the configuration

If ceph.conf needs to be modified at any point, edit it only on ceph01 and push it to the other nodes with the command below.

ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03
# show OSD running status
ceph osd stat
# show the OSD map
ceph osd dump
# show per-OSD latency
ceph osd perf
# list detailed usage of every OSD disk in the cluster
ceph osd df
# show the OSD tree
ceph osd tree
# show the maximum number of OSDs
ceph osd getmaxosd

# configure the pool-deletion parameters in ceph.conf
vim /etc/ceph/ceph.conf   # add:
mon_allow_pool_delete = true
mon_max_pg_per_osd = 2000
# push to the other nodes
ceph-deploy --overwrite-conf admin ceph01 ceph02 ceph03

# restart the monitor service
systemctl restart ceph-mon.target
systemctl status ceph-mon.target
# to delete a pool, give the pool name twice followed by --yes-i-really-really-mean-it
ceph osd pool delete test_pool test_pool --yes-i-really-really-mean-it
# or
rados rmpool test_pool test_pool --yes-i-really-really-mean-it
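
The test_pool in the delete example is assumed to already exist; a minimal end-to-end sketch that creates a throwaway pool and then removes it (the PG count of 16 is just an example value):

# create a throwaway pool with 16 placement groups
ceph osd pool create test_pool 16 16
ceph osd lspools
# remove it again (requires mon_allow_pool_delete = true from above)
ceph osd pool delete test_pool test_pool --yes-i-really-really-mean-it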

Dashboard

yum install -y ceph-radosgw ceph-mgr-dashboard
# enable the dashboard module
ceph mgr module enable dashboard

# mgr module help
# ceph mgr module --help
# ceph mgr module ls

# create a self-signed certificate
ceph dashboard create-self-signed-cert

# set the username and password
echo admin123 > /opt/ceph-password
ceph dashboard ac-user-create admin -i /opt/ceph-password

[root@ceph01 ceph]# ceph -s |grep mgr
    mgr: ceph01(active, since 10m), standbys: ceph02, ceph03
[root@ceph01 ceph]# ceph mgr services

{
    "dashboard": "https://ceph01:8443/"
}

Visit https://192.168.77.41:8443, or configure host resolution and access it by hostname.
Log in with username admin and password admin123.
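
If the dashboard should listen on a specific address or a non-default port, the mgr module can be reconfigured (a sketch; 8443 is already the default HTTPS port and is shown only for illustration, and option names can vary slightly between releases):

ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/server_port 8443
# restart the module so the new settings take effect
ceph mgr module disable dashboard
ceph mgr module enable dashboard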

Troubleshooting

--> Finished Dependency Resolution
Error: Package: 2:ceph-common-14.2.22-0.el7.x86_64 (x86_64)
           Requires: liboath.so.0()(64bit)
Error: Package: 2:ceph-base-14.2.22-0.el7.x86_64 (x86_64)
           Requires: libleveldb.so.1()(64bit)
Error: Package: 2:ceph-mgr-14.2.22-0.el7.x86_64 (x86_64)
           Requires: python-pecan
Error: Package: 2:ceph-base-14.2.22-0.el7.x86_64 (x86_64)
           Requires: liboath.so.0(LIBOATH_1.2.0)(64bit)

Fix: install the EPEL repository.

health: HEALTH_WARN
            mon is allowing insecure global_id reclaim

Fix: disable insecure mode.
ceph config set mon auth_allow_insecure_global_id_reclaim false

Purge the configuration:

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

The following command also removes the Ceph packages:

ceph-deploy purge {ceph-node} [{ceph-node}]

Note: after running purge, Ceph must be reinstalled.
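
Applied to the hostnames used in this post, a full teardown would look roughly like this (destructive: it removes packages, data, and keys on all three nodes):

ceph-deploy purge ceph01 ceph02 ceph03
ceph-deploy purgedata ceph01 ceph02 ceph03
ceph-deploy forgetkeys
# from the working directory (/etc/ceph) remove the leftover config and keyrings
rm -f ceph.*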

# show dashboard users
ceph dashboard ac-user-show
# show dashboard roles
ceph dashboard ac-role-show
# delete a user
ceph dashboard ac-user-delete admin

Related articles

Ceph RGW and the S3 API
CephFS file system
Ceph RBD
Ceph OSD commands
Ceph Prometheus monitoring
Ceph quota configuration
