I. Introduction to Ceph
The deployment requires kernel 3.10 or newer (CentOS 7 or later). Installation is driven by the ceph-deploy tool to simplify the process; this article uses ceph-deploy 1.5.39. Prepare at least six machines: 1 ceph-admin management node, 3 mon/mgr/mds nodes, and 2 OSD nodes.
II. Installing Ceph
1. Deploying ceph-admin
a) Set the hostname and configure the hosts file.
shell hostnamectl --static set-hostname shyt-ceph-admin
shell cat /etc/hosts
10.52.0.181 shyt-ceph-mon1
10.52.0.182 shyt-ceph-mon2
10.52.0.183 shyt-ceph-mon3
10.52.0.201 shyt-ceph-osd-node1
10.52.0.202 shyt-ceph-osd-node2
b) Generate an SSH key and copy it to every node.
shell ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TvZDQwvZpIKFAeSyh8Y1QhEOG9EzKaHaNN1rMl8kxfI root@shyt-ceph-admin
The key's randomart image is:
+---[RSA 2048]----+
|Oo.o... . |
|*..... |
|o o o |
|o*o.. Eo . |
|oo o o S |
|.. o . |
| . . o |
| . |
| |
+----[SHA256]-----+
shell ssh-copy-id shyt-ceph-mon1
shell ssh-copy-id shyt-ceph-mon2
shell ssh-copy-id shyt-ceph-mon3
shell ssh-copy-id shyt-ceph-osd-node1
shell ssh-copy-id shyt-ceph-osd-node2
c) Install ceph-deploy.
# Switch to the Aliyun yum mirror
shell wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell yum clean all
shell yum makecache
shell yum -y install https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm
shell ceph-deploy --version
1.5.39
d) Create a deployment directory.
shell mkdir deploy_ceph_cluster
shell cd deploy_ceph_cluster
2. Deploying the mon/mgr/mds nodes
a) Set the hostname.
shell hostnamectl --static set-hostname shyt-ceph-mon1
b) Update the yum sources.
shell wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell yum clean all
shell yum makecache
c) Create the Ceph monitor nodes (run on ceph-admin).
# Generates the ceph config file, the monitor keyring, and a deployment log.
shell ceph-deploy new shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3
d) Add the following settings to ceph.conf (a fully commented version is provided in the attachment).
shell cat /etc/ceph/ceph.conf
[global]
osd pool default size = 3
osd pool default min size = 1
public network = 10.52.0.0/24
cluster network = 10.52.0.0/24
cephx require signatures = true
cephx cluster require signatures = true
cephx service require signatures = true
cephx sign messages = true

[mon]
mon data size warn = 15*1024*1024*1024
mon data avail warn = 30
mon data avail crit = 10
# Heterogeneous PCs in the cluster keep the clock drift above the default
# 0.05 s, so the allowed drift is relaxed to ease synchronization.
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon allow pool delete = true
mon osd allow primary affinity = true

[osd]
osd journal size = 10000
osd mkfs type = xfs
osd max write size = 512
osd client message size cap = 2147483648
osd deep scrub stride = 131072
osd op threads = 16
osd disk threads = 4
osd map cache size = 1024
osd map cache bl size = 128
#osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier
osd recovery op priority = 5
osd recovery max active = 10
osd max backfills = 4
osd min pg log entries = 30000
osd max pg log entries = 100000
osd mon heartbeat interval = 40
ms dispatch throttle bytes = 148576000
objecter inflight ops = 819200
osd op log threshold = 50
osd crush chooseleaf type = 0
filestore xattr use omap = true
filestore min sync interval = 10
filestore max sync interval = 15
filestore queue max ops = 25000
filestore queue max bytes = 1048576000
filestore queue committing max ops = 50000
filestore queue committing max bytes = 10485760000
filestore split multiple = 8
filestore merge threshold = 40
filestore fd cache size = 1024
filestore op threads = 32
journal max write bytes = 1073714824
journal max write entries = 10000
journal queue max ops = 50000
journal queue max bytes = 10485760000

[mds]
debug ms = 1/5

[client]
rbd cache = true
rbd cache size = 335544320
rbd cache max dirty = 134217728
rbd cache max dirty age = 30
rbd cache writethrough until flush = false
rbd cache max dirty object = 2
rbd cache target dirty = 235544320
e) Install the Ceph packages.
shell ceph-deploy install shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3 \
--release mimic \
--repo-url http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/ \
--gpg-url http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
f) Bootstrap the initial monitors and gather all keys.
shell ceph-deploy mon create-initial
g) Distribute the configuration files.
# ceph-deploy copies the config file and keys to the other nodes, so the
# cluster can be managed without specifying mon addresses or credentials.
shell ceph-deploy admin shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3
h) Configure mgr.
# Running ceph health prints:
# HEALTH_WARN no active mgr
# Since Ceph 12 the manager daemon is mandatory: add a mgr to every machine
# running a monitor, otherwise the cluster stays in the WARN state.
shell ceph-deploy mgr create shyt-ceph-mon1:cephsvr-16101 shyt-ceph-mon2:cephsvr-16102 shyt-ceph-mon3:cephsvr-16103
# Note: if ceph-mgr fails, the whole cluster effectively runs into serious trouble.
# It is therefore recommended to create a separate ceph-mgr on every mon node
# (at least 3 mon nodes), following the method above; each mgr needs its own
# distinct name.
# To stop a ceph-mgr instance:
shell systemctl stop ceph-mgr@cephsvr-16101
3. Deploying the OSD nodes
a) Set the hostname.
shell hostnamectl --static set-hostname shyt-ceph-osd-node1
b) Update the yum sources.
shell wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell yum clean all
shell yum makecache
c) Install the Ceph packages.
shell ceph-deploy install shyt-ceph-osd-node1 shyt-ceph-osd-node2 \
--release mimic \
--repo-url http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/ \
--gpg-url http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
d) Configure the OSDs.
shell ceph-deploy disk zap shyt-ceph-osd-node1:sdb shyt-ceph-osd-node1:sdc shyt-ceph-osd-node1:sdd
shell ceph-deploy osd create shyt-ceph-osd-node1:sdb shyt-ceph-osd-node1:sdc shyt-ceph-osd-node1:sdd
e) Distribute the configuration files.
shell ceph-deploy admin shyt-ceph-osd-node1 shyt-ceph-osd-node2
# Check the OSD node status
shell ceph -s
shell ceph osd tree
III. Enabling the Dashboard
Run the following on any node to enable dashboard support.
# Enable the dashboard plugin
shell ceph mgr module enable dashboard
# Generate a self-signed certificate
shell ceph dashboard create-self-signed-cert
Self-signed certificate created
# Configure the dashboard listen address and port
shell ceph config set mgr mgr/dashboard/server_port 8080
# Set the dashboard login credentials
shell ceph dashboard set-login-credentials root 123456
Username and password updated
# Disable SSL so the dashboard is served over plain HTTP
shell ceph config set mgr mgr/dashboard/ssl false
# Restart the dashboard on every mon node so the settings take effect
shell systemctl restart ceph-mgr.target
# Open http://10.52.0.181:8080 in a browser
# List the ceph-mgr services
shell ceph mgr services
{
    "dashboard": "http://shyt-ceph-mon1:8080/"
}
IV. Creating the Ceph MDS Role
1. Install ceph-mds
# Deploy multiple MDS nodes to avoid a single point of failure
shell ceph-deploy mds create shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3
2. Manually create the data and metadata pools
shell ceph osd pool create data 128 128
shell ceph osd pool create metadata 128 128
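The pg_num of 128 used here follows the common sizing rule of thumb: total PGs ≈ (OSD count × 100) / replica size, rounded to a power of two, then shared among the pools. A minimal sketch of that arithmetic, assuming 6 OSDs (2 nodes × 3 disks) and the 3-way replication set in ceph.conf:

```shell
# Rule-of-thumb PG sizing (assumptions: 6 OSDs, replica size 3)
osds=6
replicas=3
raw=$(( osds * 100 / replicas ))   # 200
pgs=1
while [ "$pgs" -lt "$raw" ]; do    # round up to the next power of two
  pgs=$(( pgs * 2 ))
done
echo "$pgs"                        # 256
```

256 total PGs split across the data and metadata pools is consistent with the 128 per pool used above.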
shell ceph fs new cephfs metadata data
shell ceph mds stat
cephfs-1/1/1 up {0=shyt-ceph-mon3=up:active}, 2 up:standby
3. Mount the cephfs filesystem
shell wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
shell yum clean all
shell yum makecache
shell yum -y install https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/ceph-fuse-13.2.5-0.el7.x86_64.rpm
# Create the ceph directory and copy ceph.client.admin.keyring and ceph.conf into it.
shell mkdir /etc/ceph/
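The copy step itself can be scripted with scp; a minimal sketch, assuming the files sit in the deploy_ceph_cluster directory created earlier on the ceph-admin node and that this machine can reach it over SSH as root:

```shell
# Copy the cluster config and admin keyring from the ceph-admin node
# (deploy_ceph_cluster is the deployment directory created earlier;
# both the path and root access are assumptions)
admin=shyt-ceph-admin
for f in ceph.conf ceph.client.admin.keyring; do
  scp -o BatchMode=yes "root@${admin}:deploy_ceph_cluster/${f}" /etc/ceph/ \
    || echo "copy of ${f} failed - copy it over manually"
done
```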
# Create the mount point
shell mkdir /storage
shell ceph-fuse /storage
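A quick way to confirm the mount succeeded before adding it to the boot sequence; the helper below is a sketch that greps the usual `mount` output format:

```shell
# Returns success if the given path appears as a mount point.
# $2 optionally supplies a mount table (defaults to the live `mount` output).
is_mounted() {
  table="${2:-$(mount 2>/dev/null)}"
  printf '%s\n' "$table" | grep -q " on $1 type "
}

if is_mounted /storage; then
  echo "/storage is mounted"
else
  echo "/storage is NOT mounted - check ceph-fuse and the keyring in /etc/ceph"
fi
```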
# Add the mount to the boot startup items
shell echo "ceph-fuse /storage" >> /etc/rc.d/rc.local
# On CentOS 7 rc.local is not executable by default, so make sure it runs at boot
shell chmod +x /etc/rc.d/rc.local
Reprinted from: https://www.cnblogs.com/91donkey/p/10938488.html