10. MongoDB sharded-cluster operations
I. Deploying a sharded cluster on Kubernetes
1. Plan:
Role | IP address | Port |
---|---|---|
cluster01-router01 | 192.168.86.21 | 27017 |
cluster01-router02 | 192.168.86.22 | 27017 |
cluster01-router03 | 192.168.86.23 | 27017 |
cluster01-configsvr01 | 192.168.86.21 | 27018 |
cluster01-configsvr02 | 192.168.86.22 | 27018 |
cluster01-configsvr03 | 192.168.86.23 | 27018 |
shard01-shardsvr01 | 192.168.86.21 | 27019 |
shard01-shardsvr02 | 192.168.86.22 | 27019 |
shard01-shardsvr03 | 192.168.86.23 | 27019 |
shard02-shardsvr01 | 192.168.86.24 | 27019 |
shard02-shardsvr02 | 192.168.86.25 | 27019 |
shard02-shardsvr03 | 192.168.86.26 | 27019 |
2. Test steps:
1) Deploy a 3-member replica set with password authentication and write some data
2) Extend the replica set into a single-shard cluster managed through the sharding layer (mongos)
3) Deploy a second replica set, add it to the sharded cluster, and let the data rebalance
3. Deploy the 3-member replica set across the three nodes
1) cat /etc/kubernetes/manifests/shard01-shardsvr01.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: shard01-shardsvr01
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - shard01
    - --shardsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27019
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/shard01-shardsvr
2) cat /etc/kubernetes/manifests/shard01-shardsvr02.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: shard01-shardsvr02
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - shard01
    - --shardsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27019
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/shard01-shardsvr
3) cat /etc/kubernetes/manifests/shard01-shardsvr03.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: shard01-shardsvr03
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - shard01
    - --shardsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27019
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/shard01-shardsvr
4) Initialize the cluster
Install the mongo client
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.2.14.tgz -P /opt/
tar -xf /opt/mongodb-linux-x86_64-rhel70-4.2.14.tgz -C /opt/
export PATH=$PATH:/opt/mongodb-linux-x86_64-rhel70-4.2.14/bin/
Initialize the replica set
mongo 192.168.86.21:27019
use admin
rs.initiate({ _id: "shard01", members: [ { _id: 0, host : "192.168.86.21:27019" } ] } )
rs.add('192.168.86.22:27019')
rs.add('192.168.86.23:27019')
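The commands above initiate a single-member set and then grow it with two `rs.add()` calls. Equivalently, all members can be listed in the config document passed to `rs.initiate()`; a Python sketch of how that document is shaped (`build_rs_config` is a hypothetical helper, not part of the mongo shell):

```python
# Hypothetical helper: build the config document that rs.initiate()
# accepts, listing every member up front instead of calling rs.add().
def build_rs_config(rs_id, hosts, configsvr=False):
    cfg = {
        "_id": rs_id,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }
    if configsvr:
        # Config-server replica sets additionally require configsvr: true.
        cfg["configsvr"] = True
    return cfg

cfg = build_rs_config("shard01", [
    "192.168.86.21:27019",
    "192.168.86.22:27019",
    "192.168.86.23:27019",
])
```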
Create the admin user
mongo 192.168.86.21:27019
use admin
db = db.getSiblingDB("admin");db.createUser({user:"root",pwd:"rootPassw0rd",roles:["root"]});
Enable keyfile authentication
openssl rand -base64 745 >> key
chmod 600 key
scp key root@common01:/data/shard01-shardsvr/
scp key root@common02:/data/shard01-shardsvr/
scp key root@common03:/data/shard01-shardsvr/
# On each node, edit the manifest to add the keyFile option (keep the copied
# key at mode 600 and readable by the mongod process; make sure the inserted
# line's indentation matches the other command: entries)
sed -i '/port/a\ - --keyFile=/data/db/key' /etc/kubernetes/manifests/shard01-*.yaml
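The keyfile generated above must contain 6 to 1024 characters from the base64 alphabet (whitespace is ignored) and must have restrictive permissions. A Python sketch of the length/alphabet check, illustrative only and not MongoDB's actual validation code:

```python
import base64
import os
import re

def looks_like_valid_keyfile(content: str) -> bool:
    # mongod requires 6..1024 characters from the base64 alphabet;
    # whitespace (e.g. the newlines openssl emits) is ignored.
    stripped = re.sub(r"\s", "", content)
    return (6 <= len(stripped) <= 1024
            and re.fullmatch(r"[A-Za-z0-9+/=]+", stripped) is not None)

# Simulate `openssl rand -base64 745`: 745 random bytes -> 996 base64 chars,
# comfortably inside the 1024-character limit.
key = base64.b64encode(os.urandom(745)).decode()
```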
Log in to verify the password (the URI is quoted because `?` is a shell glob character)
mongo "mongodb://root:rootPassw0rd@192.168.86.21:27019,192.168.86.22:27019,192.168.86.23:27019/admin?replicaSet=shard01"
Insert test data
use duanshuaixing-mongodb
for(i=1; i<=5000;i++){
db.user.insert( {id:'user'+i, level:i} )
}
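The shell loop above issues 5,000 single-document inserts. The same payload sketched in Python (with pymongo you would hand this list to `insert_many` for far fewer round-trips; pymongo and the connection details are assumptions, so only the document generation is shown):

```python
# Generate the same 5,000 test documents the mongo-shell loop inserts.
docs = [{"id": f"user{i}", "level": i} for i in range(1, 5001)]
```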
4. Extend the replica set into a single-shard cluster managed through mongos
1) Deploy the 3-node configsvr replica set
Deployment (the startup arguments set the role to configsvr)
cat cluster01-configsvr0{1,2,3}.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: cluster01-configsvr0{1,2,3}
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: configsvr
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - configsvr
    - --configsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27018
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/configdb
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/cluster01-configsvr
Initialize the config server replica set
mongo 192.168.86.21:27018
use admin
rs.initiate({ _id: "configsvr", configsvr: true, members: [ { _id: 0, host: "192.168.86.21:27018" } ] } )
rs.add('192.168.86.22:27018')
rs.add('192.168.86.23:27018')
Create the admin user
mongo 192.168.86.21:27018
use admin
db = db.getSiblingDB("admin");db.createUser({user:"root",pwd:"rootPassw0rd",roles:["root"]});
# Copy the replica set's keyfile and, on each node, add the keyFile option to
# the manifest (again, keep the inserted line's indentation consistent with
# the other command: entries)
cp -a /data/shard01-shardsvr/key /data/cluster01-configsvr/
sed -i '/port/a\ - --keyFile=/data/configdb/key' /etc/kubernetes/manifests/cluster01-configsvr*.yaml
Log in to verify the password
mongo "mongodb://root:rootPassw0rd@192.168.86.21:27018,192.168.86.22:27018,192.168.86.23:27018/admin?replicaSet=configsvr"
2) Deploy a single mongos router
About the router
The router (mongos) role in MongoDB only provides an entry point and stores no data; with multiple routers configured, any one of them can serve requests. Note that the manifest below reads its keyfile from the hostPath volume, so the shared key must also be copied into /data/cluster01-router/ on the router node.
The router's most important setting is the configsvr address, specified as the replica set name plus the ip:port list.
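That `--configdb` string has the shape `<configReplSetName>/<host:port>,<host:port>,...`; a small Python sketch of assembling it (`configdb_arg` is an illustrative helper, not a MongoDB API):

```python
def configdb_arg(rs_name, hosts):
    # mongos --configdb takes: <configReplSetName>/<host:port>,<host:port>,...
    return f"{rs_name}/{','.join(hosts)}"

arg = configdb_arg("configsvr", [
    "192.168.86.21:27018",
    "192.168.86.22:27018",
    "192.168.86.23:27018",
])
```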
cat /etc/kubernetes/manifests/cluster01-router01.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: cluster01-router01
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongos
    - --configdb
    - configsvr/192.168.86.21:27018,192.168.86.22:27018,192.168.86.23:27018
    - --bind_ip_all
    - --port=27017
    - --keyFile=/data/db/key
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/cluster01-router
3) Configure the sharded cluster
mongo 192.168.86.21:27017 (authentication here uses the credentials created on the configsvr replica set)
use admin
sh.addShard("shard01/192.168.86.21:27019,192.168.86.22:27019,192.168.86.23:27019")
sh.status()
5. Add a second shard and rebalance the data
Deploy shard02 as a 3-member replica set on 192.168.86.24/25/26 (port 27019) following the same steps as shard01, reusing the same keyfile, then register it through the router:
sh.addShard("shard02/192.168.86.24:27019,192.168.86.25:27019,192.168.86.26:27019")
The balancer only migrates sharded collections, so to see the test data rebalance, enable sharding on the database and shard the collection, for example with a hashed key (the index must exist before sharding a non-empty collection):
sh.enableSharding("duanshuaixing-mongodb")
use duanshuaixing-mongodb
db.user.createIndex({id: "hashed"})
sh.shardCollection("duanshuaixing-mongodb.user", {id: "hashed"})
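Once the second shard joins, the balancer migrates chunks from the fuller shard to the emptier one until the counts are roughly even. A conceptual Python sketch of that goal (an illustration of the idea only, not MongoDB's balancer implementation):

```python
def rebalance(chunks_per_shard, threshold=1):
    # Move one chunk at a time from the most-loaded shard to the
    # least-loaded one until the spread is within the threshold.
    counts = dict(chunks_per_shard)
    moves = []
    while max(counts.values()) - min(counts.values()) > threshold:
        src = max(counts, key=counts.get)
        dst = min(counts, key=counts.get)
        counts[src] -= 1
        counts[dst] += 1
        moves.append((src, dst))
    return counts, moves

counts, moves = rebalance({"shard01": 10, "shard02": 0})
```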
III. Managing the sharded cluster
1. Common commands
1) List shard information
use admin
db.runCommand( { listshards : 1 } )
or
use config
db.shards.find()
2) Overall sharding status
sh.status()
3)