
Kubernetes binary installation tutorial (single master)

Preface

Deploying Kubernetes from binaries is fairly tedious and there are many details to get right, but the benefits are just as obvious: you gain a much deeper understanding of the overall k8s architecture, and later troubleshooting becomes far more systematic.

A k8s cluster has to be deployed in a sensible order. You cannot install a kube-apiserver first, then a kubelet, and then kube-controller-manager; a deployment done in that order will not succeed. Before installing anything, therefore, we need a reasonably well thought-out deployment plan.

Second, a binary deployment has a number of milestones. What is a milestone? It is a point at which, once reached, you can move on to the next phase, and the next phase may offer several optional directions. For example:

[root@master cfg]# k get no -A
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   12h   v1.18.3
k8s-node1    Ready    <none>   11h   v1.18.3
k8s-node2    Ready    <none>   11h   v1.18.3

Usually, once you get this far, the k8s cluster can be considered established. Is it usable? Yes, but its functionality is incomplete: CoreDNS, for example, has not been installed yet, and before that there is also the choice of network plugin. Once the in-cluster DNS is installed, a new phase begins: choosing a graphical management UI for Kubernetes. There are many options here — Dashboard, KubeSphere, or some other console. With that done, the next phase is installing ingress and an ingress for the dashboard. After that there is still high availability for the master nodes and for the apiserver. Only when all of this is done can you say a complete, production-ready k8s cluster has been deployed.

Files required for the installation:

Link: https://pan.baidu.com/s/1XOeUD2qQYBsVQVfnulS7lA?pwd=k8ss
Extraction code: k8ss

 

I. Cluster planning

No. | IP             | Role         | Hostname   | Components installed
1   | 192.168.217.16 | master, node | k8s-master | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker base environment, etcd
2   | 192.168.217.17 | node         | k8s-node1  | kubelet, kube-proxy, etcd, docker base environment
3   | 192.168.217.18 | node         | k8s-node2  | kubelet, kube-proxy, etcd, docker base environment

The planned installation order for this cluster is:

1. Passwordless SSH (all three servers)

2. Time server setup (all three servers)

3. Disable swap (all three servers)

4. Upgrade the system kernel to a recent version (all three servers)

5. Set up a local package repository (all three servers)

6. Set up the docker environment (all three servers)

7. Set up the etcd cluster (all three servers)

8. Configure and install the kube-apiserver service (master node only)

9. Configure and install the kube-controller-manager service (master node only)

10. Configure and install the kube-scheduler service (master node only)

11. Verify and query the cluster status --- the first small milestone

12. Configure and install the kubelet service (node nodes; the master may also run it)

13. Configure and install the kube-proxy service (node nodes; the master may also run it)

14. Deploy the CNI network --- kube-flannel (all three servers)

15. Verify and query the cluster node status --- the second small milestone.

II. Deployment, following the steps above

(1) Passwordless SSH between the three servers

ssh-keygen -t rsa
# press Enter through every prompt; the defaults are fine
ssh-copy-id 192.168.217.16
ssh-copy-id 192.168.217.17
ssh-copy-id 192.168.217.18

Run this on all three servers. This assumes sshd has not been moved off the default port.
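
If sshd does listen on a non-default port, ssh-copy-id can still be used; a minimal sketch (the port 2222 below is only a hypothetical example):

ssh-copy-id -p 2222 root@192.168.217.17
# subsequent ssh/scp commands then also need the port flag, e.g.
scp -P 2222 somefile root@192.168.217.17:/tmp/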

(2) Time server setup

See my separate post on building and configuring a Linux NTP time server (zsk_john's CSDN blog).

(3) Disabling swap

See my post "KVM virtual machine management, part two" (virtual machine disk optimization; CentOS dropping into dracut mode reporting /dev/centos/swap does not exist, and how to recover) on zsk_john's CSDN blog.

There is a pitfall here: the post above matters when swap lives on an LVM volume; for an ordinary swap partition you can disregard it. Either way, following that post lets you remove swap cleanly and without side effects.
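
For a plain (non-LVM) swap setup, a minimal sketch of disabling swap on all three servers, assuming the swap entry lives in /etc/fstab:

swapoff -a                          # turn swap off for the running system
sed -ri 's/.*swap.*/#&/' /etc/fstab # comment out the swap line so it stays off after reboot
free -h                             # confirm the Swap line now shows 0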

(4) Upgrading the kernel

See my post on upgrading the kernel on CentOS 7 (two methods: compiling the kernel and updating via yum) on zsk_john's CSDN blog.

The kernel is upgraded so the cluster runs more stably; on an old kernel the cluster may crash frequently. Upgrading to a 5.x kernel or newer is sufficient.

[root@master ~]# uname -a
Linux master 5.16.9-1.el7.elrepo.x86_64 #1 SMP PREEMPT Thu Feb 10 10:39:14 EST 2022 x86_64 x86_64 x86_64 GNU/Linux
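
As a rough sketch of the yum route via the ELRepo repository (the URLs and package names here are assumptions to verify against the post above, not a fixed recipe):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-ml   # mainline kernel
grub2-set-default 0                                   # newest kernel is entry 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot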

(5) Setting up a local repository

See my guide to building a fully local Linux yum repository (a beginner-friendly primer) on zsk_john's CSDN blog.

The local repository is there to cover any dependencies that may need to be installed along the way.
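
A minimal sketch of a local repository built from the CentOS installation ISO (the ISO file name, mount point and repo id below are arbitrary examples, not requirements):

mount -o loop /root/CentOS-7-x86_64-DVD.iso /mnt   # or: mount /dev/cdrom /mnt
cat > /etc/yum.repos.d/local.repo << EOF
[local]
name=local media repo
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF
yum clean all && yum makecache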

(6) Docker environment setup

See my post on using an ansible role to quickly deploy a basic docker environment to all hosts in one shot (zsk_john's CSDN blog).

That post includes an all-in-one installation package for ansible and docker; just follow the tutorial there.

(7) etcd cluster setup

See my post on quickly deploying an etcd cluster offline on CentOS 7 with an ansible playbook (zsk_john's CSDN blog).

This is also built with ansible.
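
Whichever way etcd was deployed, it is worth confirming the cluster is healthy before moving on. A sketch using etcdctl, assuming the binaries live under /opt/etcd/bin and the certificates under /opt/etcd/ssl (the same paths the kube-apiserver config below refers to):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.217.16:2379,https://192.168.217.17:2379,https://192.168.217.18:2379" \
  endpoint health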

The steps so far make up the basic environment; the steps that follow build the core k8s services themselves.

(8) Setting up the kube-apiserver service (master node)
 

Prepare the executables the service needs:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

chmod a+x /opt/kubernetes/bin/*
chmod a+x /usr/bin/kubectl

Prepare the configuration file the service needs at runtime:

[root@master cfg]# cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.217.16:2379,https://192.168.217.17:2379,https://192.168.217.18:2379 \
--bind-address=192.168.217.16 \
--secure-port=6443 \
--advertise-address=192.168.217.16 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

Explanation of the kube-apiserver.conf file:

Note: the trailing backslash at the end of each line is a line-continuation character, so the whole option string is read as one line (and survives being written out with a heredoc/EOF).
--logtostderr: when false, log to files instead of stderr
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: the bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to talk to kubelets
--tls-xxx-file: the apiserver's own https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings


Preparing the certificates (three JSON definitions in total):

Self-signed certificate authority (CA):

[root@master k8s-ssl]# cat ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
[root@master k8s-ssl]# cat ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces the CA certificate files: the ones starting with ca and ending in .pem (ca.pem and ca-key.pem).
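
Optionally, the generated certificate can be inspected to confirm the subject and validity period look right, for example with openssl:

openssl x509 -in ca.pem -noout -subject -issuer -dates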

Use the self-signed CA to issue the kube-apiserver HTTPS certificate:

[root@master k8s-ssl]# cat server-csr.json 
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.217.16",
    "192.168.217.17",
    "192.168.217.18",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

In total there are now four .pem certificate files (ca.pem, ca-key.pem, server.pem, server-key.pem); copy them into /opt/kubernetes/ssl:

cp server*.pem ca*.pem /opt/kubernetes/ssl/

That wraps up certificate generation. Next, enable the TLS Bootstrapping mechanism.




cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

The token here can be regenerated with the command below and substituted in:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
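
If you do regenerate it, the same value must be written into token.csv and later reused as the TOKEN variable when generating bootstrap.kubeconfig. A small sketch, assuming the field layout matches the token.csv above:

NEW_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${NEW_TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
echo ${NEW_TOKEN}   # keep this value for the TOKEN variable used later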



The systemd unit for kube-apiserver:

[root@master ssl]# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start the service and enable it at boot:

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

A green status (active (running)) means the service is healthy:

[root@master ssl]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-08-26 15:33:19 CST; 6h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3009 (kube-apiserver)
   Memory: 365.3M
   CGroup: /system.slice/kube-apiserver.service
           └─3009 /opt/kubernetes/bin/kube-apiserver --v=2 --logtostderr=false --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.217.16:2379,https://1...

Aug 26 15:33:19 master systemd[1]: Started Kubernetes API Server.
Aug 26 15:33:19 master systemd[1]: Starting Kubernetes API Server...
Aug 26 15:33:28 master kube-apiserver[3009]: E0826 15:33:28.034854    3009 controller.go:152] Unable to remove old endpoints from kubernetes service: Sto...ErrorMsg:
Hint: Some lines were ellipsized, use -l to show in full.

If an error keeps the service from starting, check the system log /var/log/messages.
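
Since the component runs under systemd, journalctl is an equally convenient place to look, for example:

journalctl -u kube-apiserver --no-pager -n 50     # last 50 lines from the unit
grep kube-apiserver /var/log/messages | tail -n 50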

Watching /var/log/messages, you can see that on its very first start the apiserver creates a large number of cluster roles, corresponding to the various resources inside k8s. For example:

Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.321342    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.335178    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.346905    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.359675    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.370449    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.381805    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.395624    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.406568    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.415029    6822 healthz.go:200] [+]ping ok
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.516294    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.525808    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.535778    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.545944    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.558356    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.567806    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.577033    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.585929    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.596499    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.605861    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.614996    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.624625    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.635380    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.644132    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.653821    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.663108    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.672682    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.685326    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.694401    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.703354    6822 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
Aug 30 10:01:23 master kube-apiserver: I0830 10:01:23.713226    6822 healthz.go:200] [+]ping ok
Aug 30 10:01:23 master kube-apiserver: [+]log ok
Aug 30 10:01:23 master kube-apiserver: [+]etcd ok

Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.123145    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.132424    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.149014    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.160210    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.169018    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.178514    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.187484    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.201137    6822 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
Aug 30 10:01:24 master kube-apiserver: I0830 10:01:24.213896    6822 healthz.go:200] [+]ping ok

(9) Deploying kube-controller-manager

The service's configuration file:

[root@master cfg]# cat /opt/kubernetes/cfg/kube-controller-manager.conf 
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

Notes on the configuration file:

--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: when several instances of this component run, elect a leader automatically (HA).
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates to kubelets; it is the same CA the apiserver uses, i.e. the two services share the CA certificate.

The service's systemd unit:

[root@master cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start it and enable it at boot:

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

(10) Deploying kube-scheduler

This is the scheduling service: it watches the apiserver for newly created pods that have not yet been assigned a node and binds each of them to a suitable node.

Configuration file:

[root@master cfg]# cat /opt/kubernetes/cfg/kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"

Notes on the configuration file:

--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: when several instances of this component run, elect a leader automatically (HA).

The systemd unit:

[root@master cfg]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start it and enable it at boot:

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

(11) Verifying the cluster status

With these three services in place, you can now run a health check on the cluster:

[root@master cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

If any component is down or unhealthy, this command will show it. For example, stop one etcd member and the same command reports an error:

[root@master cfg]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                             ERROR
scheduler            Healthy     ok                                                                                                  
etcd-1               Unhealthy   Get https://192.168.217.17:2379/health: dial tcp 192.168.217.17:2379: connect: connection refused   
controller-manager   Healthy     ok                                                                                                  
etcd-0               Healthy     {"health":"true"}                                                                                   
etcd-2               Healthy     {"health":"true"}   

(12) Installing kubelet on the node(s)

kubelet is one of the more important services on a worker node, and it is also one of the trickier ones to configure.

The kubelet configuration file:

[root@master cfg]# cat /opt/kubernetes/cfg/kubelet.conf 
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"

Import the related image archive, registry.cn-hangzhou.aliyuncs.com_google_containers_pause_3.2.tar, with docker load on all three nodes.

Notes on the configuration file:

--hostname-override: the node's display name; must be unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and is used later to talk to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: the parameter configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: the image of the container that manages the pod network (pause)

One tricky point to note: --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig means this file is generated automatically when the service starts, but even a small mistake will prevent it from being generated. For instance, if there is an error anywhere in the file below, kubelet.kubeconfig will not be created.

[root@master cfg]# cat /opt/kubernetes/cfg/kubelet-config.yml 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
  - 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

Generate the bootstrap.kubeconfig file:

KUBE_APISERVER="https://192.168.217.16:6443"
TOKEN="c47ffb939f5ca36231d9e3121a252940"

The cluster name is defined by the command below; the name used here is kubernetes:

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

The user name defined here is kubelet-bootstrap; this user will need to be granted admin rights.

kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig

Generate the kubeconfig. This file is very important and is created in the directory where the command is run; if you did not run the command inside /opt/kubernetes/cfg, copy the resulting file into that directory afterwards.

kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Grant the permissions:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=cluster-admin  --user=kubelet-bootstrap

Authorize the apiserver to access kubelets:

[root@master ~]# cat apiserver-to-kubelet-rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
    - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes

Apply the file:

kubectl apply -f apiserver-to-kubelet-rbac.yaml

Copy the file into the configuration directory:

cp bootstrap.kubeconfig /opt/kubernetes/cfg

The kubelet systemd unit:

[root@master cfg]# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Remove any auto-generated drop-in directory:

rm -rf /usr/lib/systemd/system/kubelet.service.d

Start it and enable it at boot:

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Approve the kubelet certificate request so the node joins the cluster:

# view the pending kubelet certificate request
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# approve the request
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A

After approval, kubectl get csr shows the CONDITION as Approved,Issued, meaning the request went through.

[root@master cfg]# k get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-u3XGZBQ_M8SKt60J5jCIH7enAbRtKRsbW8LgBM8XsRQ   24m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
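
If several requests are pending at once (for example when multiple nodes come up together), they can be approved in one go; a small sketch:

kubectl get csr -o name | xargs -r kubectl certificate approve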

Now look at the nodes; you will see one node in NotReady state:

[root@master cfg]# k get no
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   63s   v1.18.3

(13) Deploying kube-proxy

Service configuration file:

[root@master cfg]# cat kube-proxy.conf 
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

Parameter configuration file:

[root@master cfg]# cat kube-proxy-config.yml 
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24

Generate the kube-proxy.kubeconfig file; first, the certificate signing request:

[root@master cfg]# cat ~/k8s-ssl/kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Copy the certificates:

cp kube-proxy-key.pem kube-proxy.pem /opt/kubernetes/ssl/

Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.217.16:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

The systemd unit:

[root@master bin]# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Start it and enable it at boot:

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

(14) Deploying the worker nodes (building on the master's setup)

Copy the relevant files to node1 (configuration files, certificates, executables and the systemd units; node1 is shown here, node2 gets exactly the same treatment):

First create the directories on node1:

mkdir -p /opt/kubernetes/{cfg,bin,ssl,logs}
scp /opt/kubernetes/bin/{kubelet,kube-proxy}   k8s-node1:/opt/kubernetes/bin/

scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service}   k8s-node1:/usr/lib/systemd/system/

scp /opt/kubernetes/cfg/{kubelet.conf,kube-proxy.conf,kube-proxy-config.yml,kubelet-config.yml,bootstrap.kubeconfig,kube-proxy.kubeconfig}  k8s-node1:/opt/kubernetes/cfg/

scp /opt/kubernetes/ssl/{ca-key.pem,ca.pem,kubelet.crt,kube-proxy-key.pem,kube-proxy.pem,server-key.pem,server.pem}  k8s-node1:/opt/kubernetes/ssl/

Adjust the hostname in two files:

In kubelet.conf change the value of --hostname-override, and in kube-proxy-config.yml change hostnameOverride, to the current host's name; on node1, for example, set both to k8s-node1, as in the sketch below.
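
A minimal sketch, run on k8s-node1 (adapt the name on each node):

sed -i 's/--hostname-override=k8s-master/--hostname-override=k8s-node1/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8s-master/hostnameOverride: k8s-node1/' /opt/kubernetes/cfg/kube-proxy-config.yml
systemctl daemon-reload
systemctl enable --now kubelet kube-proxy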

Nothing else needs changing; start the services, check their status, and once they are running approve the node's join request on the master (node1's approval is shown here):

[root@master ssl]# k get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U   51s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
[root@master ssl]# kubectl certificate approve node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U
certificatesigningrequest.certificates.k8s.io/node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U approved

Check the result on the master to verify everything is in order:

[root@master cfg]# k get no
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   <none>   4h40m   v1.18.3
k8s-node1    NotReady   <none>   34m     v1.18.3
k8s-node2    NotReady   <none>   33m     v1.18.3


At this point the nodes are NotReady, for the following reason (the CNI plugin configuration has not been initialized):

Aug 27 15:58:56 master kubelet: E0827 15:58:56.093236   14623 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

(15) Installing the network plugin

Import the docker image archive quay.io_coreos_flannel_v0.13.0.tar on every node, i.e. docker load < quay.io_coreos_flannel_v0.13.0.tar

[root@master cfg]# cat ../bin/kube-flannel.yml 
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Apply the file above (kubectl apply -f kube-flannel.yml) and the network plugin is installed.
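
To confirm the plugin is up, check the flannel pods and the node status afterwards; a quick sketch:

kubectl -n kube-system get pods -l app=flannel -o wide   # one pod per node, all Running
kubectl get nodes                                        # nodes should flip to Ready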

(16) Installing CoreDNS

There are five manifests in total, one of which is for testing DNS:

[root@master coredns]# cat coredns-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
[root@master coredns]# cat coredns-cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes cluster.local 10.254.0.0/18
        forward . /etc/resolv.conf
        cache 30
    }
[root@master coredns]# cat coredns-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
[root@master coredns]# cat coredns-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

test-dns.yaml (used to verify DNS resolution):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox:1.28.3
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 30000
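
A sketch of applying the manifests and then verifying name resolution from the busybox test pod (file names and label selectors taken from the manifests above):

kubectl apply -f coredns-sa.yaml -f coredns-cm.yaml -f coredns-deploy.yaml -f coredns-service.yaml
kubectl -n kube-system get pods -l k8s-app=coredns        # wait until Running
kubectl apply -f test-dns.yaml
kubectl exec -it $(kubectl get pod -l app=busybox -o jsonpath='{.items[0].metadata.name}') -- nslookup kubernetes
# "kubernetes" should resolve to the service IP 10.0.0.1 via the cluster DNS at 10.0.0.2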

(17) Installing the dashboard

Load the two archives from the network drive, kubernetesui_metrics-scraper_v1.0.6.tar and dashboard.tar, with docker load on all three nodes, because the dashboard is deployed as a Deployment and there is no telling which node the pods will land on.

Cluster role binding:

kubectl create clusterrolebinding default --clusterrole=cluster-admin --serviceaccount=kube-system:default --namespace=kube-system

The deployment yaml is shown below (it is fairly long); apply it with kubectl apply -f dashboard.yml:

[root@master ~]# cat dashboard.yml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Get the login token:

kubectl describe secrets $(kubectl describe sa default -n kube-system | grep Mountable | awk 'NR == 2 {next} {print $3}') -n kube-system

Login URL: https://nodeip:30008
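
Before opening the URL it can help to confirm the pods are Running and the Service really exposes NodePort 30008, for example:

kubectl -n kubernetes-dashboard get pods,svc -o wide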

What about logging in with a kubeconfig file instead? That file is bootstrap.kubeconfig; copy it to your desktop and choose it on the login page. Its contents should look like this:

[root@master ~]# cat /opt/kubernetes/cfg/bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVRDYzSGpYeFRiM3EzdGZUeEM4QjZwalUzYUVRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl5TURneU56QXhNVFV3TUZvWERUSTNNRGd5TmpBeE1UVXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeitLb3pMQlVEYXNQTmxKc2lMSXoKZHRUR0M4a2tvNnVJZjVUSUNkY3pPemJyaks1TVJ4UzYrd2ZwVzNHOGFtN2J1QlcvRE1hcW15dGNlbEwxd0VpMwoxUGZTNE9oWXRlczUwWU4wMkZrZ2JCYmVBSVN3NnJ5cnRadGFxdWhZeHUwQjlOLzVuZGhETUx2ZFhFV1NYYWZrCmtWQXNnTFZ0dmNPMCtKVUt3OGE5eFJSRTkyWThZYXZ0azN4M3VBU2hTejUrS3FQZ1V6Q2x2a2N4UUpXVFBiTkUKOEpERXlaY0I0ay8za0NuOGtsREc3Um9Wck1hcHJ6Z3lNQkNVOEgzS1hsM0FJdkFYNGVQQTFOVGJzbUhWaDdCcgpmeWdLT0x5RHA3OUswbkp1c0NtY1JmRGJ4TWJMVCtNeU01Y0NFcm1LMkVnSTRuYXIyMndQQU5kemRTb1dIbDljCkp3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVZG1mUXNkMy85czVoKzl0V1dDMHhBL1htSENZd0h3WURWUjBqQkJnd0ZvQVVkbWZRc2QzLwo5czVoKzl0V1dDMHhBL1htSENZd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJR0lPa0xDMlpxL3Y4dStNelVyCkpVRUY4SFV2QzdldkdXV3ZMMmlqd3dxTnJMWE1XYWd6UWJxcDM5UTVlRnRxdngzSWEveFZFZ0ZTTnBSRjNMNGYKN0VYUlYxRXpJVlUxeThaSnZzVXl1aFQyVENCQ3UwMkdvSWc1VGlJNzMwak5xQllMRGh6SVJKLzBadEtTQlcwaApIUEo3eGRybnZSdnY3WG9uT1dCbldBTUhJY0N0TzNLYlovbXY1VHBoTnkzWHJMSTdRaFVvWVlQSXN5N1BvUjhVCm9WVm80RkRRUDFPYXhGSzljSE1DNWNuLzFNSnRZUGpVRzg5RldEc01HbWVVZVZ1cnhsVStkVlFUMUZzOWJxanoKaDJWaHNtanFCK3RCbjVGdENOaEY5STVxYlJuMWJmTGRpQzl2QzJ3U00xSDZQVWRxeHB6ZlRaVHhSbEptdmtjZAo1ZTQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.217.16:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940

Two things deserve special attention: one is the token, which is the one defined in token.csv; the other is the kubelet-bootstrap user. Logging in with this config file may fail with an insufficient-permissions error.

In that case, delete the user's existing cluster role binding and re-bind it to cluster-admin:

kubectl delete clusterrolebindings kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=cluster-admin  --user=kubelet-bootstrap
