
[Cloud Native | Kubernetes Series] Monitoring a K8s Cluster with Prometheus


1. Deploying node-exporter as a DaemonSet

Map the node's /proc, /sys, and / into the node-exporter container so that it can report the node's state:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring 
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      containers:
      - image: harbor.intra.com/prometheus/node-exporter:v1.3.1
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          protocol: TCP
          name: metrics
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host
          name: rootfs
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
      hostNetwork: true
      hostPID: true
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring 
spec:
  type: NodePort
  ports:
  - name: http
    port: 9100
    nodePort: 39100
    protocol: TCP
  selector:
    k8s-app: node-exporter

2. Deploying node-exporter

Create the namespace first, then apply the YAML:

kubectl create ns monitoring
kubectl apply -f case2-daemonset-deploy-node-exporter.yaml

node-exporter is now running on every node via the DaemonSet:

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case# kubectl get pods -n monitoring -o wide
NAME                  READY   STATUS    RESTARTS   AGE    IP               NODE             NOMINATED NODE   READINESS GATES
node-exporter-gmkmh   1/1     Running   0          4m8s   192.168.31.113   192.168.31.113   <none>           <none>
node-exporter-hjd4c   1/1     Running   0          4m8s   192.168.31.102   192.168.31.102   <none>           <none>
node-exporter-mg72x   1/1     Running   0          4m8s   192.168.31.101   192.168.31.101   <none>           <none>
node-exporter-vvhtw   1/1     Running   0          4m8s   192.168.31.112   192.168.31.112   <none>           <none>
node-exporter-wxkw9   1/1     Running   0          4m8s   192.168.31.111   192.168.31.111   <none>           <none>
node-exporter-z4w6t   1/1     Running   0          4m8s   192.168.31.103   192.168.31.103   <none>           <none>
node-exporter-zk6c2   1/1     Running   0          4m8s   192.168.31.114   192.168.31.114   <none>           <none>

3. Creating the Prometheus ConfigMap

prometheus-cfg.yaml

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitoring 
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role:  node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name



    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
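The address rewrites in the relabel rules above can be sanity-checked locally with `sed` (the IP and port values below are made-up examples; Prometheus anchors its relabel regexes, hence the `^...$`):

```shell
# Job 'kubernetes-node': rewrite the kubelet address (port 10250)
# to the node-exporter hostPort 9100.
echo '192.168.31.111:10250' | sed -E 's/^(.*):10250$/\1:9100/'
# -> 192.168.31.111:9100

# Job 'kubernetes-service-endpoints': the two source labels are joined
# with the default ';' separator; keep the host part of __address__ and
# append the port from the prometheus.io/port annotation. (sed lacks
# non-capturing groups, so $2 in the relabel rule corresponds to \3 here.)
echo '10.68.0.5:443;9100' | sed -E 's/^([^:]+)(:[0-9]+)?;([0-9]+)$/\1:\3/'
# -> 10.68.0.5:9100
```

This is why a Service only needs the `prometheus.io/scrape: "true"` annotation (plus optionally a port annotation) to be picked up by the endpoints job.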

Apply the ConfigMap:

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case# kubectl apply -f prometheus-cfg.yaml
configmap/prometheus-config created

Create a directory on node 113 (k8s-node-3) to back the prometheus-storage-volume hostPath:

root@k8s-node-3:~# mkdir -p /data/prometheusdata
chmod 777 /data/prometheusdata

Create a ServiceAccount for monitoring:

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case# kubectl create serviceaccount monitor -n monitoring
serviceaccount/monitor created

Grant the account service-discovery permissions (cluster-admin is used here for simplicity; a ClusterRole limited to get/list/watch on the discovered resources would be tighter in production):

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case# kubectl create clusterrolebinding monitor-clusterrolebinding -n monitoring --clusterrole=cluster-admin --serviceaccount=monitoring:monitor
clusterrolebinding.rbac.authorization.k8s.io/monitor-clusterrolebinding created
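Whether the binding took effect can be checked with `kubectl auth can-i`, impersonating the ServiceAccount. This is a sketch that requires access to the cluster:

```shell
# Both commands should print "yes" once the binding is in place.
kubectl auth can-i list nodes --as=system:serviceaccount:monitoring:monitor
kubectl auth can-i list endpoints --as=system:serviceaccount:monitoring:monitor
```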

4. Creating the Prometheus Deployment

prometheus-deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      nodeName: 192.168.31.113
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: harbor.intra.com/prometheus/prometheus:v2.32.1
        imagePullPolicy: IfNotPresent
        command:
          - prometheus
          - --config.file=/etc/prometheus/prometheus.yml
          - --storage.tsdb.path=/prometheus
          - --storage.tsdb.retention=720h
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
            items:
              - key: prometheus.yml
                path: prometheus.yml
                mode: 0644
        - name: prometheus-storage-volume
          hostPath:
           path: /data/prometheusdata
           type: Directory

Apply the Deployment:

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case# kubectl apply -f prometheus-deployment.yaml
deployment.apps/prometheus-server created

5. Creating the Prometheus Service

Expose NodePort 30090 on every node and map it to Prometheus port 9090:

---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090
      protocol: TCP
  selector:
    app: prometheus
    component: server

Check the port exposed by the Service and its backend endpoints:

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case1/prometheus-files/case# kubectl get svc -n monitoring 
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
node-exporter   NodePort   10.200.150.142   <none>        9100:39100/TCP   11m
prometheus      NodePort   10.200.241.145   <none>        9090:30090/TCP   4m1s
root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case1/prometheus-files/case# kubectl get ep -n monitoring 
NAME            ENDPOINTS                                                                 AGE
node-exporter   192.168.31.101:9100,192.168.31.102:9100,192.168.31.103:9100 + 4 more...   11m
prometheus      172.100.76.132:9090                                                       4m27s

The metrics collected from the cluster can now be browsed through the Service's NodePort.

6. Deploying cAdvisor

cAdvisor collects information about all containers running on a host, and it also provides a basic query UI and an HTTP endpoint so other components can scrape its data.
https://github.com/google/cadvisor

6.1 Running the cAdvisor image with Docker

VERSION=v0.36.0
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:$VERSION

6.2 Deploying cAdvisor in K8s

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: cAdvisor
  template:
    metadata:
      labels:
        app: cAdvisor
    spec:
      tolerations:    # tolerate the master's NoSchedule taint
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      hostNetwork: true
      restartPolicy: Always   # restart policy
      containers:
      - name: cadvisor
        image: harbor.intra.com/prometheus/cadvisor:v0.45.0
        imagePullPolicy: IfNotPresent  # pull the image only if absent locally
        ports:
        - containerPort: 8080
        volumeMounts:
          - name: root
            mountPath: /rootfs
          - name: run
            mountPath: /var/run
          - name: sys
            mountPath: /sys
          - name: docker
            mountPath: /var/lib/docker
      volumes:
      - name: root
        hostPath:
          path: /
      - name: run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /var/lib/docker

Apply cAdvisor:

root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case1/prometheus-files/case# kubectl apply -f case1-daemonset-deploy-cadvisor.yaml 
daemonset.apps/cadvisor created
root@k8s-master-01:/opt/k8s-data/yaml/prometheus-files/case1/prometheus-files/case# kubectl get pods -n monitoring 
NAME                                 READY   STATUS    RESTARTS   AGE
cadvisor-5hgfg                       1/1     Running   0          7s
cadvisor-8f4dz                       1/1     Running   0          7s
cadvisor-8l7tx                       1/1     Running   0          7s
cadvisor-crnnf                       1/1     Running   0          7s
cadvisor-js4lx                       1/1     Running   0          7s
cadvisor-vznfs                       1/1     Running   0          7s
cadvisor-x2rlb                       1/1     Running   0          7s
node-exporter-4q2k9                  1/1     Running   0          46m
node-exporter-fn464                  1/1     Running   0          46m
node-exporter-fz5d5                  1/1     Running   0          46m
node-exporter-jd27l                  1/1     Running   0          46m
node-exporter-s8gdn                  1/1     Running   0          46m
node-exporter-sfsvj                  1/1     Running   0          46m
node-exporter-t5tlr                  1/1     Running   0          46m
prometheus-server-74c8d6675f-85d4t   1/1     Running   0          40m

The cAdvisor UI is now reachable on port 8080 of each node.

Visiting xxx:8080/metrics returns the raw collected metrics.

6.3 Feeding cAdvisor data into Prometheus

6.3.1 Configuring a binary-deployed Prometheus

Edit /apps/prometheus/prometheus.yml and add:

  - job_name: "prometheus-pods"
    static_configs:
      - targets: ["192.168.31.101:8080","192.168.31.102:8080","192.168.31.103:8080","192.168.31.111:8080","192.168.31.112:8080","192.168.31.113:8080","192.168.31.114:8080"]
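Before restarting the service, the edited file can be validated with promtool, which ships alongside the Prometheus binary (the path below assumes the install location used above):

```shell
# Prints "SUCCESS" for a valid file; exits non-zero and names the
# offending line on a syntax error.
/apps/prometheus/promtool check config /apps/prometheus/prometheus.yml
```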

Restart the service:

root@prometheus-2:/apps/prometheus# systemctl restart prometheus.service 

After the restart, the K8s pod metrics are visible in Prometheus.

6.3.2 Importing into Grafana

Import dashboard ID 14282 into Grafana.

6.4 Feeding cAdvisor data into the in-cluster K8s Prometheus

6.4.1 K8s Prometheus configuration

The K8s Prometheus ConfigMap already contains the following service-discovery job:

    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role:  node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
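No extra job is needed for cAdvisor: every scrape is sent to the API server (`kubernetes.default.svc:443`), and the discovered node name is rewritten into a proxy path. The path construction (relabel regex `(.+)` with replacement `/api/v1/nodes/${1}/proxy/metrics/cadvisor`) can be reproduced locally with `sed`; the node name below is one of this cluster's nodes:

```shell
# Turn a discovered node name into the API-server proxy metrics path.
echo '192.168.31.111' | sed -E 's|^(.+)$|/api/v1/nodes/\1/proxy/metrics/cadvisor|'
# -> /api/v1/nodes/192.168.31.111/proxy/metrics/cadvisor
```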

With cAdvisor deployed, Prometheus has already picked up its metrics through this job.

6.4.2 Importing into Grafana

Import dashboard ID 14282 into Grafana.
