Kubernetes Log Collection Solution: EFK Pod Deployment

2024-08-20

EFK Overview

EFK is short for Elasticsearch, Fluentd, and Kibana. This walkthrough uses the stack to collect logs from a Kubernetes cluster.

Fluentd is an open-source log collection tool. Its main advantages:

  • Unified logging with JSON
    • It structures data as JSON wherever possible, which makes downstream processing straightforward.
  • Pluggable architecture
    • Plugins allow its functionality to be extended.
  • Low resource requirements
    • Written in a combination of C and Ruby, it runs on minimal system resources.
  • Built-in reliability
    • Memory- and file-based buffering guards against data loss between nodes.
    • Robust failover support; can be configured for high availability.

EFK Deployment

Get the EFK manifests

Kubernetes versions after 1.23 no longer ship the fluentd-elasticsearch addon, so fetch the manifests from the release-1.23 branch:
https://github.com/kubernetes/kubernetes/tree/release-1.23/cluster/addons/fluentd-elasticsearch

[root@harbor ~]# git clone -b release-1.23 https://github.com/kubernetes/kubernetes.git
[root@harbor ~]# cd kubernetes/cluster/addons/fluentd-elasticsearch/
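
Cloning the full Kubernetes repository pulls down the entire project history. If you only need the addon manifests, a shallow clone of the same branch is enough (a sketch; the resulting directory layout is identical):

[root@harbor ~]# git clone --depth 1 -b release-1.23 https://github.com/kubernetes/kubernetes.git
[root@harbor ~]# cd kubernetes/cluster/addons/fluentd-elasticsearch/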

Deploy Elasticsearch

Edit the Service: comment out clusterIP: None and change the type to NodePort so Elasticsearch can be reached from outside the cluster.

[root@harbor fluentd-elasticsearch]# vim es-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  #clusterIP: None # commented out
  ports:
    - name: db
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  publishNotReadyAddresses: true
  selector:
    k8s-app: elasticsearch-logging
  sessionAffinity: None
  type: NodePort # changed

Deploy
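
The Elasticsearch manifests target the logging namespace, which must exist before they are applied. The release-1.23 addon directory ships a create-logging-namespace.yaml for this; if your checkout does not include it, creating the namespace directly works just as well (a sketch):

[root@harbor fluentd-elasticsearch]# kubectl apply -f create-logging-namespace.yaml
# or equivalently:
[root@harbor fluentd-elasticsearch]# kubectl create namespace logging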

[root@harbor fluentd-elasticsearch]# kubectl apply -f es-statefulset.yaml 
[root@harbor fluentd-elasticsearch]# kubectl apply -f es-service.yaml

Check status

[root@harbor fluentd-elasticsearch]# kubectl get sts -n logging
NAME                    READY   AGE
elasticsearch-logging   2/2     4m35s

[root@harbor fluentd-elasticsearch]# kubectl get pods -n logging
NAME                      READY   STATUS    RESTARTS        AGE
elasticsearch-logging-0   1/1     Running   1 (2m55s ago)   4m37s
elasticsearch-logging-1   1/1     Running   0               2m17s

Verify

[root@k8s-master01 ~]# kubectl get svc -n logging
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
elasticsearch-logging   NodePort   10.107.43.141   <none>        9200:31688/TCP,9300:32570/TCP   6m32s

# from a k8s node
[root@k8s-master01 ~]# curl 10.107.43.141:9200/_cat/health?pretty
1724045254 05:27:34 kubernetes-logging green 2 2 0 0 0 0 0 0 - 100.0%

# from an external node
[root@harbor fluentd-elasticsearch]# curl 192.168.77.91:31688/_cat/health?pretty
1724045185 05:26:25 kubernetes-logging green 2 2 0 0 0 0 0 0 - 100.0%
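
A node-level check confirms that both StatefulSet replicas joined the same cluster (a sketch; substitute your own ClusterIP or NodePort address):

[root@k8s-master01 ~]# curl 10.107.43.141:9200/_cat/nodes?v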

Deploy Fluentd

[root@harbor fluentd-elasticsearch]# kubectl apply -f fluentd-es-configmap.yaml 
[root@harbor fluentd-elasticsearch]# kubectl apply -f fluentd-es-ds.yaml
[root@harbor fluentd-elasticsearch]# kubectl get pods -n logging
NAME                      READY   STATUS    RESTARTS      AGE
elasticsearch-logging-0   1/1     Running   1 (24h ago)   24h
elasticsearch-logging-1   1/1     Running   0             24h
fluentd-es-v3.1.1-52nr5   1/1     Running   0             23h
fluentd-es-v3.1.1-6cp4t   1/1     Running   0             23h
fluentd-es-v3.1.1-zgq5k   1/1     Running   0             23h
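
Once the DaemonSet pods are running, Fluentd starts shipping container logs into Elasticsearch. The stock fluentd-es-configmap.yaml uses logstash_format, so daily logstash-YYYY.MM.DD indices should appear; a quick check against the NodePort exposed earlier (a sketch):

[root@harbor fluentd-elasticsearch]# curl 192.168.77.91:31688/_cat/indices?v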

Deploy Kibana

[root@harbor fluentd-elasticsearch]# vim kibana-deployment.yaml 
...
    spec:
    # comment out these three lines
      #securityContext:
      #  seccompProfile:
      #    type: RuntimeDefault
      containers:
        - name: kibana-logging
          image: docker.elastic.co/kibana/kibana-oss:7.10.2
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch-logging:9200
            - name: SERVER_NAME
              value: kibana-logging
            # comment out these two lines
            #- name: SERVER_BASEPATH
            #  value: /api/v1/namespaces/logging/services/kibana-logging/proxy
            - name: SERVER_REWRITEBASEPATH
              value: "false"
...
[root@harbor fluentd-elasticsearch]# vim kibana-service.yaml 

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
  type: NodePort # added
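
With both files edited, apply the Kibana Deployment and Service before checking pod status:

[root@harbor fluentd-elasticsearch]# kubectl apply -f kibana-deployment.yaml
[root@harbor fluentd-elasticsearch]# kubectl apply -f kibana-service.yaml
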
[root@harbor fluentd-elasticsearch]# kubectl get pod -n logging | grep kibana
kibana-logging-58887f87d4-8xhxs   1/1     Running   0             2m43s

[root@harbor fluentd-elasticsearch]# kubectl get svc -n logging
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
elasticsearch-logging   ClusterIP   None             <none>        9200/TCP,9300/TCP   23h
kibana-logging          NodePort    10.105.236.184   <none>        5601:30333/TCP      2m47s
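
Kibana is now reachable on any node at the NodePort shown above (30333 here). Open http://<node-ip>:30333 in a browser, or probe the status API from the shell (a sketch; /api/status is a standard Kibana 7.x endpoint):

[root@harbor fluentd-elasticsearch]# curl -s 192.168.77.91:30333/api/status

In the UI, you would typically create an index pattern matching logstash-* under Stack Management and then browse the collected container logs in Discover.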
