Monitoring RabbitMQ with kube-prometheus


Add the Helm repository

[root@harbor monitoring]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@harbor monitoring]# helm repo update
[root@harbor ~]# helm search repo rabbitmq --versions
bitnami/rabbitmq                                    14.6.5          3.13.6      RabbitMQ is an open source general-purpose mess...
bitnami/rabbitmq                                    12.0.0          3.12.0      RabbitMQ is an open source general-purpose mess...
bitnami/rabbitmq                                    11.16.2         3.11.18     RabbitMQ is an open source general-purpose mess...
bitnami/rabbitmq                                    11.16.1         3.11.17     RabbitMQ is an open source general-purpose mess...
...

The bitnami/rabbitmq 14.6.5 chart has problems with its health checks, so version 11.16.2 is pinned here.

[root@harbor monitoring]# helm pull bitnami/rabbitmq --version=11.16.2
[root@harbor monitoring]# tar xf rabbitmq-11.16.2.tgz 
[root@harbor monitoring]# cd rabbitmq

RabbitMQ Helm chart configuration

  • Configure persistent storage, the replica count, and so on.
  • For the first deployment it is recommended to edit the settings in values.yaml directly rather than passing --set, so that later upgrades do not have to repeat the same options (see the sketch below).
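A minimal sketch of how this pays off, assuming the release name rabbitmq and namespace test used later in this post:

# First install and every later upgrade read the same edited values.yaml,
# so nothing has to be repeated on the command line
helm install rabbitmq ./ -f values.yaml -n test
helm upgrade rabbitmq ./ -f values.yaml -n test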

Set the administrator password

Option 1: set it in values.yaml

[root@harbor rabbitmq]# vim values.yaml 

# around line 127
auth:
  ## @param auth.username RabbitMQ application username
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/rabbitmq#environment-variables
  ##
  username: admin # modified
  ## @param auth.password RabbitMQ application password
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/rabbitmq#environment-variables
  ##
  password: "admin@mq" # modified
  ## @param auth.securePassword Whether to set the RabbitMQ password securely. This is incompatible with loading external RabbitMQ definitions and 'true' when not setting the auth.password parameter.
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/rabbitmq#environment-variables
  ##
  securePassword: true
  ## @param auth.existingPasswordSecret Existing secret with RabbitMQ credentials (existing secret must contain a value for `rabbitmq-password` key or override with setting auth.existingSecretPasswordKey)
  ## e.g:
  ## existingPasswordSecret: name-of-existing-secret
  ##
  existingPasswordSecret: ""
  ## @param auth.existingSecretPasswordKey [default: rabbitmq-password] Password key to be retrieved from existing secret
  ## NOTE: ignored unless `auth.existingSecret` parameter is set
  ##
  existingSecretPasswordKey: ""
  ## @param auth.enableLoopbackUser If enabled, the user `auth.username` can only connect from localhost
  ##
  enableLoopbackUser: false
  ## @param auth.erlangCookie Erlang cookie to determine whether different nodes are allowed to communicate with each other
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/rabbitmq#environment-variables
  ##
  erlangCookie: "secretcookie" # modified

Option 2: pass it with --set on the command line

--set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie

Configure RabbitMQ force boot

When persistence is enabled and all RabbitMQ pods go down at the same time, the cluster cannot start again on its own, so it is worth enabling clustering.forceBoot up front.
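As a sanity check before installing, the rendered StatefulSet can be inspected; as far as I can tell, the Bitnami image consumes this setting through a RABBITMQ_FORCE_BOOT environment variable, so that is what to look for (verify against your chart version):

# Render the chart locally and confirm the force-boot flag is passed through
helm template rabbitmq ./ -f values.yaml -n test | grep -A 1 FORCE_BOOT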

# around line 243
clustering:
  ## @param clustering.enabled Enable RabbitMQ clustering
  ##
  enabled: true
  ## @param clustering.name RabbitMQ cluster name
  ## If not set, a name is generated using the common.names.fullname template
  ##
  name: ""
  ## @param clustering.addressType Switch clustering mode. Either `ip` or `hostname`
  ##
  addressType: hostname
  ## @param clustering.rebalance Rebalance master for queues in cluster when new replica is created
  ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
  ##
  rebalance: false
  ## @param clustering.forceBoot Force boot of an unexpectedly shut down cluster (in an unexpected order).
  ## forceBoot executes 'rabbitmqctl force_boot' to force boot cluster shut down unexpectedly in an unknown order
  ## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
  ##
  forceBoot: true # modified

Configure the time zone

# around line 298
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"

Set the replica count

# around line 631
replicaCount: 3 # default is 1, changed to 3

Configure persistent storage

# around line 901
persistence:
  ## @param persistence.enabled Enable RabbitMQ data persistence using PVC
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for RabbitMQ data volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client" # must be set if there is no default StorageClass
  ## @param persistence.selector Selector to match an existing Persistent Volume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  ##
  selector: {}
  ## @param persistence.accessModes PVC Access Modes for RabbitMQ data volume
  ##
  accessModes:
    - ReadWriteOnce
  ## @param persistence.existingClaim Provide an existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  ##
  existingClaim: ""
  ## @param persistence.mountPath The path the volume will be mounted at
  ## Note: useful when using custom RabbitMQ images
  ##
  mountPath: /opt/bitnami/rabbitmq/.rabbitmq/mnesia
  ## @param persistence.subPath The subdirectory of the volume to mount to
  ## Useful in dev environments and one PV for multiple services
  ##
  subPath: ""
  ## @param persistence.size PVC Storage Request for RabbitMQ data volume
  ## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well
  ##
  size: 8Gi 

Service notes

  • By default a ClusterIP Service exposes 5672 (AMQP), 15672 (web management UI) and other ports for use inside the cluster; external access is also possible and is covered later.
  • Configuring a NodePort directly in values.yaml is not recommended, as it makes later adjustments less flexible; if the AMQP port must be reachable from outside, a separate Service can be created instead (see the sketch below).
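A minimal sketch of such a separate NodePort Service, kept outside the chart; the selector assumes the standard Bitnami chart labels and the nodePort value is arbitrary, so verify both against your deployment (kubectl get pod -n test --show-labels):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-amqp-nodeport   # hypothetical name, managed outside the chart
  namespace: test
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: rabbitmq   # assumed default Bitnami chart labels
    app.kubernetes.io/name: rabbitmq
  ports:
    - name: amqp
      port: 5672
      targetPort: 5672
      nodePort: 30672                      # any free port in the NodePort range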

Deploy RabbitMQ

[root@harbor rabbitmq]# kubectl create namespace test
namespace/test created
[root@harbor rabbitmq]# helm install rabbitmq ./ -f values.yaml -n test
NAME: rabbitmq
LAST DEPLOYED: Tue Aug 13 02:01:48 2024
NAMESPACE: test
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: rabbitmq
CHART VERSION: 14.6.5
APP VERSION: 3.13.6

** Please be patient while the chart is being deployed **

Credentials:
    echo "Username      : admin"
    echo "Password      : $(kubectl get secret --namespace test rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 -d)"
    echo "ErLang Cookie : $(kubectl get secret --namespace test rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 -d)"

Note that the credentials are saved in persistent volume claims and will not be changed upon upgrade or reinstallation unless the persistent volume claim has been deleted. If this is not the first installation of this chart, the credentials may not be valid.
This is applicable when no passwords are set and therefore the random password is autogenerated. In case of using a fixed password, you should specify it when upgrading.
More information about the credentials may be found at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases.

RabbitMQ can be accessed within the cluster on port 5672 at rabbitmq.test.svc.cluster.local

To access for outside the cluster, perform the following steps:

To Access the RabbitMQ AMQP port:

    echo "URL : amqp://127.0.0.1:5672/"
    kubectl port-forward --namespace test svc/rabbitmq 5672:5672

To Access the RabbitMQ Management interface:

    echo "URL : http://127.0.0.1:15672/"
    kubectl port-forward --namespace test svc/rabbitmq 15672:15672

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

If option 2 (--set) is used instead:

[root@harbor rabbitmq]# helm install rabbitmq ./ -f values.yaml -n test --set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie
[root@harbor rabbitmq]# helm list -n test
NAME        NAMESPACE   REVISION    UPDATED                                 STATUS      CHART               APP VERSION
rabbitmq    test        1           2024-08-14 13:54:50.266317226 -0400 EDT deployed    rabbitmq-11.16.2    3.11.18

[root@harbor rabbitmq]# kubectl get sts -n test
NAME       READY   AGE
rabbitmq   3/3     11h

[root@harbor rabbitmq]# kubectl get pod -n test
NAME         READY   STATUS    RESTARTS   AGE
rabbitmq-0   1/1     Running   0          11h
rabbitmq-1   1/1     Running   0          11h
rabbitmq-2   1/1     Running   0          11h

[root@harbor rabbitmq]# kubectl get svc -n test
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                 AGE
rabbitmq            ClusterIP   10.96.93.81   <none>        5672/TCP,4369/TCP,25672/TCP,15672/TCP   11h
rabbitmq-headless   ClusterIP   None          <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   11h
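
With persistence enabled, each replica should also have a Bound PVC; a quick check (the data-rabbitmq-N claim names come from the chart's default volumeClaimTemplate and are an assumption here):

kubectl get pvc -n test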

Check the cluster status

[root@harbor rabbitmq]# kubectl exec -it rabbitmq-0 -n test -- bash
I have no name!@rabbitmq-0:/$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq-0.rabbitmq-headless.test.svc.cluster.local ...
Basics

Cluster name: rabbit@rabbitmq-0.rabbitmq-headless.test.svc.cluster.local
Total CPU cores available cluster-wide: 12

List policies (no mirroring policy is listed by default)

I have no name!@rabbitmq-0:/$ rabbitmqctl list_policies
Listing policies for vhost "/" ...

Configure an Ingress for external access to the RabbitMQ dashboard

[root@harbor rabbitmq]# cat rabbitmq-dashboard-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-dashboard                   # custom Ingress name
  namespace: test
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: rabbitmq-dashboard.sundayhk.com                   # custom domain name
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: rabbitmq     # the Service name created above
            port:
              number: 15672
[root@harbor rabbitmq]# kubectl apply -f rabbitmq-dashboard-ingress.yaml 

Configure a hosts entry pointing to the IP of any node that runs an ingress-nginx pod

[root@harbor rabbitmq]# kubectl get pod -n ingress-nginx -owide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
ingress-nginx-controller-n7l8m   1/1     Running   0          14d   192.168.77.92   k8s-master02   <none>           <none>
ingress-nginx-controller-nrk4q   1/1     Running   0          14d   192.168.77.91   k8s-master01   <none>           <none>
ingress-nginx-controller-t47ph   1/1     Running   0          14d   192.168.77.93   k8s-master03   <none>           <none>

SwitchHosts is used here to add the entry:

192.168.77.91 rabbitmq-dashboard.sundayhk.com

Visit http://rabbitmq-dashboard.sundayhk.com
The username and password are admin / admin@mq, as configured in values.yaml.
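
If the page does not come up, a quick check that the Ingress itself routes the host correctly, using one of the ingress-nginx node IPs listed above (an HTTP 200 is expected once everything is in place):

curl -s -o /dev/null -w "%{http_code}\n" -H "Host: rabbitmq-dashboard.sundayhk.com" http://192.168.77.91/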


Configure mirrored queues (high availability)

Mirrored mode turns the queues to be consumed into mirrored queues that exist on several nodes, which gives RabbitMQ high availability. Message bodies are actively synchronized between the mirror nodes, instead of being read on demand when a consumer fetches them, as in normal mode. The downside is that this synchronization traffic inside the cluster consumes a lot of network bandwidth.

[root@harbor rabbitmq]# kubectl exec -it rabbitmq-0 -n test -- bash
I have no name!@rabbitmq-0:/$ rabbitmqctl list_policies
Listing policies for vhost "/" ...

# apply a mirroring policy to all queues
I have no name!@rabbitmq-0:/$ rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

I have no name!@rabbitmq-0:/$ rabbitmqctl list_policies
Listing policies for vhost "/" ...
vhost   name    pattern apply-to    definition  priority
/   ha-all  ^   all {"ha-mode":"all","ha-sync-mode":"automatic"}    0
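
To confirm that individual queues actually pick up mirrors once they exist, the mirror PIDs can be listed per queue; slave_pids is the classic-mirroring field, so this sketch applies to classic queues only:

rabbitmqctl list_queues name policy slave_pids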


Deploy the RabbitMQ exporter

RabbitMQ connection information

url: http://rabbitmq.test.svc.cluster.local:15672
username: admin
password: admin@mq
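
Before wiring up the exporter, the URL and credentials can be verified from inside the cluster; a minimal sketch using a throwaway pod (the curlimages/curl image is an assumption, any image with curl will do):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n test -- \
  curl -s -u admin:admin@mq http://rabbitmq.test.svc.cluster.local:15672/api/overview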

DNS resolution test

[root@k8s-master01 ~]# dig -t a rabbitmq.test.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> -t a rabbitmq.test.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21272
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 634cadcb4ea944f0 (echoed)
;; QUESTION SECTION:
;rabbitmq.test.svc.cluster.local. IN    A

;; ANSWER SECTION:
rabbitmq.test.svc.cluster.local. 5 IN   A   10.96.93.81

;; Query time: 4 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Thu Aug 15 01:50:02 EDT 2024
;; MSG SIZE  rcvd: 119

[root@harbor rabbitmq]# helm search repo rabbitmq-exporter
NAME                                                CHART VERSION   APP VERSION DESCRIPTION                             
prometheus-community/prometheus-rabbitmq-exporter   1.12.1          v0.29.0     Rabbitmq metrics exporter for prometheus
[root@harbor rabbitmq]# helm pull prometheus-community/prometheus-rabbitmq-exporter
[root@harbor rabbitmq]# tar xf prometheus-rabbitmq-exporter-1.12.1.tgz 
[root@harbor rabbitmq]# cd prometheus-rabbitmq-exporter
[root@harbor prometheus-rabbitmq-exporter]# vim values.yaml 

# around line 37
loglevel: info
rabbitmq:
  url: http://rabbitmq.test.svc.cluster.local:15672 # modified
  user: admin # modified
  password: admin@mq # modified
  # If existingUserSecret is set then user is ignored
  existingUserSecret: ~
  existingUserSecretKey: username
  # If existingPasswordSecret is set then password is ignored
  existingPasswordSecret: ~
  existingPasswordSecretKey: password
  capabilities: bert,no_sort
  include_queues: ".*"
  include_vhost: ".*"
  skip_queues: "^$"
  skip_verify: "false"
  skip_vhost: "^$"
  exporters: "exchange,node,overview,queue"
  output_format: "TTY"
  timeout: 30
  max_queues: 0
  excludeMetrics: ""
  connection: "direct"
  # Enables overriding env vars using an external ConfigMap.
  configMapOverrideReference: ""

## Additional labels to set in the Deployment object. Together with standard labels from
## the chart
additionalLabels: {}

podLabels: {}

annotations: # remove the {} and uncomment the three lines below
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9419" # note: the value must be quoted

# Additional Environment variables
env: []
  # - name: GOMAXPROCS
  #   valueFrom:
  #     resourceFieldRef:
  #       resource: limits.cpu

prometheus:
  monitor:
    enabled: true # enable the ServiceMonitor
    additionalLabels: {}
    interval: 15s
    namespace: []
    metricRelabelings: []
    relabelings: []
    targetLabels: []
[root@harbor prometheus-rabbitmq-exporter]# helm install rabbitmq-exporter ./ -f values.yaml -n monitoring
NAME: rabbitmq-exporter
LAST DEPLOYED: Thu Aug 15 02:10:20 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus-rabbitmq-exporter,release=rabbitmq-exporter" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward $POD_NAME 8080:9419
  echo "Visit http://127.0.0.1:8080 to use your application"

[root@harbor prometheus-rabbitmq-exporter]# kubectl get pod -n monitoring | grep rabbitmq-exporter
rabbitmq-exporter-prometheus-rabbitmq-exporter-54cdc64bc6-5qjmf   1/1     Running   0          51s
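
To check that metrics are exposed and that a ServiceMonitor was created for kube-prometheus, something like the following can be used; the Deployment name is derived from the pod name above, and rabbitmq_up is, as far as I know, one of the metrics this exporter exposes:

# ServiceMonitor created because prometheus.monitor.enabled is true
kubectl get servicemonitor -n monitoring | grep rabbitmq
# Scrape the exporter directly
kubectl port-forward -n monitoring deploy/rabbitmq-exporter-prometheus-rabbitmq-exporter 9419:9419
# In a second terminal:
curl -s http://127.0.0.1:9419/metrics | grep rabbitmq_up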


Grafana

Dashboard template IDs:

  • RabbitMQ Monitoring: 4279
  • RabbitMQ Metrics: 4371


Error encountered

[root@harbor prometheus-rabbitmq-exporter]# helm install rabbitmq-exporter ./ -f values.yaml  -n monitoring
Error: INSTALLATION FAILED: 1 error occurred:
    * Deployment in version "v1" cannot be handled as a Deployment: json: cannot unmarshal number into Go struct field ObjectMeta.spec.template.metadata.annotations of type string

Kubernetes annotation values must be strings, so in values.yaml change prometheus.io/port: 9419 to prometheus.io/port: "9419".
