
When deploying the HiveMQ Platform on a Kubernetes cluster, it is crucial to ensure high availability and fault tolerance. One way to achieve this is by configuring pod anti-affinity rules, which guide the Kubernetes scheduler to distribute HiveMQ Platform pods across different nodes. This helps avoid multiple HiveMQ pods landing on the same node, reducing the risk of a single point of failure.

This article explains how to configure pod anti-affinity for the HiveMQ Platform to enhance its resilience.

📘 Instructions

Below is an example of a pod anti-affinity rule that you can add to the Helm values file (values.yaml) of your HiveMQ Platform deployment:

podScheduling:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - hivemq-platform
            topologyKey: "kubernetes.io/hostname"
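
If you want to preview how the chart renders this rule before deploying it, a dry render with helm template can help. The command below assumes the chart is installed from the hivemq repository and that the release is named broker, matching the deployment step further down; the grep simply narrows the output to the rendered anti-affinity stanza:

helm template broker hivemq/hivemq-platform --values values.yaml | grep -A 16 podAntiAffinity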

Explanation of the Configuration

  • podScheduling: Encapsulates the scheduling rules for HiveMQ Platform pods.

  • affinity: This section can hold both affinity and anti-affinity rules; the configuration shown here defines an anti-affinity rule.

  • podAntiAffinity: Expresses that pods should not be scheduled together on the same node, promoting their distribution across different nodes.

  • preferredDuringSchedulingIgnoredDuringExecution: Indicates a "soft" preference. Kubernetes will try to distribute pods according to this rule but will still schedule them even if the rule cannot be met. For a hard requirement, see the sketch after this list.

  • weight: 100: Represents the strength of the preference. A value of 100 is the highest possible, meaning Kubernetes will strongly prefer following this rule.

  • podAffinityTerm:

    • labelSelector: Specifies the pods to which the rule applies. In this case, it matches pods labeled with app.kubernetes.io/name=hivemq-platform.

    • topologyKey: "kubernetes.io/hostname": The rule applies across nodes (defined by their hostname). Kubernetes will try to avoid placing multiple HiveMQ Platform pods on the same node.
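
If a soft preference is not strict enough for your environment, the same rule can be expressed as a hard requirement with requiredDuringSchedulingIgnoredDuringExecution. The sketch below is a minimal variant of the configuration above; keep in mind that with a hard rule the scheduler leaves a pod Pending if no suitable node is available, so the cluster needs at least as many eligible nodes as HiveMQ Platform pods:

podScheduling:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - hivemq-platform
          topologyKey: "kubernetes.io/hostname"

Note that the required form lists the pod affinity terms directly and takes no weight. To spread pods across availability zones instead of individual nodes, use topology.kubernetes.io/zone as the topologyKey.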

Benefits of Pod Anti-Affinity

  • Increased Availability: By spreading the HiveMQ Platform pods across different nodes, the likelihood of all pods being affected by a single node failure is reduced.

  • Improved Fault Tolerance: If a node goes down, only the pods on that node are affected, while others continue to operate from different nodes.

  • Efficient Resource Utilization: Spreading the HiveMQ Platform pods across nodes also spreads their CPU, memory, and network load, reducing the risk of resource contention on any single node.

Applying the Configuration

  1. Locate the HiveMQ Platform Helm Chart’s values.yaml: Open the values file that configures your HiveMQ Platform deployment.

  2. Insert the Anti-Affinity Rule: Add the pod anti-affinity configuration shown above, including the top-level podScheduling block, to your values.yaml.

  3. Deploy Changes: Apply the changes using the updated values.yaml and the following command:

    helm upgrade --install broker hivemq/hivemq-platform --values values.yaml

    Here, “broker” is the example release name of the HiveMQ Platform. If you use a different release name in your environment, update the command accordingly.

  4. Verify Deployment: Confirm that the pods are being distributed across different nodes:

    kubectl get pods -o wide

    Check the NODE column to see if the HiveMQ Platform pods are scheduled on different nodes.
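
    To narrow the output to the HiveMQ Platform pods, you can filter by the same label that the anti-affinity rule matches on. The pod name in the second command is a placeholder; replace it with one of the names returned by the first command:

    kubectl get pods -l app.kubernetes.io/name=hivemq-platform -o wide
    kubectl get pod <hivemq-platform-pod-name> -o jsonpath='{.spec.affinity.podAntiAffinity}'

    The second command prints the anti-affinity stanza that was actually applied to the pod spec, which is a quick way to confirm that the change from values.yaml reached the running deployment.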

Conclusion

Configuring pod anti-affinity for the HiveMQ Platform is a simple yet effective way to enhance the resilience of your deployment on Kubernetes. By ensuring that pods are spread across different nodes, you can achieve better availability and fault tolerance, safeguarding your HiveMQ Platform against potential node failures.

For further assistance, please contact HiveMQ Support.
