Deploy HiveMQ using the HiveMQ Operator in an AKS cluster, and set up a load balancer with a static public IP.

This article provides a step-by-step guide to setting up a load balancer with a static public IP when HiveMQ is deployed with the HiveMQ Operator in an AKS cluster.

Instructions

The following steps walk you through obtaining a static public IP and configuring HiveMQ to use it.

  1. Deploy an AKS cluster.
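
     If you do not already have a cluster, you can create one with the Azure CLI. This is a minimal sketch; the resource group name hivemq-rg, cluster name hivemq-aks, and region eastus are illustrative assumptions, not values from this article:

       # Create a resource group, a small AKS cluster, and fetch its kubeconfig
       az group create --name hivemq-rg --location eastus
       az aks create --resource-group hivemq-rg --name hivemq-aks --node-count 3 --generate-ssh-keys
       az aks get-credentials --resource-group hivemq-rg --name hivemq-aks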

  2. Create an Application Gateway

    1. Log in to the Azure portal at https://portal.azure.com.

    2. Click on the "Create a resource" link.

    3. Search for "Application Gateway" in the search box and select "Application Gateway."

    4. Choose the appropriate resource group for your deployment, for example “Application Gateway for containers”.

    5. Provide a name and region for the application gateway.

    6. Click the "Next" button.

    7. Create a frontend with the name "CC."

    8. Create an association with the name "test."

    9. Click the "Review + create" button.

    10. Review the configurations and click the "Create" button.

    11. Wait for the deployment to complete. This will create the application gateway along with a static public IP.
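
     Optionally, you can confirm from the CLI that the gateway was deployed. A sketch, assuming the hypothetical resource group hivemq-rg from earlier:

       # List application gateways in the resource group
       az network application-gateway list --resource-group hivemq-rg -o table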

  3. Associate Public IP with Resources

    1. Go to the resource group where the application gateway is deployed.

    2. Look for the resource of type "Public IP address" in the list of resources and open it to continue the configuration.

    3. Click the "Associate IP" button.

    4. Select the appropriate resource to associate it with and click "Save."

    5. Copy the static public IP address for use in the next steps.
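
     Instead of copying the address from the portal, you can also read it with the CLI. A sketch, assuming the hypothetical resource group hivemq-rg and the public IP resource name shown in your portal:

       # Print only the IP address of the public IP resource
       az network public-ip show --resource-group hivemq-rg --name <public-ip-name> --query ipAddress -o tsv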

  4. Note: At this point, you have successfully deployed an AKS cluster and created an application gateway with a static public IP.

  5. Deploy HiveMQ using the following values.yaml

    1. Save the following configuration as values.yaml:

       global:
         rbac:
           create: true
           # Create a PodSecurityPolicy, cluster role, role binding and service account
           # for the HiveMQ pods and assign the service account to them.
           pspEnabled: false
       monitoring:
         enabled: true
         dedicated: true
       operator:
         admissionWebhooks:
           # Enable the admission hook
           enabled: false
       kube-prometheus-stack:
         grafana:
           service:
             type: LoadBalancer
             annotations:
               nginx.ingress.kubernetes.io/affinity: "cookie"
               service.beta.kubernetes.io/azure-load-balancer-ipv4: "<this should be your public ip>"
       hivemq:
         nodeCount: "2"
         cpu: "2"
         memory: "2Gi"
         env:
           - name: HIVEMQ_CONTROL_CENTER_USER
             value: "admin"
           - name: HIVEMQ_CONTROL_CENTER_PASSWORD
             value: "8f5c77e8dc7879871efaf8578049d8fcacbba549a6c66d6c5525748e4305446d"
         ports:
           - name: "mqtt"
             port: 1883
             expose: false
             patch:
               - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
               # If you want Kubernetes to expose the MQTT port to external traffic
               # - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
           - name: "cc"
             port: 8080
             expose: false
             patch:
               - '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
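
       Note: the HIVEMQ_CONTROL_CENTER_PASSWORD value is not a plain-text password but a SHA-256 hash of the username concatenated with the password. A sketch for generating your own hash, assuming the user admin and a hypothetical password yourpassword:

         # SHA-256 of "<username><password>", here "admin" + "yourpassword"
         echo -n "adminyourpassword" | sha256sum
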
    2. helm upgrade --install -f values.yaml hivemq-2406 hivemq/hivemq-operator -n hivemq
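
       The command above assumes the hivemq namespace exists (create it with kubectl create namespace hivemq if needed) and that the HiveMQ chart repository is already registered. If it is not, add and refresh it first:

         helm repo add hivemq https://hivemq.github.io/helm-charts
         helm repo update
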
    3. Verify that the pods are running as expected.
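
       A quick way to check, assuming the hivemq namespace used above:

         kubectl get pods -n hivemq

       The operator pod and the HiveMQ broker pods (two, per the nodeCount above) should reach the Running state before you continue.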

    4. Now deploy the HiveMQ service with type LoadBalancer, using the following manifest.

    5. Service manifest:

       kind: Service
       apiVersion: v1
       metadata:
         name: hivemq-service
         annotations:
           nginx.ingress.kubernetes.io/affinity: "cookie"
           service.beta.kubernetes.io/azure-load-balancer-ipv4: "<this is your public ip>"
         labels:
           app: hivemq
           hivemq-cluster: hivemq-2406
       spec:
         sessionAffinity: "ClientIP"
         selector:
           hivemq-cluster: hivemq-2406
         ports:
           - name: mqtt
             protocol: TCP
             port: 1883
             targetPort: 1883
           - name: cc
             protocol: TCP
             port: 8080
             targetPort: 8080
         type: LoadBalancer
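
       Save the manifest, for example as hivemq-service.yaml (a hypothetical file name), then apply it and check that the service is assigned the static public IP as its external address:

         kubectl apply -f hivemq-service.yaml -n hivemq
         kubectl get svc hivemq-service -n hivemq

       Once the EXTERNAL-IP column shows your static public IP, MQTT clients can connect on port 1883 and the HiveMQ Control Center is reachable on port 8080 at that address.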