Deploy HiveMQ using the HiveMQ Operator in an AKS cluster, and set up a load balancer with a static public IP.
This article is a step-by-step guide to setting up a load balancer with a static public IP when HiveMQ is deployed with the HiveMQ Operator in an AKS cluster.
Instructions
The following steps guide you through obtaining a static public IP and configuring HiveMQ to use it.
Deploy AKS cluster
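This guide assumes a running AKS cluster. If you need to create one, the following Azure CLI sketch provisions a minimal cluster; the resource group name hivemq-rg, cluster name hivemq-aks, and region eastus are example placeholders, so substitute your own values.

# Create a resource group (name and location are placeholders)
az group create --name hivemq-rg --location eastus

# Create a minimal two-node AKS cluster
az aks create --resource-group hivemq-rg --name hivemq-aks --node-count 2 --generate-ssh-keys

# Point kubectl at the new cluster
az aks get-credentials --resource-group hivemq-rg --name hivemq-aks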
Create Application Gateway
Log in to the Azure portal at https://portal.azure.com.
Click on the "Create a resource" link.
Search for "Application Gateway" in the search box and select "Application Gateway."
Choose the appropriate resource group for your deployment, e.g. "Application Gateway for containers".
Provide a name and region for the application gateway.
Click the "Next" button.
Create a frontend with the name "CC."
Create an association with the name "test."
Click the "Review and create" button.
Review the configurations and click the "Create" button.
Wait for the deployment to complete. This will create the application gateway along with a static public IP.
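The portal flow above produces the static public IP as part of the gateway deployment. If you prefer the CLI, or you only need the static IP itself, an equivalent standalone address can be created directly; the resource group and IP names below are the placeholders from the AKS step.

# Create a standalone static public IP (names are placeholders)
az network public-ip create \
  --resource-group hivemq-rg \
  --name hivemq-public-ip \
  --sku Standard \
  --allocation-method Static

A Standard SKU is used because the default AKS load balancer is Standard SKU and cannot use a Basic SKU frontend IP.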
Associate Public IP with Resources
Go to the resource group where the application gateway is deployed.
Look for the resource of type "Public IP address" in the list of resources and click it to continue configuration.
Click the "Associate IP" button.
Select the appropriate resource to associate it with and click "Save."
Copy the static public IP address for use in the next steps.
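The address can also be read with the Azure CLI, assuming the placeholder resource names used above:

# Print only the static IP address
az network public-ip show \
  --resource-group hivemq-rg \
  --name hivemq-public-ip \
  --query ipAddress --output tsv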
Note: At this point, you have successfully deployed an AKS cluster and created an application gateway with a static public IP.
Deploy HiveMQ using the following configuration file:
values.yaml
global:
  rbac:
    create: true
    # Create a PodSecurityPolicy, cluster role, role binding and service account for the HiveMQ pods and assign the service account to them.
    pspEnabled: false
monitoring:
  enabled: true
  dedicated: true
operator:
  admissionWebhooks:
    # Enable the admission hook
    enabled: false
kube-prometheus-stack:
  grafana:
    service:
      type: LoadBalancer
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"
        service.beta.kubernetes.io/azure-load-balancer-ipv4: "<this should be your public IP>"
hivemq:
  nodeCount: "2"
  cpu: "2"
  memory: "2Gi"
  env:
    - name: HIVEMQ_CONTROL_CENTER_USER
      value: "admin"
    - name: HIVEMQ_CONTROL_CENTER_PASSWORD
      value: "8f5c77e8dc7879871efaf8578049d8fcacbba549a6c66d6c5525748e4305446d"
  ports:
    - name: "mqtt"
      port: 1883
      expose: false
      patch:
        - '[{"op":"add","path":"/spec/selector/hivemq.com~1node-offline","value":"false"},{"op":"add","path":"/metadata/annotations","value":{"service.spec.externalTrafficPolicy":"Local"}}]'
        # If you want Kubernetes to expose the MQTT port to external traffic
        # - '[{"op":"add","path":"/spec/type","value":"LoadBalancer"}]'
    - name: "cc"
      port: 8080
      expose: false
      patch:
        - '[{"op":"add","path":"/spec/sessionAffinity","value":"ClientIP"}]'
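The HIVEMQ_CONTROL_CENTER_PASSWORD value is not plain text but a SHA-256 hash. HiveMQ's Control Center expects the hash of the username concatenated with the password; verify the exact scheme against the HiveMQ documentation for your version. A sketch for generating your own value (admin and your-password are placeholders):

# SHA-256 of username concatenated with password
echo -n "adminyour-password" | sha256sum | awk '{print $1}'

Use the resulting hex string as the value of HIVEMQ_CONTROL_CENTER_PASSWORD.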
helm upgrade --install -f values.yaml hivemq-2406 hivemq/hivemq-operator -n hivemq
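The command above assumes the hivemq chart repository is already configured and the hivemq namespace exists. If not, a sketch to set both up (the repository URL is the one published by the HiveMQ helm-charts project):

# Add the HiveMQ charts repository and refresh the index
helm repo add hivemq https://hivemq.github.io/helm-charts
helm repo update

# Create the target namespace (or pass --create-namespace to helm upgrade)
kubectl create namespace hivemq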
Verify that the pods are running as expected, for example:
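# Exact pod names depend on the release name; expect the operator pod
# plus two hivemq-2406 broker pods (nodeCount: "2") in Running state
kubectl get pods -n hivemq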
Now deploy the HiveMQ service with type LoadBalancer, using the manifest below.
kind: Service
apiVersion: v1
metadata:
  name: hivemq-service
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    service.beta.kubernetes.io/azure-load-balancer-ipv4: "<this is your public ip>"
  labels:
    app: hivemq
    hivemq-cluster: hivemq-2406
spec:
  sessionAffinity: "ClientIP"
  selector:
    hivemq-cluster: hivemq-2406
  ports:
    - name: mqtt
      protocol: TCP
      port: 1883
      targetPort: 1883
    - name: cc
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer
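Save the manifest (hivemq-service.yaml is a placeholder filename), apply it to the hivemq namespace, and watch for the external IP:

kubectl apply -f hivemq-service.yaml -n hivemq

# EXTERNAL-IP should show your static public IP once Azure finishes provisioning
kubectl get svc hivemq-service -n hivemq --watch

Once EXTERNAL-IP is populated, the Control Center is reachable at http://<your-public-ip>:8080 and MQTT clients can connect on port 1883. Note that if the static IP lives in a resource group other than the cluster's node resource group, the cluster identity needs network permissions (for example, Network Contributor) on that resource group, and the service additionally needs the service.beta.kubernetes.io/azure-load-balancer-resource-group annotation naming it.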