
Assumptions And Prerequisites

This guide assumes that:

...

You have a Microsoft Azure account with an active subscription. If you don’t, create a new account for free.

...

You have installed az, the Microsoft Azure command-line interface (CLI). In case you haven’t, install it using these instructions.

...

kubectl v1.29.x

...

Helm v3.x

...

AKS cluster with Kubernetes API version >= 1.25

...

PV provisioner support in the underlying infrastructure

...

HiveMQ v.4.2x.x broker cluster installed in the namespace hivemq using the hivemq/hivemq-operator Helm chart

...

Prerequisite: A Running HiveMQ Cluster

Install HiveMQ on the AKS cluster

If you are not logged in, use the following commands to log in to your Azure cluster, replacing the resource group and cluster name as needed. (Our reference: https://hivemq.atlassian.net/wiki/spaces/HMS/pages/2691203114/Setting+up+AKS+Cluster+in+Azure#Set-Up-Your-Kubernetes-Cluster-With-AKS)

Code Block
languagebash
az login
az aks get-credentials -g hmqResourceGroup -n HiveMQCluster

Install Kafka using helm

  1. Create a namespace for Kafka and switch the context to it:

    Code Block
    languagebash
    kubectl create namespace kafka; 
    kubectl config set-context --current --namespace=kafka 
  2. Add the repository for the Kafka Helm chart to your package manager.

    Code Block
    languagebash
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
  3. Deploy the Kafka server using the Helm chart. The command below deploys Kafka with two brokers (replicas).

    Code Block
    languagebash
    helm upgrade --install kafka bitnami/kafka --namespace=kafka --set replicaCount=2 
    1. If the deployment succeeds, the Helm output notes that:

      1. Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

        kafka.kafka.svc.cluster.local

      2. The CLIENT listener for Kafka client connections from within your cluster has been configured with the following security settings:

        • SASL authentication

      3. To connect a client to your Kafka:

        1. username="user1"

        2. To get the password execute the command below:

          Code Block
          languagebash
          kubectl get secret kafka-user-passwords --namespace kafka \
            -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1;
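The secret retrieved above stores the client passwords as a single base64-encoded, comma-separated list (one entry per SASL user), which is why the command pipes through base64 -d and cut. The pipeline can be sketched locally with placeholder values (the passwords below are made up for illustration):

```shell
# Simulate the value stored in the kafka-user-passwords secret:
# a base64-encoded, comma-separated list of client passwords.
encoded=$(printf 'password-for-user1,password-for-user2' | base64)

# Decode it and keep only the first entry, i.e. the password for
# user1, exactly as the kubectl command above does.
printf '%s' "$encoded" | base64 -d | cut -d , -f 1
# prints: password-for-user1
```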

...

  1. The HiveMQ Enterprise Extension For Kafka requires a separate license file, e.g. kafka-license.elic, in the $HIVEMQ_HOME/license directory. You can skip this step; in that case, the Kafka extension starts in trial mode, limited to 5 hours, after which the HiveMQ broker automatically disables it. To add the kafka-license.elic alongside the hivemq-license.lic, create a new configmap hivemq-license that includes all desired license files:

    Code Block
    languagebash
    kubectl create configmap hivemq-license --namespace=hivemq \
      --from-file hivemq-license.lic \
      --from-file kafka-license.elic
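    The same ConfigMap can also be declared as a manifest. This is only a sketch, not taken from the guide: kubectl create configmap --from-file stores each file under its filename, binary license files end up under binaryData, and the base64 values below are placeholders for your actual license file contents.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hivemq-license
  namespace: hivemq
binaryData:
  # Placeholder values; the real entries are the base64-encoded
  # contents of your license files.
  hivemq-license.lic: PGJhc2U2NC1saWNlbnNlLWRhdGE+
  kafka-license.elic: PGJhc2U2NC1saWNlbnNlLWRhdGE+
```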
  2. Edit the values.yaml file of the hivemq-operator, section hivemq.configMaps. Update this:

    Code Block
      configMaps: []
      # ConfigMaps to mount to the HiveMQ pods. These can be mounted to existing directories without shadowing the folder contents as well.
      #- name: hivemq-license
      #  path: /opt/hivemq/license

    To this:

    Code Block
      configMaps: 
        - name: hivemq-license
          path: /opt/hivemq/license

    This mounts the contents of the configMap hivemq-license to the directory /opt/hivemq/license in the hivemq-broker pods.

  3. The HiveMQ Enterprise Extension For Kafka is preinstalled with HiveMQ, so once you enable it, it looks for its configuration file. You must prepare this file before enabling the extension; otherwise, the extension will not find its configuration file and will not load any configuration.

  4. Prepare a simple configuration file for the Kafka extension as in the example below.

    • This example configuration maps all incoming MQTT PUBLISH packets to the Kafka topic “test”, and maps the Kafka topic “test” to the MQTT topic “test-test” in the HiveMQ broker.

    • Replace here_is_your_password in <password>here_is_your_password</password> with the password you retrieved a few steps ago with this command:

      Code Block
      languagebash
      kubectl get secret kafka-user-passwords --namespace kafka \
        -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1;
    • Here is the file:

...