Setting Up Enterprise Kafka Extension with HiveMQ Platform Operator

This guide provides detailed steps for configuring the Enterprise Kafka Extension with the HiveMQ Platform Operator. Ensure you meet the specified prerequisites before proceeding.

Prerequisites:

  1. Helm version v3 or higher

  2. A running Kubernetes cluster, version 1.18.0 or higher

  3. The latest version of kubectl

Instructions

  1. Set up a Kafka cluster (this step is optional if you already have a running Kafka cluster):

    1. Create a namespace for Kafka and switch the context to it:

      kubectl create namespace kafka
      kubectl config set-context --current --namespace=kafka
    2. Add the repository for the Kafka Helm chart to your package manager.

      helm repo add bitnami https://charts.bitnami.com/bitnami
    3. Deploy the Kafka server using the Helm chart.

      1. The following command deploys Kafka with two brokers (replicas).
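        As a sketch, assuming the Bitnami chart defaults (the exact value key depends on the chart version: older versions use replicaCount, while newer KRaft-based versions use controller.replicaCount):

        ```shell
        # Deploy Kafka with two broker replicas into the current (kafka) namespace
        helm install kafka bitnami/kafka --set replicaCount=2
        ```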

    4. Note the output of the command above; it provides values that are used in the next steps:

      1. Consumers can access Kafka via port 9092 on the following DNS name from within your cluster: kafka.kafka.svc.cluster.local

      2. The CLIENT listener for Kafka client connections from within your cluster has been configured with the following security settings: SASL authentication

      3. To connect a client to your Kafka:

        1. username="user1"

        2. To get the password, execute the command below (skip the % at the end):
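          For example (assuming the chart stores client passwords in a Secret named kafka-user-passwords, as recent Bitnami chart versions do; older versions may use a Secret named kafka — check the NOTES output of your helm install for the exact command):

          ```shell
          # Decode the first client password from the chart-generated Secret
          kubectl get secret kafka-user-passwords --namespace kafka \
            -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1
          ```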

  2. Deploy HiveMQ with Kafka extension

    1. Create a namespace for HiveMQ installation and switch the context to it:
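      For example, using a namespace named hivemq (the namespace name is an assumption; any name works):

      ```shell
      kubectl create namespace hivemq
      kubectl config set-context --current --namespace=hivemq
      ```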

    2. Generate hivemq_values.yaml:

      Deploy HiveMQ using the HiveMQ Platform and generate the hivemq_values.yaml file:
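      For example, add the HiveMQ Helm chart repository and export the chart's default values to a local file (the output file name is illustrative):

      ```shell
      helm repo add hivemq https://hivemq.github.io/helm-charts
      helm repo update
      # Write the default chart values to hivemq_values.yaml for editing
      helm show values hivemq/hivemq-platform > hivemq_values.yaml
      ```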

    3. Configure Kafka extension License:

      Follow the steps in Setting Up HiveMQ License for Your HiveMQ Cluster using HiveMQ Platform Operator to configure the HiveMQ and Kafka extension licenses.

    4. Create config.xml for Kafka extension:

      1. Examples of the config.xml file are in the extension folder under conf/examples.

      2. Configure the username and password for successful authentication to your Kafka cluster.

        1. If you followed the guide above to deploy the Kafka cluster, use “user1” as the username and the password you copied in the earlier step.

      3. Configure <mqtt-to-kafka-mappings>, <kafka-to-mqtt-mappings>, <kafka-to-mqtt-transformers>, or <mqtt-to-kafka-transformers>, based on your needs.

      4. Please refer to the example:
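      A minimal sketch of a config.xml with PLAIN authentication and one MQTT-to-Kafka mapping, modeled on the examples shipped under conf/examples (the cluster ID, topic names, and password are placeholders; consult the shipped examples for the exact schema of your extension version):

      ```xml
      <?xml version="1.0" encoding="UTF-8"?>
      <kafka-configuration>
          <kafka-clusters>
              <kafka-cluster>
                  <id>cluster01</id>
                  <!-- In-cluster DNS name from the Kafka deployment above -->
                  <bootstrap-servers>kafka.kafka.svc.cluster.local:9092</bootstrap-servers>
                  <authentication>
                      <plain>
                          <username>user1</username>
                          <password>YOUR_KAFKA_PASSWORD</password>
                      </plain>
                  </authentication>
              </kafka-cluster>
          </kafka-clusters>
          <mqtt-to-kafka-mappings>
              <mqtt-to-kafka-mapping>
                  <id>mapping01</id>
                  <cluster-id>cluster01</cluster-id>
                  <mqtt-topic-filters>
                      <mqtt-topic-filter>sensors/#</mqtt-topic-filter>
                  </mqtt-topic-filters>
                  <kafka-topic>sensor-data</kafka-topic>
              </mqtt-to-kafka-mapping>
          </mqtt-to-kafka-mappings>
      </kafka-configuration>
      ```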

    5. Create ConfigMap for Kafka configuration
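      For example (the ConfigMap name hivemq-kafka-extension-config is an assumption; use any name, but note it for the values.yaml step below):

      ```shell
      # Package the extension configuration as a ConfigMap in the HiveMQ namespace
      kubectl create configmap hivemq-kafka-extension-config --from-file=config.xml
      ```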

    6. Create ConfigMap for Kafka transformers: If you are using any transformers, create a ConfigMap to mount the transformer JAR files into the extension’s customizations folder.
      (Skip this step if you are not using any transformers.)
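      For example (the ConfigMap and JAR file names are placeholders):

      ```shell
      # Package the transformer JAR(s) as a ConfigMap so they can be mounted later
      kubectl create configmap hivemq-kafka-transformers --from-file=my-transformer.jar
      ```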

    7. Deploy HiveMQ Platform Operator:
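      For example, using the operator chart from the HiveMQ Helm repository (the release name is illustrative):

      ```shell
      helm install hivemq-platform-operator hivemq/hivemq-platform-operator
      ```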

    8. Edit hivemq_values.yaml: Modify the hivemq_values.yaml file to enable the Kafka extension.

      1. Configure the name of the ConfigMap created in the previous step and set enabled: true to enable the extension.
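        A sketch of the relevant block in hivemq_values.yaml, assuming the ConfigMap from the earlier step is named hivemq-kafka-extension-config (check your chart version's default values for the exact field names):

        ```yaml
        extensions:
          - name: hivemq-kafka-extension
            enabled: true
            # ConfigMap that holds the extension's config.xml
            configMapName: hivemq-kafka-extension-config
        ```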

      2. If you are using Kafka transformers, add the following configuration block to the HiveMQ Platform values.yaml to mount them. (Skip this step if you are not using any transformers.)
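        A sketch, assuming a transformers ConfigMap named hivemq-kafka-transformers (the field names below follow the chart's additionalVolumes structure; verify them against your chart version's default values):

        ```yaml
        additionalVolumes:
          - type: configMap
            name: hivemq-kafka-transformers
            mountName: kafka-transformers
            containerName: hivemq
            # Transformer JARs belong in the extension's customizations folder
            path: /opt/hivemq/extensions/hivemq-kafka-extension/customizations
        ```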

      3. Deploy HiveMQ:
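        For example, installing the HiveMQ Platform chart with the edited values file (the release name is illustrative):

        ```shell
        helm install hivemq hivemq/hivemq-platform -f hivemq_values.yaml
        ```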

      4. Check Pod Status:

        Verify that all HiveMQ pods are running.
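          For example:

          ```shell
          kubectl get pods
          # All HiveMQ platform and operator pods should show STATUS Running
          ```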

      5. Verify Enterprise Kafka Extension Start:

        Check the hivemq.log to confirm that the extension started successfully.

        After the Kafka extension has been installed successfully, the Kafka dashboard is also visible in the HiveMQ Control Center.
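          For example (replace the pod name with one from kubectl get pods; the exact log wording depends on the extension version):

          ```shell
          kubectl logs <hivemq-pod-name> | grep -i kafka
          # Look for a line indicating the Kafka extension started successfully
          ```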

      6. Perform Quick Tests:

        Use the MQTT CLI to run quick tests, and verify the messages in the Kafka dashboard of the Control Center.
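          For example, publishing a test message with the MQTT CLI (the host and topic are placeholders; 1883 is the default MQTT port):

          ```shell
          mqtt pub --host <hivemq-mqtt-service-host> --port 1883 \
            --topic test/topic --message "hello from mqtt cli"
          ```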

 Related articles