This guide provides detailed steps for configuring the Enterprise Kafka Extension with the HiveMQ Platform Operator. Ensure you meet the specified prerequisites before proceeding.

Prerequisites:

  1. Helm version v3 or higher

  2. A running Kubernetes cluster, version 1.18.0 or higher

  3. The latest version of kubectl

📘 Instructions

  1. Set up a Kafka cluster (this step is optional if you already have a running Kafka cluster).

    1. Create a namespace for Kafka and switch the context to it:

      kubectl create namespace kafka
      kubectl config set-context --current --namespace=kafka
    2. Add the repository for the Kafka Helm chart to your package manager.

      helm repo add bitnami https://charts.bitnami.com/bitnami
      helm repo update
    3. Deploy the Kafka server using the Helm chart.

      1. The command below deploys Kafka with 2 brokers (replicas):

        helm upgrade --install kafka bitnami/kafka --namespace=kafka --set replicaCount=2 
    4. Note the output of the command above; it provides information required for the next steps:

      1. Consumers can access Kafka via port 9092 on the following DNS name from within your cluster: kafka.kafka.svc.cluster.local

      2. The CLIENT listener for Kafka client connections from within your cluster has been configured with the following security settings: SASL authentication

      3. To connect a client to your Kafka:

        1. username="user1"

        2. To get the password, execute the command below (ignore the trailing % in the shell output); an optional snippet for storing it in a shell variable follows:

          kubectl get secret kafka-user-passwords --namespace kafka \
            -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1;
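
          Optionally, capture the password in a shell variable for reuse when configuring the extension (a minimal sketch; the variable name KAFKA_PASSWORD is illustrative):

          KAFKA_PASSWORD=$(kubectl get secret kafka-user-passwords --namespace kafka \
            -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)
          echo "$KAFKA_PASSWORD"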
  2. Deploy HiveMQ with Kafka extension

    1. Create a namespace for HiveMQ installation and switch the context to it:

      kubectl create namespace hivemq
      kubectl config set-context --current --namespace=hivemq
    2. Generate hivemq_values.yaml:

      Generate the default hivemq_values.yaml file from the HiveMQ Platform Helm chart:

      helm show values hivemq/hivemq-platform > hivemq_values.yaml
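
      The command above assumes the hivemq Helm chart repository has already been added to your local Helm configuration. If it has not, add it first (the URL below is HiveMQ's public Helm charts repository):

      helm repo add hivemq https://hivemq.github.io/helm-charts
      helm repo update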
    3. Configure Kafka extension License:

      Follow the steps in Setting Up HiveMQ License for Your HiveMQ Cluster using HiveMQ Platform Operator to configure the HiveMQ and Kafka extension licenses.
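
      For reference only, a common pattern is to store the license files in a Kubernetes Secret and point the HiveMQ Platform chart at it. The Secret and file names below are illustrative assumptions; follow the linked page for the authoritative steps:

      # Illustrative sketch: the Secret name and license file names are assumptions, not chart defaults.
      kubectl create secret generic hivemq-license \
        --from-file=hivemq.lic \
        --from-file=kafka-extension.elic \
        -n hivemq

      The Secret name can then be referenced from the license settings in hivemq_values.yaml (check the generated values file for the exact key).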

    4. Create config.xml for Kafka extension:

      1. Examples of the config.xml file are in the extension folder under conf/examples.

      2. Configure the username and password for successful authentication to your Kafka cluster.

        1. If you followed the Kafka deployment steps above, use the username “user1” and the password you retrieved in step 1.

      3. Configure <mqtt-to-kafka-mappings>, <kafka-to-mqtt-mappings>, <mqtt-to-kafka-transformers>, or <kafka-to-mqtt-transformers>, depending on your use case.

      4. Please refer to the example:

        <?xml version="1.0" encoding="UTF-8" ?>
        <kafka-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                             xsi:noNamespaceSchemaLocation="config.xsd">
            <kafka-clusters>
                <kafka-cluster>
                    <id>cluster01</id>
                    <bootstrap-servers>kafka.kafka.svc.cluster.local:9092</bootstrap-servers>
                    <authentication>
                        <scram-sha256>
                            <username>user1</username>
                            <password>here_is_your_password</password>
                        </scram-sha256>
                    </authentication>
                </kafka-cluster>
            </kafka-clusters>
        
            <mqtt-to-kafka-mappings>
                <mqtt-to-kafka-mapping>
                    <id>mapping01</id>
                    <cluster-id>cluster01</cluster-id>
                    <mqtt-topic-filters>
                        <mqtt-topic-filter>#</mqtt-topic-filter>
                    </mqtt-topic-filters>
                    <kafka-topic>test</kafka-topic>
                </mqtt-to-kafka-mapping>
            </mqtt-to-kafka-mappings>
            
            <mqtt-to-kafka-transformers>
                <mqtt-to-kafka-transformer>
                    <id>hello-world-transformer</id>
                    <cluster-id>cluster01</cluster-id>
                    <mqtt-topic-filters>
                        <mqtt-topic-filter>transform/#</mqtt-topic-filter>
                    </mqtt-topic-filters>
                    <transformer>com.hivemq.extensions.kafka.customizations.helloworld.MqttToKafkaHelloWorldTransformer</transformer>
                </mqtt-to-kafka-transformer>
            </mqtt-to-kafka-transformers>
        
            <kafka-to-mqtt-mappings>
                <kafka-to-mqtt-mapping>
                    <id>mapping02</id>
                    <cluster-id>cluster01</cluster-id>
                    <kafka-topics>
                        <kafka-topic>test</kafka-topic>
                        <kafka-topic-pattern>test-(.)*</kafka-topic-pattern>
                    </kafka-topics>
                </kafka-to-mqtt-mapping>
            </kafka-to-mqtt-mappings>
        </kafka-configuration>
    5. Create a ConfigMap for the Kafka extension configuration:

      kubectl create configmap kafka-config --from-file config.xml -n <namespace>
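
      Optionally, confirm that the ConfigMap contains the expected config.xml before continuing:

      kubectl get configmap kafka-config -n <namespace> -o yaml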
    6. Create ConfigMap for Kafka transformers: if you are using any transformers, create a ConfigMap to mount the transformer JAR files into the extension’s customizations folder.
      (Skip this step if you are not using any transformers.)

      kubectl create configmap kafka-transformers --from-file hivemq-kafka-hello-world-customization-4.25.0 -n <namespace>
    7. Deploy HiveMQ Platform Operator:

      helm install platform-op hivemq/hivemq-platform-operator -n <namespace>
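
      Optionally, verify that the operator pod is running and that the HiveMQ Platform CRD has been installed before deploying the platform (a quick sanity check, assuming default chart behavior):

      kubectl get pods -n <namespace>
      kubectl get crds | grep -i hivemq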
    8. Edit hivemq_values.yaml: Modify the hivemq_values.yaml file to enable Kafka extension

      1. Reference the ConfigMap name created in the previous step and set enabled: true to enable the extension.

        ...
        extensions:
          - name: hivemq-kafka-extension
            enabled: true
            # The Kafka extension supports hot-reload of the configuration.
            supportsHotReload: true
            # The ConfigMap name that contains the Kafka extension configuration.
            configMapName: "kafka-config"
            # The Secret name that contains request headers for the customization download.
            requestHeaderSecretName: ""
        #    # The URI to download a customization for the Kafka extension.
        #    customizationUri: ""
        
      2. If you are using Kafka transformers, add the following configuration block to the HiveMQ Platform hivemq_values.yaml to mount them. (Skip this step if you are not using any transformers.)

        ...
        additionalVolumes: 
          - type: configMap
            name: kafka-transformers
            path: /opt/hivemq/extensions/hivemq-kafka-extension/customizations
        
      3. Deploy HiveMQ:

        helm upgrade --install -f hivemq_values.yaml hivemq hivemq/hivemq-platform -n <namespace>
      4. Check Pod Status:

        Verify that all HiveMQ pods are running.

        kubectl get pods -n <namespace>
      5. Verify Enterprise Kafka Extension Start:

        Check the hivemq.log to confirm successful extension startup.

        kubectl logs <pod name> -n <namespace> | grep -i "kafka"

        After the successful installation of the Kafka extension, the Kafka dashboard is also visible in the HiveMQ Control Center.

      6. Perform Quick Tests:

        Use the MQTT CLI to run quick tests and verify the results in the Kafka dashboard of the Control Center; see the example below.
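
        For example, you can port-forward the broker's MQTT port and publish a message that matches the configured topic filter; the service name below is a placeholder, so look it up first with kubectl get svc -n <namespace>:

        # Forward the MQTT port of the HiveMQ service to localhost (service name is a placeholder).
        kubectl port-forward svc/<hivemq-mqtt-service> 1883:1883 -n <namespace>

        # In a second terminal, publish a test message with the MQTT CLI.
        mqtt pub -h localhost -p 1883 -t test/hello -m "hello kafka"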

