
  1. Set up a Kafka cluster (this step is optional if you already have a running Kafka cluster):

    1. Create a namespace for Kafka and switch the context to it:

      Code Block
      kubectl create namespace kafka
      Code Block
      kubectl config set-context --current --namespace=kafka
    2. Add the Bitnami repository for the Kafka Helm chart and update your local chart index:

      Code Block
      helm repo add bitnami https://charts.bitnami.com/bitnami
      Code Block
      helm repo update
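
      Optionally, you can confirm that the Kafka chart is now available in your local repository index (a quick sanity check; the chart version listed may differ in your environment):

      Code Block
      helm search repo bitnami/kafka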
    3. Deploy the Kafka server using the Helm chart.

      1. The command below deploys Kafka with 2 brokers (replicas).

        Code Block
        helm upgrade --install kafka bitnami/kafka --namespace=kafka --set replicaCount=2 
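
        Before continuing, you can optionally confirm that the Kafka broker pods reach the Running state (pod names may differ depending on your release name):

        Code Block
        kubectl get pods -n kafka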
    4. Note the output of the helm upgrade --install command above; it provides information that is needed in the next steps:

      1. Consumers can access Kafka via port 9092 on the following DNS name from within your cluster: kafka.kafka.svc.cluster.local

      2. The CLIENT listener for Kafka client connections from within your cluster has been configured with the following security settings: SASL authentication

      3. To connect a client to your Kafka:

        1. username="user1"

        2. To get the password, execute the command below (ignore the trailing % character that some shells append to the output):

          Code Block
          kubectl get secret kafka-user-passwords --namespace kafka \
            -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1;
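
          Alternatively, you can capture the password in a shell variable for later use when filling in config.xml (a minimal sketch; the variable name KAFKA_PASSWORD is just an example):

          Code Block
          # Store the client password for "user1" in a variable (example variable name)
          KAFKA_PASSWORD=$(kubectl get secret kafka-user-passwords --namespace kafka \
            -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)
          echo $KAFKA_PASSWORD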
  2. Deploy HiveMQ with the Kafka extension:

    1. Create a namespace for HiveMQ installation and switch the context to it:

      Code Block
      kubectl create namespace hivemq
      Code Block
      kubectl config set-context --current --namespace=hivemq
    2. Generate hivemq_values.yaml:

      Generate the hivemq_values.yaml file from the HiveMQ Platform Helm chart:

      Code Block
      helm show values hivemq/hivemq-platform > hivemq_values.yaml
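
      If this command fails because the hivemq chart repository is not yet known to your local Helm installation, add it first (the repository URL below is an assumption based on the publicly documented HiveMQ Helm charts location; verify it against the HiveMQ documentation):

      Code Block
      # Repository URL assumed from the public HiveMQ Helm charts; verify before use
      helm repo add hivemq https://hivemq.github.io/helm-charts
      Code Block
      helm repo update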
    3. Configure the Kafka extension license:

      Follow the steps outlined in Setting Up HiveMQ License for Your HiveMQ Cluster using HiveMQ Platform Operator to configure the HiveMQ and Kafka extension licenses.

    4. Create config.xml for the Kafka extension:

      1. Examples of the config.xml file are in the extension folder under conf/examples.

      2. Configure the username and password for successful authentication to your Kafka cluster.

        1. If you followed the Kafka cluster setup in step 1, use “user1” as the username and the password you copied in that step.

      3. Configure <mqtt-to-kafka-mappings>, <kafka-to-mqtt-mappings>, <mqtt-to-kafka-transformers>, or <kafka-to-mqtt-transformers> based on your needs.

      4. Please refer to the example:

        Code Block
        <?xml version="1.0" encoding="UTF-8" ?>
        <kafka-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                             xsi:noNamespaceSchemaLocation="config.xsd">
            <kafka-clusters>
                <kafka-cluster>
                    <id>cluster01</id>
                    <bootstrap-servers>kafka.kafka.svc.cluster.local:9092</bootstrap-servers>
                    <authentication>
                        <scram-sha256>
                            <username>user1</username>
                            <password>here_is_your_password</password>
                        </scram-sha256>
                    </authentication>
                </kafka-cluster>
            </kafka-clusters>
        
            <mqtt-to-kafka-mappings>
                <mqtt-to-kafka-mapping>
                    <id>mapping01</id>
                    <cluster-id>cluster01</cluster-id>
                    <mqtt-topic-filters>
                        <mqtt-topic-filter>#</mqtt-topic-filter>
                    </mqtt-topic-filters>
                    <kafka-topic>test</kafka-topic>
                </mqtt-to-kafka-mapping>
            </mqtt-to-kafka-mappings>
            
            <mqtt-to-kafka-transformers>
                <mqtt-to-kafka-transformer>
                    <id>hello-world-transformer</id>
                    <cluster-id>cluster01</cluster-id>
                    <mqtt-topic-filters>
                        <mqtt-topic-filter>transform/#</mqtt-topic-filter>
                    </mqtt-topic-filters>
                    <transformer>com.hivemq.extensions.kafka.customizations.helloworld.MqttToKafkaHelloWorldTransformer</transformer>
                </mqtt-to-kafka-transformer>
            </mqtt-to-kafka-transformers>
        
            <kafka-to-mqtt-mappings>
                <kafka-to-mqtt-mapping>
                    <id>mapping02</id>
                    <cluster-id>cluster01</cluster-id>
                    <kafka-topics>
                        <kafka-topic>test</kafka-topic>
                        <kafka-topic-pattern>test-(.)*</kafka-topic-pattern>
                    </kafka-topics>
                </kafka-to-mqtt-mapping>
            </kafka-to-mqtt-mappings>
        </kafka-configuration>
    5. Create a ConfigMap for the Kafka extension configuration:

      Code Block
      kubectl create configmap kafka-config --from-file config.xml -n <namespace>
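
      Optionally, verify that the ConfigMap contains the expected config.xml content:

      Code Block
      kubectl describe configmap kafka-config -n <namespace>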
    6. Create a ConfigMap for Kafka transformers: if you are using any transformers, create a ConfigMap to mount the transformer JAR files into the extension’s customizations folder.
      (Skip this step if you are not using any transformers.)

      Code Block
      kubectl create configmap kafka-transformers --from-file hivemq-kafka-hello-world-customization-4.25.0 -n <namespace>
    7. Deploy HiveMQ Platform Operator:

      Code Block
      helm install platform-op hivemq/hivemq-platform-operator -n <namespace>
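
      Wait until the operator pod is running before deploying the platform (a minimal check; the label selector below assumes the chart's default app.kubernetes.io/name label and may differ in your setup):

      Code Block
      kubectl get pods -n <namespace>
      Code Block
      # Label selector is an assumption based on common chart conventions; adjust if needed
      kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=hivemq-platform-operator -n <namespace> --timeout=300s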
    8. Edit hivemq_values.yaml: modify the file to enable the Kafka extension.

      1. Set configMapName to the ConfigMap that contains the Kafka extension configuration (kafka-config, created earlier) and set enabled: true to enable the extension.

        Code Block
        ...
        extensions:
          - name: hivemq-kafka-extension
            enabled: true
            # The Kafka extension supports hot-reload of the configuration.
            supportsHotReload: true
            # The ConfigMap name that contains the Kafka extension configuration.
            configMapName: "kafka-config"
            # The Secret name that contains request headers for the customization download.
            requestHeaderSecretName: ""
        #    # The URI to download a customization for the Kafka extension.
        #    customizationUri: ""
        
      2. If you are using Kafka transformers, add the following configuration block to hivemq_values.yaml to mount them. (Skip this step if you are not using any transformers.)

        Code Block
        ...
        additionalVolumes: 
          - type: configMap
            name: kafka-transformers
            path: /opt/hivemq/extensions/hivemq-kafka-extension/customizations
        
      3. Deploy HiveMQ:

        Code Block
        helm upgrade --install -f hivemq_values.yaml hivemq hivemq/hivemq-platform -n <namespace>
      4. Check Pod Status:

        Verify that all HiveMQ pods are running.

        Code Block
        kubectl get pods -n <namespace>
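
        If you mounted transformer JARs earlier, you can optionally confirm that they are present in the extension's customizations folder inside a running pod:

        Code Block
        kubectl exec <pod name> -n <namespace> -- ls /opt/hivemq/extensions/hivemq-kafka-extension/customizations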
      5. Verify Enterprise Kafka Extension Start:

        Check the hivemq.log to confirm successful extension startup.

        Code Block
        kubectl logs <pod name> -n <namespace> | grep -i "kafka"

        After the Kafka extension has been installed successfully, the Kafka dashboard is also visible in the HiveMQ Control Center.

      6. Perform Quick Tests:

        Use the MQTT CLI to run quick tests and verify the results in the Kafka dashboard of the Control Center, as shown in the example below.
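
        For example, you can port-forward the MQTT port of the HiveMQ service and publish a test message with the MQTT CLI (a minimal sketch; the service name hivemq-hivemq-platform is an assumption and may differ in your setup, and the topic test matches the example mapping above):

        Code Block
        # Service name is an assumption; check "kubectl get svc -n <namespace>" for the actual name
        kubectl port-forward svc/hivemq-hivemq-platform 1883:1883 -n <namespace>
        Code Block
        mqtt pub -h localhost -p 1883 -t test -m "hello from mqtt"

        With the example mapping01 from config.xml, messages published this way should appear on the Kafka topic test.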
