This article shares the content of https://hivemq.atlassian.net/wiki/spaces/KB/pages/2707128321/HiveMQ+Support+-+Customer+Onboarding+Readiness+Process in tabular format.

To create a ticket with the Support team as a Self-Managed customer, your team's access needs to be created manually in the ticketing system. Please reach out to your Customer Success Manager to arrange this. If one of your team members already has access to the ticketing system, access for new users can be requested via the "Create New User" form.

Info

If you encounter challenges with any of the steps mentioned below, please create a "Support Request" ticket and our support team will be happy to assist you further.

S No

Topic

Why is this important?

1

Upload a file to the customer support portal

Without the Upload button, it is impossible to upload files larger than 2MB. With the Upload button, files up to 20GB can be uploaded.

2

Understand

90% of installation issues happen because system requirements are not met, e.g., only 50% of the required RAM is available, or the minimum CPU/RAM criteria are not satisfied.
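As a quick pre-installation sanity check, a node's CPU core count and memory can be compared against the documented minimums. The thresholds below (4 CPUs, 8GB RAM) are placeholders, not official HiveMQ requirements; consult the system requirements documentation for the values that apply to your deployment.

```shell
#!/bin/sh
# Sketch: compare this node's resources against assumed minimums.
# MIN_CPUS and MIN_RAM_MB are placeholders - check the official
# HiveMQ system requirements for the real values.
MIN_CPUS=4
MIN_RAM_MB=8192

check_resources() {
  cpus=$1
  ram_mb=$2
  if [ "$cpus" -lt "$MIN_CPUS" ]; then
    echo "FAIL: only $cpus CPU cores (need >= $MIN_CPUS)"
    return 1
  fi
  if [ "$ram_mb" -lt "$MIN_RAM_MB" ]; then
    echo "FAIL: only ${ram_mb}MB RAM (need >= ${MIN_RAM_MB}MB)"
    return 1
  fi
  echo "OK: $cpus CPUs, ${ram_mb}MB RAM"
}

# On Linux, the actual values can be read like this:
# check_resources "$(nproc)" "$(free -m | awk '/^Mem:/{print $2}')"
check_resources 8 16384   # prints "OK: 8 CPUs, 16384MB RAM"
```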

3

Export HiveMQ Metrics from either Prometheus or a Grafana Datasource

HiveMQ’s metrics are an invaluable tool when analyzing the inner workings of a deployment. These can be both essential in determining the root causes of past events as well as when predicting future requirements for use case expansion.

Such analysis will oftentimes be most efficient when the full set of metrics - which run in the thousands - is available. To make this possible without the need for direct access to our customers' internal tooling, we have created a tool for the Prometheus/Grafana stack. With its help, metrics can be exported, uploaded via our ticket system, and ultimately imported into our own monitoring environment.
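The dedicated export tool mentioned above is the recommended route. As a generic illustration of the underlying idea, a single metric's raw samples can also be pulled from Prometheus via its standard query_range HTTP API; the host/port and metric name below are examples and should be adjusted to your setup.

```shell
#!/bin/sh
# Sketch: export one metric's samples over a time window using the
# standard Prometheus query_range API. Host, port, metric name, and
# time range are examples - substitute your own values.

prom_range_url() {
  # usage: prom_range_url <base-url> <query> <start> <end> <step>
  echo "$1/api/v1/query_range?query=$2&start=$3&end=$4&step=$5"
}

URL=$(prom_range_url "http://localhost:9090" \
  "com_hivemq_messages_incoming_publish_count" \
  "2024-01-01T00:00:00Z" "2024-01-01T06:00:00Z" "60s")
echo "$URL"

# To export, save the JSON response and attach it to your ticket:
# curl -s "$URL" > hivemq-metrics-export.json
```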

4

HiveMQ Logs

$HIVEMQ_HOME/log/hivemq.log, event.log, migration.log

HiveMQ makes use of its logging to inform of various events. These range from INFO to WARN and ERROR level messages and can often give a direct indication of why a cluster is behaving as it is. Among other events, logs will clearly show when Overload Protection [OP documentation link] is activated or new cluster members join. They are also crucial when performing rolling upgrades [RU documentation link], as they provide clear information on when it is safe to remove the next node. Both hivemq.log and event.log will usually be some of the first things HiveMQ Support will request when asked to analyze a cluster's behavior.

Log files are located on every node of the HiveMQ cluster, so it is important to collect and analyze the logs from all nodes.
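Collecting the logs from every node can be sketched as a small script. The node hostnames and the HiveMQ home directory below are placeholders, and the script assumes SSH/scp access to each node; adapt it to however your environment distributes files.

```shell
#!/bin/sh
# Sketch: gather hivemq.log, event.log, and migration.log from every
# cluster node into a single archive. NODES and HIVEMQ_HOME are
# placeholders - adjust to your environment.
HIVEMQ_HOME=${HIVEMQ_HOME-/opt/hivemq}
NODES=${NODES-"node1 node2 node3"}   # placeholder hostnames

collect_logs() {
  outdir=$1
  mkdir -p "$outdir"
  for node in $NODES; do
    mkdir -p "$outdir/$node"
    for f in hivemq.log event.log migration.log; do
      scp "$node:$HIVEMQ_HOME/log/$f" "$outdir/$node/" \
        || echo "WARN: could not fetch $f from $node"
    done
  done
  tar -czf "$outdir.tar.gz" "$outdir"
}

# collect_logs "hivemq-logs-$(date +%Y%m%d)"
```

The resulting archive can then be attached to the support ticket in one upload instead of per-node files.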

5

Access Log

In case of an issue with the Enterprise Security Extension, employees can collect $HIVEMQ_HOME/log/access/access.log:

  1. You can retroactively audit all accesses that the Enterprise Security Extension grants.

  2. It provides crucial information for investigating client permission issues.
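When investigating a permission issue for one client, it often helps to filter the access log down to that client's entries first. The "clientid=" field used below is an illustrative line format, not the actual ESE access-log layout; adjust the pattern to match your log.

```shell
#!/bin/sh
# Sketch: pull all access.log entries for a single client ID.
# The "clientid=" pattern is an assumed example format - adapt it
# to the actual layout of your access.log.

client_entries() {
  # usage: client_entries <client-id> <logfile>
  grep -F "clientid=$1" "$2"
}

# client_entries "sensor-42" "$HIVEMQ_HOME/log/access/access.log"
```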

6

Creating Thread Dump Series

When analyzing resource congestion on a broker that cannot be explained with the help of the messages already detailed, it can sometimes be necessary to obtain a more detailed view of the broker's active threads. Creating a thread dump series is helpful in such cases, as all active threads spawned by the application are captured. Because a single thread dump only captures one point in time, multiple thread dumps (a series) are usually taken to make heavy consumers visible through analysis. The following KB article gives instructions on how to capture these files on various platforms:
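On a Linux host with a JDK installed, such a series can be sketched as a loop around jstack. Six dumps at 10-second intervals is a typical pattern, not an official HiveMQ recommendation, and the PID lookup is an example.

```shell
#!/bin/sh
# Sketch: capture a series of thread dumps with jstack (ships with
# the JDK). Count and interval below are typical values only.

dump_series() {
  # usage: dump_series <dump-command> <pid> <count> <interval-seconds> <outdir>
  # The dump command is a parameter so the loop can be tested with a stub.
  cmd=$1; pid=$2; count=$3; interval=$4; outdir=$5
  mkdir -p "$outdir"
  i=1
  while [ "$i" -le "$count" ]; do
    "$cmd" "$pid" > "$outdir/thread-dump-$i.txt"
    [ "$i" -lt "$count" ] && sleep "$interval"
    i=$((i + 1))
  done
}

# Against a running broker JVM (PID lookup is an example):
# dump_series jstack "$(pgrep -f hivemq)" 6 10 thread-dumps
```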

7

Forcing a Heap Dump

By default HiveMQ is configured to dump the heap's content to disk should it ever fill up and crash the application. Since the content of such a heap dump will give an accurate picture of the heap's content at the time of creation, its analysis gives us the most precise tool in the search for root causes in such cases.

Please note that the creation of a heap dump is a 'stop the world' event from the point of view of the Java application since the runtime is blocked as it writes to disk. It is vital never to trigger the creation of a heap dump in a production environment unless specifically told to do so by a HiveMQ support engineer!

Should you wish to verify heap dumps are created correctly during an actual incident, you can force one by following our KB article [force heap dump link]. You may upload the resulting .hprof file to our Support Portal [upload files link] and we will be happy to verify its validity.

Upload of files up to a size of 20GB is supported. Compressing a .hprof using zip or similar techniques will drastically reduce its size and upload time.
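The compression step above can be as simple as running gzip over the dump before uploading; heap dumps contain highly repetitive data and usually shrink dramatically. The file name below is an example.

```shell
#!/bin/sh
# Sketch: compress a heap dump before uploading it to the Support
# Portal. Heap dumps typically compress very well.

compress_dump() {
  gzip -k "$1"   # -k keeps the original .hprof next to the .gz
}

# compress_dump java_pid12345.hprof && ls -lh java_pid12345.hprof*
```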

8

HiveMQ Control Center access

  • current information, such as client details, license details, online and offline clients, client and shared-subscription message queues, message throughput, and dropped messages

  • administrative tasks, such as creating a trace recording for a client or generating a Diagnostic Archive for the cluster

9

Create a Diagnostic Archive with the HiveMQ Control Center.

If using HiveMQ version 4.2x (e.g., 4.21) or greater, ensure employees have permission to generate a Diagnostic Archive from the HiveMQ Control Center.

10

Add New Trace Recording

If troubleshooting MQTT clients, ensure the HiveMQ broker license includes the Trace Recording option and that your employees can access and use it.

11

Enable or Disable a HiveMQ Extension

For questions regarding any HiveMQ Extensions (e.g., HiveMQ Extension For Kafka), employees can collect the extension's configuration files (e.g., $HIVEMQ_HOME/extensions/hivemq-kafka-extension/conf/config.xml, $HIVEMQ_HOME/extensions/hivemq-enterprise-security-extension/conf/config.xml, etc.) and check whether the extension is DISABLED.
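HiveMQ treats a file named DISABLED in an extension's folder as the disable marker, so the check and the toggle can be sketched as below. The extension directory in the comments is an example path; verify the exact mechanism against the HiveMQ Extensions documentation for your version.

```shell
#!/bin/sh
# Sketch: inspect and toggle an extension's DISABLED marker file.
# The example path in the comments is an assumption - adjust it.

extension_state() {
  if [ -f "$1/DISABLED" ]; then echo "DISABLED"; else echo "ENABLED"; fi
}

disable_extension() { touch "$1/DISABLED"; }
enable_extension()  { rm -f "$1/DISABLED"; }

# EXT_DIR="$HIVEMQ_HOME/extensions/hivemq-kafka-extension"
# extension_state "$EXT_DIR"
```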

12

MQTT Essentials lectures, MQTT 5 Essentials lectures, MQTT 3 Specification, MQTT 5 Specification 

Have passed the MQTT Essentials and MQTT 5 Essentials lectures, be able to search through the official MQTT specification, and explain basic MQTT functionality:

...