To create a ticket for the Support team as a Self-Managed customer, your team's access must first be created manually in the ticketing system. Please reach out to your Customer Success Manager to arrange this. If one of your team members already has access to the ticketing system, access for additional users can be requested via the "Create New User" form.
If you encounter challenges with any of the steps mentioned below, please create a "Support Request" ticket and our support team will be happy to assist you further.
Effective crisis management starts with proactive preparation. Learn how your team can prepare to quickly provide essential data to HiveMQ Support during an incident, ensuring faster problem identification and resolution.
Understand how to upload files to the HiveMQ Customer Support Portal using the “Upload” button: Upload a file to the customer support portal.
Without the Upload button, files larger than 2 MB cannot be attached to a ticket; with the Upload button, files of up to 20 GB are supported.
Be aware of the System Requirements, the heap requirement (approximately 50% of the available RAM), and the Sanity Checks.
90% of installation issues happen because the system requirements are not met.
Have monitoring set up for the HiveMQ environment and permissions to export bulk data from Grafana or Prometheus: Export HiveMQ Metrics from either Prometheus or a Grafana Datasource.
HiveMQ’s metrics are an invaluable tool when analyzing the inner workings of a deployment. They are essential both for determining the root causes of past events and for predicting future requirements as a use case expands.
Such analysis is usually most efficient when the full set of metrics - which number in the thousands - is available. To make this possible without requiring direct access to our customers' internal tooling, we have created a tool for the Prometheus/Grafana stack. With its help, metrics can be exported, uploaded via our ticket system, and ultimately imported into our own monitoring environment.
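For illustration only, here is a minimal sketch of pulling a single metric series out of a Prometheus server over its HTTP API so that the raw data can be attached to a support ticket. It is not the HiveMQ export tool mentioned above; the Prometheus address, the time window, and the metric name com_hivemq_messages_incoming_total_count are assumptions to adapt to your own monitoring setup.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch: export one HiveMQ metric from Prometheus as raw JSON.
// Assumptions: Prometheus is reachable at PROMETHEUS, and HiveMQ metrics are
// exposed with a com_hivemq_ prefix by the Prometheus monitoring setup.
public class MetricExportSketch {

    private static final String PROMETHEUS = "http://prometheus.example.com:9090";

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Example metric and time range - adapt both to the incident window.
        String query = URLEncoder.encode("com_hivemq_messages_incoming_total_count", StandardCharsets.UTF_8);
        long end = System.currentTimeMillis() / 1000;
        long start = end - 3600; // last hour
        String url = PROMETHEUS + "/api/v1/query_range?query=" + query
                + "&start=" + start + "&end=" + end + "&step=60";

        HttpResponse<String> response = http.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Write the raw JSON to disk so it can be compressed and uploaded to the support portal.
        Files.writeString(Path.of("hivemq-metrics-export.json"), response.body());
        System.out.println("Exported " + response.body().length() + " bytes of metric data");
    }
}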
Have access to the HiveMQ logs ($HIVEMQ_HOME/log/hivemq.log, event.log, migration.log) and be able to collect log files from all broker nodes: HiveMQ Logs.
HiveMQ makes use of its logging to inform of various events. These range from INFO to WARN and ERROR level messages and can often give a direct indication of why a cluster is behaving as it is. Among other events, the logs clearly show when Overload Protection [OP documentation link] is activated or when new cluster members join. They are also crucial when performing rolling upgrades [RU documentation link], as they provide clear information on when it is safe to remove the next node. Both hivemq.log and event.log are usually among the first things HiveMQ Support will request when asked to analyze a cluster's behavior.
Log files are located on every node of the HiveMQ cluster, so it is important to collect and analyze the logs from all nodes.
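As a minimal sketch of such a collection, the snippet below bundles the local node's $HIVEMQ_HOME/log directory into a single zip archive; it assumes HIVEMQ_HOME is set and would need to be run once on every broker node.

import java.io.IOException;
import java.net.InetAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Minimal sketch: bundle this node's HiveMQ log directory into one zip archive
// so it can be uploaded to the support portal. Run it once per broker node.
public class LogBundleSketch {

    public static void main(String[] args) throws IOException {
        Path logDir = Path.of(System.getenv("HIVEMQ_HOME"), "log");
        String node = InetAddress.getLocalHost().getHostName();
        Path archive = Path.of("hivemq-logs-" + node + ".zip");

        try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(archive));
             Stream<Path> files = Files.walk(logDir)) {
            for (Path file : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                // Keep the relative path (e.g. access/access.log) inside the archive.
                zip.putNextEntry(new ZipEntry(logDir.relativize(file).toString()));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
        System.out.println("Wrote " + archive + " - collect this archive from every cluster node.");
    }
}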
Be able to Create Thread Dump Series: Creating Thread Dump Series.
When analyzing resource congestion on a broker that cannot be explained with the help of the log messages described above, it can sometimes be necessary to obtain a more detailed view of the broker's active threads. Creating a thread dump series is helpful in such cases, as all active threads spawned by the application are captured. Because a single thread dump only captures one point in time, multiple thread dumps (a series) are usually taken so that heavy consumers become visible through analysis. The following KB article gives instructions on how to capture these files on various platforms: Creating Thread Dump Series.
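For illustration, a thread dump series could be automated along the following lines, calling the JDK's jcmd tool at a fixed interval. The broker PID, the number of dumps, and the interval are assumptions; the linked KB article remains the recommended procedure for your platform.

import java.io.File;
import java.util.concurrent.TimeUnit;

// Minimal sketch: capture a series of thread dumps from a running HiveMQ broker
// by invoking jcmd repeatedly. Pass the broker's PID as the first argument.
public class ThreadDumpSeriesSketch {

    public static void main(String[] args) throws Exception {
        String brokerPid = args[0];   // PID of the HiveMQ process
        int dumps = 6;                // how many dumps to take (assumption)
        long intervalSeconds = 10;    // spacing between dumps (assumption)

        for (int i = 1; i <= dumps; i++) {
            File out = new File("threaddump-" + i + "-" + System.currentTimeMillis() + ".txt");
            new ProcessBuilder("jcmd", brokerPid, "Thread.print")
                    .redirectOutput(out)
                    .redirectErrorStream(true)
                    .start()
                    .waitFor();
            TimeUnit.SECONDS.sleep(intervalSeconds);
        }
        System.out.println("Captured " + dumps + " thread dumps - upload all of them together.");
    }
}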
Be able to Force a Heap Dump: Forcing a Heap Dump.
By default, HiveMQ is configured to dump the heap's content to disk should the heap ever fill up and crash the application. Since such a heap dump gives an accurate picture of the heap's content at the time of creation, its analysis is the most precise tool in the search for root causes in such cases.
Please note that the creation of a heap dump is a 'stop the world' event from the point of view of the Java application since the runtime is blocked as it writes to disk. It is vital never to trigger the creation of a heap dump in a production environment unless specifically told to do so by a HiveMQ support engineer!
Should you wish to verify heap dumps are created correctly during an actual incident, you can force one by following our KB article [force heap dump link]. You may upload the resulting .hprof file to our Support Portal [upload files link] and we will be happy to verify its validity.
Uploads of files up to 20 GB in size are supported. Compressing a .hprof file using zip or a similar technique will drastically reduce its size and upload time.
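Strictly for such a non-production verification run (never in production, per the warning above), a heap dump could also be forced along these lines via the JVM's HotSpotDiagnostic MXBean over a remote JMX connection. The JMX address and output path are assumptions, remote JMX must already be enabled on the broker JVM, and the linked KB article remains the authoritative procedure.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import com.sun.management.HotSpotDiagnosticMXBean;

// Minimal sketch: trigger a heap dump on a broker JVM via remote JMX.
// WARNING: this is a stop-the-world operation - never run it against a
// production broker unless a HiveMQ support engineer asks you to.
public class ForceHeapDumpSketch {

    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint of the broker JVM; adapt host and port to your setup.
        JMXServiceURL jmxUrl = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-node-1:9010/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(jmxUrl)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                    connection, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);

            // The path is resolved on the broker host; 'true' dumps only live objects.
            diagnostics.dumpHeap("/tmp/hivemq-heapdump.hprof", true);
        }
        System.out.println("Heap dump written on the broker host - compress the .hprof before uploading.");
    }
}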
Have access to the HiveMQ Control Center to:
view current information, such as:
client details, license details, online and offline clients, client and shared-subscription message queues, message throughput, and dropped messages
perform administrative tasks, such as:
creating a trace recording for a client and generating a Diagnostic Archive for the cluster
If using HiveMQ version 4.2x (e.g., 4.21) or greater, have permission to generate a Diagnostic Archive from the HiveMQ Control Center: Create a Diagnostic Archive with the HiveMQ Control Center.
If troubleshooting MQTT clients, ensure the HiveMQ broker license includes the Trace Recording feature and that your employees can access and use it: Add New Trace Recording.
In case of an issue with the Enterprise Security Extension, employees can collect $HIVEMQ_HOME/log/access/access.log: Access Log.
The access log lets you retroactively audit every access that the Enterprise Security Extension grants. This is crucial information when investigating client permission issues.
For questions regarding any HiveMQ Extensions (e.g., the HiveMQ Extension for Kafka), employees can collect the extension’s configuration files (e.g., $HIVEMQ_HOME/extensions/hivemq-kafka-extension/conf/config.xml, $HIVEMQ_HOME/extensions/hivemq-enterprise-security-extension/conf/config.xml, etc.) and check whether the extension is DISABLED: Enable or Disable a HiveMQ Extension.
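As a small sketch of such a check, the snippet below lists every installed extension, whether a conf/config.xml is present, and whether a DISABLED marker file exists. It assumes HIVEMQ_HOME is set and that your extensions keep their configuration under conf/config.xml, which is not the case for every extension; the linked documentation describes the enable/disable mechanism for your version.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Minimal sketch: report the status of each folder under $HIVEMQ_HOME/extensions.
// A DISABLED marker file in an extension folder indicates the extension is disabled.
public class ExtensionStatusSketch {

    public static void main(String[] args) throws IOException {
        Path extensionsDir = Path.of(System.getenv("HIVEMQ_HOME"), "extensions");

        try (Stream<Path> extensions = Files.list(extensionsDir)) {
            extensions.filter(Files::isDirectory).forEach(extension -> {
                boolean disabled = Files.exists(extension.resolve("DISABLED"));
                boolean hasConfig = Files.exists(extension.resolve("conf").resolve("config.xml"));
                System.out.printf("%-45s disabled=%-5s config.xml=%s%n",
                        extension.getFileName(), disabled, hasConfig);
            });
        }
    }
}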
Have completed the MQTT Essentials and MQTT 5 Essentials lectures, be able to search through the official MQTT specifications, and be able to explain basic MQTT functionality: MQTT Essentials lectures, MQTT 5 Essentials lectures, MQTT 3 Specification, MQTT 5 Specification
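As a quick self-check of those basics, the sketch below runs through CONNECT, SUBSCRIBE, PUBLISH, and DISCONNECT using the open-source HiveMQ MQTT Client library (assumed dependency com.hivemq:hivemq-mqtt-client); the broker host, topic, and client identifier are placeholders.

import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.mqtt5.Mqtt5BlockingClient;
import java.nio.charset.StandardCharsets;

// Minimal sketch: exercise basic MQTT 5 functionality against a broker using
// the blocking flavor of the HiveMQ MQTT Client.
public class MqttBasicsSketch {

    public static void main(String[] args) {
        Mqtt5BlockingClient client = MqttClient.builder()
                .useMqttVersion5()
                .identifier("support-readiness-check")   // placeholder client id
                .serverHost("broker.example.com")        // placeholder broker host
                .serverPort(1883)
                .buildBlocking();

        client.connect();

        client.subscribeWith()
                .topicFilter("support/readiness")
                .qos(MqttQos.AT_LEAST_ONCE)
                .send();

        client.publishWith()
                .topic("support/readiness")
                .qos(MqttQos.AT_LEAST_ONCE)
                .payload("hello from the readiness check".getBytes(StandardCharsets.UTF_8))
                .send();

        client.disconnect();
    }
}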
Please feel free to share this KB article with your internal teams as a reference.