This article elaborates on the points in https://hivemq.atlassian.net/wiki/x/h4BCoQ in more detail and presents them in tabular format.
To create a ticket with the Support team as a Self-Managed customer, your team's access needs to be created manually in the ticketing system. Please reach out to your Customer Success Manager to arrange this. If one of your team members already has access to the ticketing system, access for new users can be requested via the "Create New User" form.
Info |
---|
If you encounter challenges with any of the steps mentioned below, please create a "Support Request" ticket and our support team will be happy to assist you further. |
S No | Topic | Why is this important? |
---|---|---|
1 | Ensure access to the Support Portal's "Upload" button | Without the Upload button, it is impossible to upload files greater than 2MB; with the Upload button, files up to 20GB can be uploaded. |
2 | Understand HiveMQ's system requirements | 90% of installation issues occur because system requirements are not met, e.g., only 50% of RAM, minimum CPU/RAM criteria not met, etc. |
3 | Export HiveMQ Metrics from either Prometheus or a Grafana Datasource | HiveMQ’s metrics are an invaluable tool when analyzing the inner workings of a deployment. They can be essential both in determining the root causes of past events and in predicting future requirements for use case expansion. Such analysis is often most efficient when the full set of metrics, which runs into the thousands, is available. To make this possible without the need for direct access to our customers' internal tooling, we have created a tool for the Prometheus/Grafana stack. With its help, metrics can be exported, uploaded via our ticket system, and ultimately imported into our own monitoring environment (an illustrative export sketch follows the table). |
4 | Collect HiveMQ log files (hivemq.log and event.log) from all cluster nodes | HiveMQ makes use of its logging to inform of various events. These range from INFO to WARN and ERROR level messages and can often give a direct indication of why a cluster is behaving as it is. Among other events, logs will clearly show when Overload Protection [OP documentation link] is activated or new cluster members join. They are also crucial when performing rolling upgrades [RU documentation link], as they provide clear information on when it is safe to remove the next node. Both hivemq.log and event.log will usually be among the first things HiveMQ Support will request when asked to analyze a cluster's behavior. Log files are located on every node of the HiveMQ cluster, so it is important to collect and analyze the logs from all nodes. |
5 | In case of an issue with an Enterprise Security Extension, employees can collect | |
6 | Capture a thread dump series | When analyzing resource congestion on a broker that cannot be explained with the help of the already detailed messages, it can sometimes be necessary to obtain a more detailed view of the broker's active threads. Creating a thread dump series is helpful in such cases, as all active threads spawned by the application are captured. Because a thread dump only captures a single point in time, multiple thread dumps (a series) are usually taken to make heavy consumers visible through analysis (a minimal capture sketch follows the table). The following KB article gives instructions on how to capture these files on various platforms: |
7 | Verify heap dump creation | By default, HiveMQ is configured to dump the heap's content to disk should the heap ever fill up and crash the application. Since the content of such a heap dump gives an accurate picture of the heap at the time of creation, its analysis is our most precise tool in the search for root causes in such cases. Please note that the creation of a heap dump is a 'stop the world' event from the point of view of the Java application, since the runtime is blocked while it writes to disk. It is vital never to trigger the creation of a heap dump in a production environment unless specifically told to do so by a HiveMQ support engineer! Should you wish to verify heap dumps are created correctly during an actual incident, you can force one by following our KB article [force heap dump link]. You may upload the resulting .hprof file to our Support Portal [upload files link] and we will be happy to verify its validity. Upload of files up to a size of 20GB is supported. Compressing a .hprof using zip or similar techniques will drastically reduce its size and upload time (a compression sketch follows the table). |
8 | HiveMQ Control Center access | |
9 | Generate a Diagnostic Archive from the HiveMQ Control Center | If using HiveMQ version 4.2x (e.g., 4.21) or greater, ensure your Control Center permissions include the ability to generate a Diagnostic Archive from the HiveMQ Control Center. This will include information about the cluster members that can be useful when trying to understand the deployment in depth. Besides information such as the loaded configuration (passwords are redacted), it will also include a thread dump series as well as a short snippet of HiveMQ metrics spanning a few minutes. |
10 | MQTT client Trace Recordings | When troubleshooting MQTT clients, ensure the HiveMQ broker license includes the option to record Trace Recordings, and that your employees can access and use it. |
11 | Collect HiveMQ Extension configuration files | For questions regarding any HiveMQ Extension (e.g., the HiveMQ Extension for Kafka), employees can collect the extension’s configuration files (e.g., $HIVEMQ_HOME/extensions/hivemq-kafka-extension/conf/config.xml, $HIVEMQ_HOME/extensions/hivemq-enterprise-security-extension/conf/config.xml, etc.) and check if the extension is DISABLED (a collection and check sketch follows the table): |
12 | MQTT Essentials lectures, MQTT 5 Essentials lectures, MQTT 3 Specification, MQTT 5 Specification | Have passed the MQTT Essentials and MQTT 5 Essentials lectures, be able to search through the official MQTT specifications, and explain basic MQTT functionality (a minimal publish/subscribe sketch follows the table): |
...
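The sketches below illustrate selected rows from the table above. They are illustrative only; all hosts, paths, PIDs, metric names, and topic names are placeholders, and the KB articles linked in the table remain the authoritative instructions.

For row 3 (metrics export), the following is a minimal sketch of the underlying idea: pulling a metric's recent samples out of Prometheus via its standard HTTP API so they can be attached to a ticket. It is not the HiveMQ-provided export tool; the Prometheus URL, metric name, and time window are assumptions.

```python
# Minimal sketch: export one HiveMQ metric from Prometheus via its HTTP API.
# NOTE: this is an illustration, not the HiveMQ-provided export tool.
# The URL, metric name, and time range below are placeholders/assumptions.
import json
import time

import requests  # pip install requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"    # assumption
METRIC = "com_hivemq_messages_incoming_publish_count"         # assumption
END = time.time()
START = END - 3600  # last hour

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query_range",
    params={"query": METRIC, "start": START, "end": END, "step": "15s"},
    timeout=30,
)
resp.raise_for_status()

# Write the raw result to a file that can be uploaded with the support ticket.
with open("hivemq_metric_export.json", "w") as f:
    json.dump(resp.json(), f)
print("Exported", METRIC, "to hivemq_metric_export.json")
```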
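For row 6 (thread dump series), a minimal sketch of capturing several dumps with `jcmd` from the JDK. The process ID, number of dumps, and interval are placeholders; prefer the instructions in the linked KB article for your platform.

```python
# Minimal sketch: capture a thread dump series for the HiveMQ JVM using jcmd.
# Assumes a JDK's jcmd is on the PATH and the HiveMQ process ID is known.
# The PID, number of dumps, and interval are placeholders.
import subprocess
import time

HIVEMQ_PID = "12345"   # placeholder: PID of the HiveMQ java process
DUMPS = 5              # a series of dumps makes heavy consumers visible
INTERVAL_SECONDS = 10

for i in range(DUMPS):
    out = subprocess.run(
        ["jcmd", HIVEMQ_PID, "Thread.print"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(f"thread_dump_{i}.txt", "w") as f:
        f.write(out)
    if i < DUMPS - 1:
        time.sleep(INTERVAL_SECONDS)
print(f"Captured {DUMPS} thread dumps")
```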
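For row 7 (heap dumps), a small sketch of zip-compressing an .hprof file before uploading it to the Support Portal; the file name is a placeholder.

```python
# Minimal sketch: zip-compress a heap dump (.hprof) before uploading it.
# Heap dumps compress very well, which drastically reduces upload time.
# The file name below is a placeholder.
import zipfile

HPROF = "java_pid12345.hprof"  # placeholder heap dump file

with zipfile.ZipFile(HPROF + ".zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(HPROF)
print("Wrote", HPROF + ".zip")
```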
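For row 11 (extension configuration), a sketch that copies each extension's config.xml into one folder for upload and reports whether a DISABLED marker file is present in the extension folder. The HiveMQ home path and destination folder are assumptions.

```python
# Minimal sketch: list HiveMQ extensions, copy their config.xml files,
# and report whether a DISABLED marker file is present in each folder.
# $HIVEMQ_HOME fallback and the destination folder are placeholders.
import os
import shutil

HIVEMQ_HOME = os.environ.get("HIVEMQ_HOME", "/opt/hivemq")  # assumption
EXTENSIONS_DIR = os.path.join(HIVEMQ_HOME, "extensions")
DEST = "extension_configs_for_support"
os.makedirs(DEST, exist_ok=True)

for name in sorted(os.listdir(EXTENSIONS_DIR)):
    folder = os.path.join(EXTENSIONS_DIR, name)
    if not os.path.isdir(folder):
        continue
    disabled = os.path.isfile(os.path.join(folder, "DISABLED"))
    print(f"{name}: {'DISABLED' if disabled else 'enabled'}")
    config = os.path.join(folder, "conf", "config.xml")
    if os.path.isfile(config):
        shutil.copy(config, os.path.join(DEST, f"{name}-config.xml"))
```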
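For row 12 (basic MQTT functionality), a minimal publish/subscribe sketch using the Eclipse Paho Python client: connect, subscribe, and publish with QoS 1. The broker host, topic, and use of the paho-mqtt 1.x callback API are assumptions.

```python
# Minimal sketch of basic MQTT functionality: connect, subscribe, publish (QoS 1).
# Written against the paho-mqtt 1.x API; broker host and topic are placeholders.
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER = "broker.example.internal"  # placeholder HiveMQ broker
TOPIC = "test/quality-check"

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe(TOPIC, qos=1)
    client.publish(TOPIC, payload="hello from the support checklist", qos=1)

def on_message(client, userdata, msg):
    print(f"Received on {msg.topic}: {msg.payload.decode()}")
    client.disconnect()

client = mqtt.Client(client_id="support-readiness-check")
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```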