Message Dropped - The QoS 0 Memory Limit Exceeded | Troubleshooting
What Does This Message Mean?
First, some context on QoS 0:
As defined by the MQTT specification, messages sent with a Quality of Service level set to 0, or otherwise noted as QoS 0, utilize a "fire and forget" delivery method with no acknowledgments or retries. Consequently, there is no inherent queuing with QoS 0. This means that messages may be lost if no subscriber is present or if issues occur in transit. Publishers and subscribers are decoupled, with no guarantee of end-to-end delivery.
What Does "QoS 0 Memory Limit Exceeded" Indicate?
It is important to understand that this:
is not a broker error
occurs when the broker's global memory limit or a per-client limit for QoS 0 messages is exceeded because the client(s) cannot consume messages quickly enough.
can be caused by a temporary network disruption, a disconnected client, or a client that simply receives messages faster than it can consume them.
HiveMQ Broker Features Related to QoS 0
Note: Both of the following features go beyond the MQTT specification. That is, they are features designed to support QoS 0 message operations while remaining compliant with the MQTT specification.
Global QoS 0 Buffer (Internal)
The HiveMQ Enterprise Broker provides a global QoS 0 memory limit defined as one quarter (25%) of the total heap memory provided to the JVM that runs the HiveMQ broker. The heap size is typically set in the run.sh startup script, located within your HiveMQ installation directory at /HiveMQ/bin/run.sh.
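As a rough sanity check, the global QoS 0 limit can be derived directly from the configured JVM heap. A minimal sketch (the function name and heap value are illustrative, not a HiveMQ API):

```python
def global_qos0_limit_bytes(jvm_heap_bytes: int) -> int:
    """Global QoS 0 memory limit: one quarter (25%) of the JVM heap,
    per the behavior described above. Illustrative helper, not a HiveMQ API."""
    return jvm_heap_bytes // 4

# Example: a broker started with a 4 GB heap (-Xmx4g)
heap = 4 * 1024 ** 3
print(global_qos0_limit_bytes(heap))  # 1073741824 bytes, i.e. 1 GB
```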
Per-Client QoS 0 Buffer (Internal)
Additionally, each client is provided a 5 MB buffer by default, from which it receives its QoS 0 messages.
Behavior of Full Buffers
When either of these limits is exceeded, additionally published QoS 0 messages are dropped. If the global memory limit is exceeded, this applies to any QoS 0 message that cannot be immediately consumed. If a per-client QoS 0 memory limit is exceeded, only messages for the associated client are dropped.
What does a QoS 0 Dropped Message Look Like?
Dropped QoS 0 messages are indicated in a number of locations and are often easy to identify.
As these are considered dropped messages, they are recorded in several places within your HiveMQ Control Center:
The Dashboard will display a warning message within the notifications section, indicating dropped QoS 0 messages.
The Analytics > Dropped Messages tab will show a graph of all dropped messages and timings for these drops.
The HiveMQ event.log file, located by default within the HiveMQ installation directory at /HiveMQ/log/event.log, records each dropped-message event, including dropped QoS 0 messages. These entries appear in the following format:
2025-04-01 13:48:22,590 - Outgoing publish message was dropped. Receiving consumer: testClient, topic: spBv1.0/test_topic/DDATA/1, qos: 0, reason: The QoS 0 memory limit exceeded, size: 5,442,880 bytes, max: 5,242,880 bytes.
Once MQTT Add-on Topics have been enabled on the HiveMQ broker, any MQTT client with the appropriate permissions can subscribe to the topic $dropped/# to receive all dropped messages. Paired with a monitoring MQTT client, this provides a reliable indication whenever a message is dropped.
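A monitoring client for this topic only needs a small message handler. The handler below is a plain function so it can be wired into any MQTT library; the commented paho-mqtt wiring, the broker hostname, and the exact topic structure beneath the $dropped/ prefix are assumptions (check the MQTT Add-ons documentation for the structure your broker publishes):

```python
def handle_dropped(topic: str, payload: bytes) -> dict:
    """Record one message received on the $dropped/# add-on topic.
    The meaning of the topic segments after $dropped/ depends on the
    add-on configuration, so we only strip the prefix here."""
    prefix = "$dropped/"
    if not topic.startswith(prefix):
        raise ValueError(f"not a $dropped topic: {topic}")
    return {"suffix": topic[len(prefix):], "payload": payload}

# Hypothetical wiring with paho-mqtt (a sketch, not part of HiveMQ):
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.on_message = lambda c, u, msg: print(handle_dropped(msg.topic, msg.payload))
# client.connect("broker.example.com", 1883)   # hostname is a placeholder
# client.subscribe("$dropped/#", qos=1)
# client.loop_forever()
```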
If you have a monitoring setup configured for your HiveMQ broker, the following metrics can be used to identify occurrences of QoS 0 message drops, or other expected QoS 0 behaviors:
com.hivemq.qos-0-memory.used: The current bytes used by QoS 0 messages
com.hivemq.qos-0-memory.max: The maximum allowed bytes for QoS 0 messages
com.hivemq.qos-0-memory.exceeded.per-client: The number of clients exceeding QoS 0 memory
com.hivemq.messages.dropped.qos-0-memory-exceeded.count: The number of PUBLISH messages dropped because the global memory limit for QoS 0 messages was exceeded
For a full list of available metrics, see the 'Monitoring' documentation linked in Related Documents below.
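These metrics can feed a simple alerting rule. A sketch of a threshold check; the inputs would come from your monitoring system's readings of the metrics above, and the 80% warning ratio is an arbitrary illustrative choice, not a HiveMQ default:

```python
def qos0_alerts(used_bytes: int, max_bytes: int,
                dropped_count: int, warn_ratio: float = 0.8) -> list[str]:
    """Return alert strings derived from the QoS 0 memory metrics above.
    The 0.8 warning ratio is an illustrative threshold, not a broker default."""
    alerts = []
    if max_bytes > 0 and used_bytes / max_bytes >= warn_ratio:
        alerts.append(
            f"QoS 0 memory at {used_bytes / max_bytes:.0%} of the global limit")
    if dropped_count > 0:
        alerts.append(
            f"{dropped_count} QoS 0 message(s) dropped (memory limit exceeded)")
    return alerts

# Example: 90% of the global limit used, 3 drops already recorded
print(qos0_alerts(used_bytes=900, max_bytes=1000, dropped_count=3))
```

Alerting on the utilization ratio lets you react before drops start, rather than after.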
Troubleshooting
Here are some suggestions to better identify when QoS 0 message drops occur, obtain additional details about the dropped messages, and adjust your configuration.
Monitor occurrences of dropped messages by utilizing the monitoring and alerting locations noted in the section above:
Subscribe to the MQTT Add-on topic $dropped/# to receive all dropped messages.
Use metrics, such as com.hivemq.messages.dropped.qos-0-memory-exceeded.count, to monitor QoS 0 messages dropped for memory-related reasons.
Check whether individual QoS 0 message sizes exceed the current per-client QoS 0 memory limit.
Examine the metrics listed in the Monitoring section above.
Recommendations and Potential Solutions
Adjust Client configuration to keep up with inbound message flow.
If the client can be configured to accept more inbound messages, this may be a viable option to prevent the memory buffer from filling.
If the client does not meet its expected performance, for example it cannot keep up with the current message rate, evaluate the client further to identify the reasons for this performance gap.
Clients exceeding QoS 0 Limits
If individual messages are exceeding the QoS 0 memory limits:
Consider reducing the per-message size, or
Split payloads across multiple publishes
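Splitting an oversized payload is straightforward if the subscriber can reassemble the chunks. A minimal sketch; the chunk size and any sequencing scheme are up to your application, and since QoS 0 gives no ordering or delivery guarantee, a real design needs sequence numbers and tolerance for lost chunks:

```python
def split_payload(payload: bytes, chunk_size: int) -> list[bytes]:
    """Split a payload into chunks no larger than chunk_size bytes,
    each of which would be sent as its own PUBLISH."""
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

# Example: a 10-byte payload split into 4-byte publishes
chunks = split_payload(b"0123456789", 4)
print([len(c) for c in chunks])  # [4, 4, 2]
assert b"".join(chunks) == b"0123456789"
```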
Utilize Shared Subscriptions
Utilize additional available strategies to reduce the per-client message load.
Shared subscriptions distribute messages across a group of subscribing clients, with each client in the group taking turns receiving messages, thereby reducing the message rate for each individual client.
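In MQTT 5, a shared subscription is expressed through the topic filter itself, using the $share/<group>/<topic filter> form. A small helper to build that filter (the group name and topic below are illustrative):

```python
def shared_subscription(group: str, topic_filter: str) -> str:
    """Build a shared-subscription topic filter of the form
    $share/<group>/<topic filter> (MQTT 5 shared subscriptions)."""
    return f"$share/{group}/{topic_filter}"

# Each client in the "workers" group subscribes with the same filter;
# the broker then distributes matching messages across the group.
print(shared_subscription("workers", "spBv1.0/test_topic/DDATA/1"))
```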
Related Documents
Control Center Analytics :: HiveMQ Documentation
HiveMQ MQTT Add-ons :: HiveMQ Documentation
Monitoring :: HiveMQ Documentation