
Step 1

There are two ways to get the current open and maximum file descriptor (FD) counts.

  1. Enable monitoring with HiveMQ, for example with the Prometheus Extension. The highly performant metrics subsystem of HiveMQ lets you monitor relevant metrics with no reduction in system performance (even in low-latency, high-throughput environments).

    In the context of this article, you can check the following metrics (see the curl sketch after this list):
      com.hivemq.system.open-file-descriptor
      com.hivemq.system.max-file-descriptor

    These metrics provide the count of currently open file descriptors and the maximum number of file descriptors.

  2. As an alternative, you can use the following shell commands to get the same information:

    1. The following command displays the soft limit, hard limit, and units of measurement for each of the process's resource limits. The "Max open files" row shows the file descriptor limit.
      $ cat /proc/${PID}/limits
      where ${PID} is your HiveMQ broker's process ID.

    2. The following command shows how many file descriptors are currently in use system-wide.
      $ cat /proc/sys/fs/file-nr
      You can interpret the file content as:
      column 1 = total allocated file descriptors (the number of file descriptors allocated since boot)
      column 2 = total free allocated file descriptors
      column 3 = maximum open file descriptors

    3. $ ulimit -n shows the number of open file descriptors allowed per process.
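
If the HiveMQ Prometheus Extension is installed, a minimal way to read the two metrics from its HTTP endpoint is sketched below. The port 9399 and the /metrics path are assumed defaults; adjust them to the values configured in your prometheusConfiguration.properties.

Code Block
languagebash
# Query the Prometheus Extension endpoint and filter the file descriptor metrics.
# Port 9399 and path /metrics are assumptions; change them if your extension is configured differently.
curl -s http://localhost:9399/metrics | grep -i "file.descriptor"

The output should include the current values of com.hivemq.system.open-file-descriptor and com.hivemq.system.max-file-descriptor (the metric names may appear with underscores in the Prometheus exposition format).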

Step 2

Please execute the following steps to get the list of open files.

  1. Install lsof if it is not already available; otherwise skip this step

    • $ apt update && apt install lsof

  2. Switch to the hivemq user (the UID of the process)

    • $ su hivemq

  3. Dump all open files of the broker process to a text file

    • $ lsof -p ${PID} > open_files_pid_${PID}.txt

Share the created open_files_pid_${PID}.txt with HiveMQ support to investigate further.
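
If you prefer a single command sequence, the sketch below finds the broker's PID and dumps its open files in one step. It assumes the broker process can be located with pgrep -f hivemq and that you run it as the hivemq user (or with sufficient privileges to read the process's file table).

Code Block
languagebash
# Locate the HiveMQ broker's PID (assumes a single matching process).
PID=$(pgrep -f hivemq | head -n 1)
# Dump all open files of the process into a text file for HiveMQ support.
lsof -p "${PID}" > "open_files_pid_${PID}.txt"
# Quick count of the file descriptors currently open by the process.
ls /proc/"${PID}"/fd | wc -l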

How to increase Open File Descriptor Limits

User File Descriptor Limits

Per-user limits are set in the following file: /etc/security/limits.conf

The file has the following syntax:

Code Block
<domain> <type> <item> <value>

Setting soft and hard open file limits for the user hivemq (the limits.conf item for open file descriptors is nofile):

Code Block
hivemq  soft  nofile  1000000
hivemq  hard  nofile  1000000
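
If you want to add these lines from the shell instead of an editor, one possible way (assuming sudo is available) is:

Code Block
languagebash
# Append the nofile limits for the hivemq user to limits.conf.
echo "hivemq soft nofile 1000000" | sudo tee -a /etc/security/limits.conf
echo "hivemq hard nofile 1000000" | sudo tee -a /etc/security/limits.conf

The new limits only apply to sessions started after the change.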

Checking User File Limits

User soft open file limit:

Code Block
languagebash
ulimit -Sn

User hard open file limit:

Code Block
languagebash
ulimit -Hn

Checking for another user (hivemq):

Code Block
languagebash
su hivemq
ulimit -Hn
ulimit -Sn
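
Changes to limits.conf do not affect an already running broker. To check the limits that the running HiveMQ process actually has, you can read its limits file directly (assuming pgrep -f hivemq finds the broker's PID):

Code Block
languagebash
# Show the effective open file limits of the running HiveMQ process.
grep "Max open files" /proc/$(pgrep -f hivemq | head -n 1)/limits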

Kernel File Descriptor Limits

System-wide open file descriptor limits are set with sysctl. For example, to increase the limit to 1000000 open file descriptors:

Code Block
languagebash
sysctl -w fs.file-max=1000000

Note that this setting only lasts until the next reboot.

Check your work:

Code Block
languagebash
cat /proc/sys/fs/file-max
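
To see how much headroom is left before the system-wide limit is reached, you can compare the first and third columns of /proc/sys/fs/file-nr (explained in Step 1), for example:

Code Block
languagebash
# Print allocated vs. maximum file descriptors system-wide.
awk '{printf "allocated: %s  max: %s\n", $1, $3}' /proc/sys/fs/file-nr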

To make the change permanent (persisted across reboots), edit /etc/sysctl.conf:

Code Block
languagebash
vi /etc/sysctl.conf

Add the following line:

Code Block
languagebash
fs.file-max=1000000

The change in /etc/sysctl.conf takes effect at the next reboot. To apply the new limit immediately, you can use the following command:

Code Block
languagebash
# sysctl -p
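
After reloading, you can confirm that the new value is active:

Code Block
languagebash
# Read back the effective kernel limit.
sysctl fs.file-max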
