Too many open files

Observation

The hivemq.log shows statements like the following:


2020-04-29 11:55:57.467 ERROR 999 — [ool-20-thread-1] oshi.util.FileUtil : Error reading file /proc/stat. {}
java.nio.file.FileSystemException: /proc/stat: Too many open files
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[na:1.8.0_212]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_212]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_212]
    at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[na:1.8.0_212]
    at java.nio.file.Files.newByteChannel(Files.java:361) ~[na:1.8.0_212]
    at java.nio.file.Files.newByteChannel(Files.java:407) ~[na:1.8.0_212]
    at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384) ~[na:1.8.0_212]
    at java.nio.file.Files.newInputStream(Files.java:152) ~[na:1.8.0_212]
    at java.nio.file.Files.newBufferedReader(Files.java:2784) ~[na:1.8.0_212]
    at java.nio.file.Files.readAllLines(Files.java:3202) ~[na:1.8.0_212]
    at oshi.util.FileUtil.readFile(Unknown Source) [hivemq.jar:3.4.4]
    at oshi.util.FileUtil.readFile(Unknown Source) [hivemq.jar:3.4.4]
    at oshi.hardware.platform.linux.LinuxCentralProcessor.getProcessorCpuLoadTicks(Unknown Source) [hivemq.jar:3.4.4]
    at bV.a.c(Unknown Source) [hivemq.jar:3.4.4]
    at bV.a.a(Unknown Source) [hivemq.jar:3.4.4]
    at bV.b.run(Unknown Source) [hivemq.jar:3.4.4]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_212]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_212]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_212]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_212]

Consequence

The impact of running out of file descriptors is severe.

From Wikipedia (https://en.wikipedia.org/wiki/File_descriptor):

In Unix and related computer operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket.

The most immediate impact for HiveMQ is that the broker cannot accept any additional connections, because no file descriptors are left for new network sockets.
This can also affect the broker's ability to write log files or replicate data.
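
To confirm that the broker process is actually hitting its limit, compare the number of file descriptors it currently holds against the limit enforced for it. A minimal check on Linux might look like this (finding the PID via the hivemq.jar process name is an assumption; adjust the pattern to your setup):

pid=$(pgrep -f hivemq.jar)                # assumes the broker runs from hivemq.jar
ls /proc/$pid/fd | wc -l                  # file descriptors currently in use
grep 'Max open files' /proc/$pid/limits   # soft and hard limit for this process

If the in-use count is close to the soft limit, the errors above are expected.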

Solution

Increase the open file limit on your system.

Add the following lines to the /etc/security/limits.conf file:

hivemq hard nofile 1000000
hivemq soft nofile 1000000
root hard nofile 1000000
root soft nofile 1000000
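
Limits set in /etc/security/limits.conf are applied via PAM and only take effect for new login sessions, so restart HiveMQ from a fresh session afterwards. A quick way to verify the new limit (assuming the broker runs under the hivemq user) is:

su -s /bin/sh hivemq -c 'ulimit -n'

This should print 1000000 once the new limit is in place.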

This blog post adds some context.
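
Note that PAM limits do not apply to processes started by systemd; as the related article below points out, systemd versions up to 240 default the maximum open files to 65535. In that case, raise the limit in a drop-in override for the service instead (the unit name hivemq.service is an assumption):

# /etc/systemd/system/hivemq.service.d/nofile.conf
[Service]
LimitNOFILE=1000000

Run systemctl daemon-reload and restart the service for the override to take effect.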

Related content

Control Center User Permissions not working correctly
Systemd <=240 version defaults Max open files to 65535
HiveMQ Kubernetes Operator UPGRADE FAILED: post-upgrade hooks failed: job failed
Get information about file descriptors used by system
Client Event History performance decreases over time
“Not Enough Disk Space” Warning in the HiveMQ Control Center