Cassandra Crash - Too many open files
We recently had our Cassandra cluster crashing with the following kind of error:
Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: /path/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/mc_txn_flush_8bdc78f0-7d48-11e9-9b2e-0f78ea2b6c2b.log: Too many open files
This usually means your Cassandra process is running into the system-imposed limit on the number of open files.
The first step is to make sure you have relaxed your system's hard limits on open files as per the recommended Cassandra settings.
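Before changing anything, it is worth checking what limits your current shell session actually has (a quick sanity check; exact values vary by distribution):

ulimit -Sn    # soft limit on open files for the current session
ulimit -Hn    # hard limit on open files for the current session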
Set the following limits in /etc/security/limits.conf
* soft nofile 100000
* hard nofile 100000
root soft nofile 100000
root hard nofile 100000
For this to take effect you can either reboot the machine or log out and log back in; the limits in limits.conf are applied by PAM at the start of a login session. (Note that sudo sysctl -p only reloads kernel parameters from /etc/sysctl.conf and does not apply these limits.)
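It is also worth confirming that the kernel-wide cap on open file handles sits comfortably above the per-process limit you just configured:

cat /proc/sys/fs/file-max    # system-wide maximum number of open file handles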
So far so good. However, whenever we restarted Cassandra, the process would crash within a few minutes with a similar error. We also checked the open file limits of the process in question by first finding the pid of the Cassandra process and running
cat /proc/<pid>/limits
To our surprise, we found that the limit was still
Max open files 4096 4096 files
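For reference, here is a sketch of that check as a couple of shell commands (assuming Cassandra was started with the stock CassandraDaemon main class; run these as root or as the user Cassandra runs under):

pid=$(pgrep -f CassandraDaemon)            # find the pid of the Cassandra JVM
grep "Max open files" /proc/$pid/limits    # the limits the running process actually has
ls /proc/$pid/fd | wc -l                   # how many descriptors it currently holds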
After some digging, we found the solution.
For processes managed by systemd, the limits in limits.conf are not honored (system services do not go through a PAM login session), so you need to set the limit in the unit file, /etc/systemd/system/cassandra.service:
[Service]
LimitNOFILE=100000
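If you would rather not edit the unit file directly, a systemd drop-in override should also work and survives package upgrades (a sketch, assuming the unit is named cassandra):

sudo systemctl edit cassandra    # creates and opens an override file under /etc/systemd/system/cassandra.service.d/

Put the same two lines ([Service] and LimitNOFILE=100000) in the override file.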
Then, for this to take effect, run
sudo systemctl daemon-reload
and restart the Cassandra service. This fixed the open files limit, and Cassandra started to hum along once again.
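To double-check that the new limit actually took effect, you can ask systemd directly and re-inspect the live process (again assuming the unit is named cassandra and the JVM runs CassandraDaemon):

systemctl show -p LimitNOFILE cassandra                          # should print LimitNOFILE=100000
grep "Max open files" /proc/$(pgrep -f CassandraDaemon)/limits   # should now show 100000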