If you're running a high-traffic website or handling a large number of concurrent connections with Nginx, you might encounter the dreaded "too many open files" error.
By default, Nginx and Linux impose limits on the number of connections and open files a process can handle.
In this guide, we'll walk through increasing these limits to optimize Nginx's performance, allowing it to handle many times the simultaneous connections permitted by the default settings.
Each active connection in Nginx consumes file descriptors (FDs).
The default open file limit for the user running the nginx process on Linux systems is often quite low, typically around 1024, and varies by distribution.
Nginx's default worker_connections limit is also low, around 1024 or below, and can easily be exhausted during traffic spikes.
Assuming the open file limit for the user running the nginx process is 1024 and each nginx worker allows the default of 768 connections, we can calculate how many connections nginx is able to proxy.
Proxying uses 2 FDs per connection, so 768 connections * 2 FDs per connection => 1536 file descriptors per worker; the number of workers defaults to the number of CPU cores on the system, for a theoretical demand of 1536 * number of CPU cores.
But if the nginx user's open file limit is 1024, the nginx process can theoretically proxy only 1024 / 2 => 512 connections in total across all workers before encountering the dreaded "too many open files" error.
It gets worse: nginx also needs to open the files it serves (all the js, css, html, fonts, images etc.), and each open file uses one file descriptor.
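The arithmetic above can be sketched in a few lines of shell. The numbers here are the defaults discussed in the text, not values read from a live system:

```shell
#!/bin/sh
# File descriptor budget with typical defaults.
OPEN_FILE_LIMIT=1024    # default per-user open file limit (ulimit -n)
FDS_PER_PROXIED_CONN=2  # one FD for the client side, one for the upstream side

MAX_PROXIED=$((OPEN_FILE_LIMIT / FDS_PER_PROXIED_CONN))
echo "max proxied connections before 'too many open files': $MAX_PROXIED"
```

With the defaults this prints 512, matching the calculation above.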
Increasing the number of files and connections that Nginx can handle requires first setting up the operating system correctly.
First check which user is running nginx by running the command:
ps -eo user,comm | grep nginx
Example output:
www-data nginx
root nginx
This means Nginx worker processes are running under the www-data user.
If Nginx is running as a different user, adjust accordingly (e.g., if it's running as nginx, use nginx in the following commands).
The master process running as root is responsible for managing and spawning worker processes, but it doesn’t handle client connections directly.
The master process does not need to have a high open file limit because it primarily handles configuration, signals, and managing worker processes.
Master process usually doesn’t need changes unless running at a very large scale.
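If you want to script the steps that follow, the worker user can be extracted from the ps output instead of read by eye. A small sketch, fed the example output above rather than a live ps:

```shell
#!/bin/sh
# Pick the first non-root user running an nginx process.
# On a real system, replace the printf with: ps -eo user,comm | grep nginx
PS_OUTPUT="www-data nginx
root nginx"

NGINX_USER=$(printf '%s\n' "$PS_OUTPUT" | awk '$2 == "nginx" && $1 != "root" { print $1; exit }')
echo "nginx workers run as: $NGINX_USER"
```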
Now, check the limits for the worker processes, which run under the www-data or nginx user.
# If Nginx is running as www-data
cat /proc/$(pgrep -u www-data nginx | head -n 1)/limits | grep "open files"
# or
# If Nginx is running as nginx
cat /proc/$(pgrep -u nginx nginx | head -n 1)/limits | grep "open files"
Example output:
Max open files 1024 4096 files
This shows the soft and hard limits for open files for the worker process.
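The soft and hard values can also be pulled out of that line programmatically with awk. A sketch using the example line above in place of a live /proc read:

```shell
#!/bin/sh
# Extract soft and hard limits from a "Max open files" line of /proc/<pid>/limits.
# On a real system:
#   LINE=$(grep "Max open files" /proc/$(pgrep -u www-data nginx | head -n 1)/limits)
LINE="Max open files            1024                 4096                 files"

SOFT=$(printf '%s\n' "$LINE" | awk '{ print $4 }')
HARD=$(printf '%s\n' "$LINE" | awk '{ print $5 }')
echo "soft=$SOFT hard=$HARD"
```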
To check the limits for the user running Nginx (e.g., www-data or nginx), run:
sudo -u www-data bash -c 'ulimit -Sn' # Check the soft limit
sudo -u www-data bash -c 'ulimit -Hn' # Check the hard limit
# or
sudo -u nginx bash -c 'ulimit -Sn' # Check the soft limit
sudo -u nginx bash -c 'ulimit -Hn' # Check the hard limit
Note that ulimit is a shell builtin, not a standalone command, so it has to be run through a shell.
Check the system-wide limit for open files:
cat /proc/sys/fs/file-max
Modify the limits for the Nginx user (e.g., www-data or nginx).
Open the file for editing using nano:
sudo nano /etc/security/limits.conf
The values below are safe limits for open files; with some testing you can probably go much higher.
Add the desired limit for the user running nginx workers by adding the lines below at the end of the file:
www-data soft nofile 65535
www-data hard nofile 65535
# or
nginx soft nofile 65535
nginx hard nofile 65535
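To make sure both lines made it into the file (a missing hard entry is an easy mistake), you can grep them back out. A sketch that runs against a temporary copy instead of the real /etc/security/limits.conf:

```shell
#!/bin/sh
# Sanity-check that both a soft and a hard nofile line exist for the user.
# Point LIMITS_FILE at /etc/security/limits.conf on a real system.
LIMITS_FILE=$(mktemp)
cat > "$LIMITS_FILE" <<'EOF'
www-data soft nofile 65535
www-data hard nofile 65535
EOF

USER_NAME=www-data
SOFT_COUNT=$(awk -v u="$USER_NAME" '$1 == u && $2 == "soft" && $3 == "nofile"' "$LIMITS_FILE" | wc -l)
HARD_COUNT=$(awk -v u="$USER_NAME" '$1 == u && $2 == "hard" && $3 == "nofile"' "$LIMITS_FILE" | wc -l)
[ "$SOFT_COUNT" -ge 1 ] && [ "$HARD_COUNT" -ge 1 ] && echo "nofile limits present for $USER_NAME"
rm -f "$LIMITS_FILE"
```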
Open the file for editing using nano:
sudo nano /etc/pam.d/common-session
Add this line:
session required pam_limits.so
2.3 Set the Limit for the Nginx systemd Service
Open the file for editing using nano:
sudo nano /etc/systemd/system/nginx.service.d/override.conf
Add this line:
[Service]
LimitNOFILE=65535
Then reload the systemd configuration:
sudo systemctl daemon-reload
3. Increase System-Wide Limits
Modify kernel parameters to allow more open files.
Open the file for editing using nano:
sudo nano /etc/sysctl.conf
Add this line:
fs.file-max = 2097152
Apply changes:
sudo sysctl -p
3.2 Edit /etc/systemd/system.conf
Open the file for editing using nano:
sudo nano /etc/systemd/system.conf
Add this line:
DefaultLimitNOFILE=65535
3.3 Edit /etc/systemd/user.conf
Open the file for editing using nano:
sudo nano /etc/systemd/user.conf
Add this line:
DefaultLimitNOFILE=65535
4. Verify the Changes
Restart the services, then check that the new limits are applied.
sudo systemctl restart systemd-logind
sudo systemctl restart nginx
Check the limits for the Nginx worker processes:
# If Nginx is running as www-data
cat /proc/$(pgrep -u www-data nginx | head -n 1)/limits | grep "open files"
# or
# If Nginx is running as nginx
cat /proc/$(pgrep -u nginx nginx | head -n 1)/limits | grep "open files"
Check the system-wide open file limit:
cat /proc/sys/fs/file-max
5. Tweak Nginx Configuration Settings
To handle a large number of connections, you should also adjust the following settings in the Nginx configuration.
The key settings to tweak are:
5.1 Increase the limit for open files
The limit on how many files each Nginx worker can open is set using worker_rlimit_nofile in the main Nginx config file, usually found in /etc/nginx/nginx.conf.
This allows nginx to open more static files (html, js, css, fonts) for serving and to open more connections to upstream servers for proxying.
Set worker_rlimit_nofile to no more than roughly half the maximum allowed by the system, leaving the other half for other users and processes.
Edit the Nginx configuration file:
sudo nano /etc/nginx/nginx.conf
Modify the worker_rlimit_nofile setting:
# Keep worker_rlimit_nofile at or below the hard limit set for the nginx user
worker_rlimit_nofile 65535;
Restart nginx for the settings to take effect.
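One way to sanity-check the value is to parse it back out of the config and compare it against the hard limit configured for the nginx user. A sketch with the sample values from above, fed a literal config line rather than reading /etc/nginx/nginx.conf:

```shell
#!/bin/sh
# worker_rlimit_nofile should not exceed the hard nofile limit of the worker user.
# On a real system:
#   CONF_VALUE=$(awk '/^worker_rlimit_nofile/ { gsub(";", "", $2); print $2 }' /etc/nginx/nginx.conf)
CONF_LINE="worker_rlimit_nofile 65535;"
HARD_LIMIT=65535   # hard limit set in /etc/security/limits.conf earlier

CONF_VALUE=$(printf '%s\n' "$CONF_LINE" | awk '{ gsub(";", "", $2); print $2 }')
if [ "$CONF_VALUE" -le "$HARD_LIMIT" ]; then
    echo "worker_rlimit_nofile $CONF_VALUE fits within the hard limit $HARD_LIMIT"
fi
```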
After enabling each nginx worker to open enough files with a high enough worker_rlimit_nofile, it's time to raise the number of connections that nginx can open.
Nginx uses 2 file descriptors per proxied connection: one for the client connection and one for the upstream connection.
With worker_rlimit_nofile set to 65535, each worker could in theory proxy 65535 / 2 => about 32767 connections; setting worker_connections to 10240 consumes at most 20480 file descriptors per worker for proxying and leaves roughly 45000 for other files.
This way Nginx should be able to proxy 10240 concurrent connections per worker (one worker per CPU core by default) and still serve files at the same time.
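The same budget can be sketched as arithmetic, using the 65535 and 10240 values chosen in this guide:

```shell
#!/bin/sh
# How many FDs a given worker_connections value consumes for proxying,
# and how many remain for static files, per worker.
WORKER_RLIMIT_NOFILE=65535
WORKER_CONNECTIONS=10240
FDS_PER_PROXIED_CONN=2

USED=$((WORKER_CONNECTIONS * FDS_PER_PROXIED_CONN))
LEFT=$((WORKER_RLIMIT_NOFILE - USED))
echo "proxying uses up to $USED FDs per worker, leaving $LEFT for files"
```

With these values, proxying uses at most 20480 descriptors per worker and 45055 remain for serving files.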
Edit the Nginx configuration file:
sudo nano /etc/nginx/nginx.conf
Modify the worker_connections setting:
# Adjust worker_connections based on your expected traffic load
worker_connections 10240;
Restart nginx for the settings to take effect.
By following these steps, you've optimized the open file limits for Nginx.
This setup lets Nginx handle far more concurrent connections by allowing more open files, both for proxying and for serving a large number of static files.
These limits should be safe starting points; as always, it's best to test and tweak for your particular setup.
You might be able to go even higher.