Why Nginx Hardening Matters in 2026
Nginx powers over 35% of all websites and serves as the go-to reverse proxy for countless production environments. That kind of ubiquity paints a massive target on it — webshells account for nearly half of observed Linux malware incidents, and brute-force attempts make up as much as 89% of attack traffic against public-facing endpoints. A default Nginx installation? It exposes version strings, accepts oversized requests, happily serves traffic over plaintext, and runs with way more privileges than it should.
I've spent years hardening Nginx deployments across everything from small startups to enterprise environments, and the pattern is always the same: people install it, get their app running, and move on. The hardening never happens — until something goes wrong.
This guide walks you through a complete, layered hardening strategy for Nginx on Linux. We're covering TLS 1.3 cipher selection, ModSecurity WAF integration, request-level rate limiting, and systemd sandboxing. Every configuration block here is production-tested against Nginx 1.28.x (stable) on Ubuntu 24.04 and RHEL 9, and each section targets a specific attack class so you can prioritize based on your actual threat model.
Pre-Hardening: Installation and Baseline Audit
Install the Latest Stable Nginx
Always run the latest stable release — that's Nginx 1.28.3 as of April 2026 — which patches several critical CVEs including buffer overflows in the DAV and MP4 modules. Use the official Nginx repository rather than your distribution's potentially outdated package:
# Ubuntu/Debian — fetch the signing key, then add the official Nginx stable repo
curl -fsSL https://nginx.org/keys/nginx_signing.key \
  | gpg --dearmor \
  | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
sudo tee /etc/apt/sources.list.d/nginx.list <<EOF
deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx
EOF
sudo apt update && sudo apt install nginx
# RHEL 9 / Rocky Linux 9
sudo tee /etc/yum.repos.d/nginx.repo <<EOF
[nginx-stable]
name=nginx stable repo
baseurl=https://nginx.org/packages/rhel/9/\$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF
sudo dnf install nginx
Audit Your Current Exposure
Before you change anything, capture your baseline security posture. You might be surprised (or horrified) by what you find:
# Check the Nginx version and compiled modules
nginx -V 2>&1 | head -5
# Audit the systemd service security score (0=best, 10=worst)
systemd-analyze security nginx
# Scan your TLS configuration (from a remote host)
nmap --script ssl-enum-ciphers -p 443 your-server.example.com
A default Nginx systemd unit typically scores 9.2 out of 10 on security exposure — meaning almost no sandboxing at all. By the end of this guide, you'll bring that below 3.0.
TLS 1.3 Hardening
Transport encryption is your first line of defense against eavesdropping, credential theft, and session hijacking. Weak protocol versions (TLS 1.0, 1.1) and legacy ciphers like RC4 and 3DES, along with CBC-mode suites, still show up in way too many deployments despite known attacks like POODLE, BEAST, and Lucky13.
Protocol and Cipher Configuration
# /etc/nginx/conf.d/tls-hardening.conf
ssl_protocols TLSv1.2 TLSv1.3;
# TLS 1.2 ciphers — AEAD only, no CBC
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
# Let clients pick the cipher that best matches their hardware
# (safe when all ciphers in the list are strong)
ssl_prefer_server_ciphers off;
# Session resumption without tickets (tickets leak forward secrecy)
ssl_session_cache shared:TLS:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# ECDH curve selection
ssl_ecdh_curve X25519:secp384r1;
Setting ssl_prefer_server_ciphers off might seem counterintuitive — I know it tripped me up the first time. But when every cipher in your list is strong, letting clients choose allows mobile devices without AES-NI hardware acceleration to select ChaCha20-Poly1305, which performs significantly better in software.
OCSP Stapling
OCSP stapling eliminates the client-side OCSP lookup, shaving off 100-300ms of latency and preventing privacy leaks to the CA's OCSP responder:
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/chain.pem;
resolver 127.0.0.53 [::1] valid=300s;
resolver_timeout 5s;
HSTS and HTTPS Redirect
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;  # Nginx 1.25.1+ syntax; the old "listen ... http2" parameter is deprecated
# HSTS — force HTTPS for 2 years, include subdomains
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
# ... rest of server config
}
A word of caution here: only add the preload directive after you've confirmed that all subdomains support HTTPS. Removal from browser preload lists takes months, and I've seen teams get burned by this when they forgot about that internal staging subdomain.
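One lower-risk path, sketched here with an assumed one-day initial max-age, is to roll HSTS out in stages: short-lived and without preload first, then lengthen and add preload only once every subdomain is confirmed on HTTPS.

```nginx
# Stage 1: short-lived HSTS, no preload — easy to back out of if a subdomain breaks
add_header Strict-Transport-Security "max-age=86400; includeSubDomains" always;

# Stage 2 (after verifying all subdomains): the full two-year directive
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
```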
Verify Your TLS Configuration
# Quick local check
openssl s_client -connect localhost:443 -tls1_3 </dev/null 2>&1 | grep -E "Protocol|Cipher"
# Confirm TLS 1.1 is rejected
openssl s_client -connect localhost:443 -tls1_1 </dev/null 2>&1 | grep -i "alert"
Security HTTP Headers
HTTP response headers tell browsers to enforce security policies on the client side. Missing headers leave your users wide open to clickjacking, XSS, MIME confusion, and data exfiltration. This is one of the easiest wins in the entire hardening process.
# /etc/nginx/conf.d/security-headers.conf
# Include this file in each server block or in the http block
# Prevent MIME type sniffing
add_header X-Content-Type-Options "nosniff" always;
# Clickjacking protection
add_header X-Frame-Options "SAMEORIGIN" always;
# Referrer policy — send origin for HTTPS, nothing for HTTP
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Permissions policy — disable unused browser features
add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), payment=()" always;
# Content Security Policy — adjust to your application
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self'; connect-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self';" always;
# Prevent caching of sensitive responses
add_header Cache-Control "no-store, no-cache, must-revalidate" always;
add_header Pragma "no-cache" always;
Two things to watch out for here. First, add_header directives in a child block override all parent-level headers — this catches people all the time. Always use include files rather than scattering headers across blocks. Second, test your Content-Security-Policy in report-only mode first (Content-Security-Policy-Report-Only) before enforcing it. An overly restrictive CSP will break your application in ways that can be really annoying to debug.
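Here's a concrete sketch of that inheritance trap (the /admin/ path and the X-Robots-Tag header are hypothetical):

```nginx
server {
    # Parent-level headers apply to every location below...
    include /etc/nginx/conf.d/security-headers.conf;

    location /admin/ {
        # ...but a single add_header here discards ALL inherited headers,
        # so re-include the shared set next to any block-specific header.
        include /etc/nginx/conf.d/security-headers.conf;
        add_header X-Robots-Tag "noindex" always;
    }
}
```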
Information Disclosure Prevention
Every piece of information your server leaks helps attackers narrow their approach. Hide version strings, restrict error page detail, and strip unnecessary headers:
# /etc/nginx/nginx.conf — inside the http block
# Hide Nginx version from headers and error pages
server_tokens off;
# Remove the Server header entirely (requires headers-more module)
# more_clear_headers Server;
# Custom error pages that reveal nothing about the backend
error_page 400 401 403 404 /custom_error.html;
error_page 500 502 503 504 /custom_error.html;
location = /custom_error.html {
internal;
root /usr/share/nginx/html;
}
Request Controls: Rate Limiting, Buffers, and Method Restrictions
Rate Limiting
Rate limiting is your primary defense against brute-force attacks, credential stuffing, and application-layer DDoS. Nginx's built-in limit_req module uses the leaky-bucket algorithm to throttle excessive requests per client IP. It's surprisingly effective for how simple it is to set up.
# /etc/nginx/conf.d/rate-limiting.conf
# General pages: 10 requests/second per IP
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
# Login/auth endpoints: 1 request/second per IP (anti brute-force)
limit_req_zone $binary_remote_addr zone=auth:10m rate=1r/s;
# API endpoints: 30 requests/second per IP
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
# Connection limits — mitigate slowloris
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
limit_conn conn_per_ip 20;
Now apply the zones in your server blocks:
location /login {
limit_req zone=auth burst=3 nodelay;
limit_req_status 429;
proxy_pass http://backend;
}
location /api/ {
limit_req zone=api burst=50 nodelay;
limit_req_status 429;
proxy_pass http://backend;
}
location / {
limit_req zone=general burst=20 nodelay;
proxy_pass http://backend;
}
The burst parameter allows short traffic spikes without dropping legitimate users, while nodelay serves burst requests immediately rather than queuing them. One detail people often miss: return HTTP 429 (Too Many Requests) instead of the default 503 so clients can actually distinguish rate limiting from server errors.
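To see why burst=3 admits a short spike and then starts rejecting, here's a toy bash model of the milli-request bookkeeping limit_req uses internally. This is an illustration of the leaky-bucket algorithm, not nginx source; arrival times are made up.

```shell
#!/usr/bin/env bash
# Toy model of limit_req accounting for: rate=1r/s, burst=3, nodelay.
# Nginx tracks "excess" in milli-requests; the bucket drains at the
# configured rate and each arrival adds 1000.
rate=1        # requests per second
burst=3000    # burst=3, scaled to milli-requests
excess=0 last=0
for t in 0 100 200 300 400 5000; do        # request arrival times in ms
  drained=$(( (t - last) * rate ))         # 1 r/s drains 1 milli-req per ms
  if (( excess > drained )); then excess=$(( excess - drained )); else excess=0; fi
  if (( excess + 1000 > burst )); then
    echo "t=${t}ms -> 429 (excess=${excess})"  # over burst: rejected
  else
    excess=$(( excess + 1000 ))
    echo "t=${t}ms -> 200 (excess=${excess})"  # within burst: served
  fi
  last=$t
done
```

The five rapid requests get three 200s and two 429s, while the request 4.6 seconds later is served because the bucket has fully drained.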
Buffer and Timeout Hardening
Oversized request buffers let a single client tie up disproportionate server memory, and slow connections enable slowloris-style denial of service. You need to constrain both:
# Request body and header limits
client_max_body_size 10m;
client_body_buffer_size 128k;
client_header_buffer_size 1k;
large_client_header_buffers 4 16k;
# Timeout protection against slow attacks
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 15s;
send_timeout 10s;
HTTP Method Restriction
Most web applications only need GET, POST, and HEAD. Allow-list those and deny the rest: methods like PUT and DELETE risk unintended data modification, and TRACE (which Nginx itself rejects with a 405) has historically enabled cross-site tracing. Note that OPTIONS is required for CORS preflight requests, so permit it on paths that serve cross-origin clients:
location / {
limit_except GET POST HEAD {
deny all;
}
}
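If a path legitimately needs more verbs — say a REST API using PUT and DELETE, or cross-origin clients sending OPTIONS preflights — scope the wider allow-list to that location only (the /api/ path here is illustrative):

```nginx
location /api/ {
    # Broader verb set, but only under /api/ — everything else stays GET/POST/HEAD
    limit_except GET POST HEAD PUT DELETE OPTIONS {
        deny all;
    }
    proxy_pass http://backend;
}
```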
ModSecurity WAF with OWASP CRS v4
So, rate limiting handles volumetric abuse — but what about the clever stuff? A Web Application Firewall (WAF) inspects actual request payloads and blocks application-layer attacks like SQL injection, XSS, and path traversal. ModSecurity v3 paired with OWASP Core Rule Set v4.25.0 (the first LTS release) gives you this layer.
Install ModSecurity v3 and the Nginx Connector
# Install build dependencies (Ubuntu/Debian)
sudo apt install -y build-essential git libcurl4-openssl-dev \
libgeoip-dev liblmdb-dev libpcre2-dev libtool libxml2-dev \
libyajl-dev pkgconf wget zlib1g-dev
# Clone and build libmodsecurity
git clone --depth 1 -b v3/master https://github.com/owasp-modsecurity/ModSecurity
cd ModSecurity
git submodule init && git submodule update
./build.sh && ./configure
make -j$(nproc) && sudo make install
# Clone the Nginx connector
cd ..
git clone --depth 1 https://github.com/owasp-modsecurity/ModSecurity-nginx
# Rebuild Nginx with the connector as a dynamic module
# (use the same configure flags from your installed nginx -V output)
nginx_version=$(nginx -v 2>&1 | grep -oP '[\d.]+$')
wget http://nginx.org/download/nginx-${nginx_version}.tar.gz
tar xzf nginx-${nginx_version}.tar.gz
cd nginx-${nginx_version}
./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
make modules
sudo cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules/
Deploy OWASP CRS v4
# Clone the rule set
sudo git clone -b v4.25.0 https://github.com/coreruleset/coreruleset \
/etc/nginx/modsec/owasp-crs
# Set up configuration
cd /etc/nginx/modsec/owasp-crs
sudo cp crs-setup.conf.example crs-setup.conf
# Copy the recommended ModSecurity configuration
sudo cp /path/to/ModSecurity/modsecurity.conf-recommended \
/etc/nginx/modsec/modsecurity.conf
Configure ModSecurity in Nginx
# /etc/nginx/nginx.conf — at the top level
load_module modules/ngx_http_modsecurity_module.so;
http {
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;
# ... other directives
}
Create the main rules include file:
# /etc/nginx/modsec/main.conf
Include /etc/nginx/modsec/modsecurity.conf
Include /etc/nginx/modsec/owasp-crs/crs-setup.conf
Include /etc/nginx/modsec/owasp-crs/rules/*.conf
Start in Detection Mode, Then Enforce
This is honestly the step most people skip, and it's the one that'll save you from a 3 AM pager alert. Edit /etc/nginx/modsec/modsecurity.conf and start with detection-only mode:
# Phase 1: Monitor — log but don't block
SecRuleEngine DetectionOnly
# Audit logging for investigation
SecAuditEngine RelevantOnly
SecAuditLog /var/log/nginx/modsec_audit.log
SecAuditLogFormat JSON
Run in detection mode for 1-2 weeks while keeping an eye on /var/log/nginx/modsec_audit.log for false positives. Create targeted rule exclusions for legitimate requests, then switch to enforcing:
# Phase 2: Enforce — block matching requests
SecRuleEngine On
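When detection mode surfaces a false positive, handle it with a targeted exclusion loaded after the CRS rules rather than by loosening the engine. A hypothetical example — the rule ID 10001, the /api/notes path, and the description parameter are all placeholders for whatever your audit log shows:

```
# /etc/nginx/modsec/crs-exclusions.conf — include this after the CRS rules in main.conf
# Rule 942100 (libinjection SQLi check) misfires on free-text notes, so exclude
# just that one parameter on just that one path:
SecRule REQUEST_URI "@beginsWith /api/notes" \
    "id:10001,phase:1,pass,nolog,ctl:ruleRemoveTargetById=942100;ARGS:description"
```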
Tune Paranoia Levels and Anomaly Thresholds
CRS v4 uses anomaly scoring — each rule match adds points, and requests exceeding the threshold get blocked. Edit crs-setup.conf:
# Paranoia Level: 1 (default) to 4 (maximum)
# Start at 1, increase gradually
SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.blocking_paranoia_level=1"
# Anomaly threshold: lower = stricter
# Default: 5 for inbound, 4 for outbound
SecAction "id:900110,phase:1,nolog,pass,t:none,setvar:tx.inbound_anomaly_score_threshold=5"
Test the WAF
# SQL injection test — should return 403
curl -s -o /dev/null -w "%{http_code}" \
"https://your-server.example.com/?id=1%20UNION%20SELECT%20*%20FROM%20users"
# XSS test — should return 403
curl -s -o /dev/null -w "%{http_code}" \
"https://your-server.example.com/?q=<script>alert(1)</script>"
# Path traversal test — should return 403
curl -s -o /dev/null -w "%{http_code}" \
"https://your-server.example.com/?file=../../../../etc/passwd"
If all three return 403, you're in good shape. If not, check your ModSecurity logs to see what's happening.
Maintain the Rule Set
# Monthly CRS update
cd /etc/nginx/modsec/owasp-crs
sudo git fetch --tags
sudo git checkout v4.25.0 # or the latest stable tag
sudo nginx -t && sudo systemctl reload nginx
Systemd Sandboxing for Nginx
Even with a perfectly hardened Nginx configuration, a zero-day vulnerability could allow arbitrary code execution. That's where systemd sandboxing comes in — it contains the blast radius by restricting what the Nginx process can actually do at the kernel level. We're talking filesystem paths, system calls, capabilities, and network interfaces.
Create a Hardened Override
# Create the override directory
sudo mkdir -p /etc/systemd/system/nginx.service.d
# Write the hardening drop-in
sudo tee /etc/systemd/system/nginx.service.d/hardening.conf <<'EOF'
[Service]
# Prevent privilege escalation
NoNewPrivileges=yes
# Filesystem isolation
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
# Grant write access only where Nginx needs it
# (non-prefixed paths must exist at start; a leading "-" makes a path optional,
#  as with the ModSecurity log below)
ReadWritePaths=/var/log/nginx /var/cache/nginx /run/nginx.pid
ReadWritePaths=-/var/log/modsec
# Kernel and system protection
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectKernelLogs=yes
ProtectControlGroups=yes
ProtectClock=yes
ProtectProc=invisible
LockPersonality=yes
# Restrict Linux capabilities to the minimum
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_CHOWN CAP_DAC_OVERRIDE
AmbientCapabilities=CAP_NET_BIND_SERVICE
# Network restrictions — only IPv4, IPv6, and Unix sockets
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Namespace and realtime restrictions
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
# Memory execution protection
MemoryDenyWriteExecute=yes
# System call filtering
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallFilter=~@mount @reboot @swap @debug @obsolete
# Resource limits
TasksMax=512
MemoryHigh=512M
MemoryMax=768M
EOF
# Apply changes
sudo systemctl daemon-reload
sudo systemctl restart nginx
Verify the Sandbox
# Re-run the security analysis — target score below 3.0
systemd-analyze security nginx
# Confirm Nginx is running correctly (-k: the cert likely won't match "localhost")
curl -kI https://localhost
journalctl -u nginx --since "5 minutes ago" --no-pager
If Nginx fails to start after applying the sandbox, don't panic. Check the journal for denied syscalls or file access errors and adjust ReadWritePaths or SystemCallFilter accordingly. The key is to add exceptions incrementally rather than ripping out entire hardening directives — that defeats the purpose.
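As a concrete example of that incremental approach: suppose the journal shows the worker crashing at startup and you suspect MemoryDenyWriteExecute clashing with PCRE's JIT compiler (a known interaction). Relax only that one directive in a second drop-in instead of deleting the hardening file — the file name and the choice of directive here are illustrative:

```ini
# /etc/systemd/system/nginx.service.d/zz-exceptions.conf
[Service]
# PCRE JIT needs writable+executable memory; alternatively, keep this
# directive enabled and set "pcre_jit off;" in nginx.conf instead.
MemoryDenyWriteExecute=no
```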
Structured Logging for Security Monitoring
Plain-text access logs are a pain to parse at scale. JSON-formatted logs, on the other hand, integrate directly with SIEM platforms like the Elastic Stack, Splunk, or Wazuh, enabling real-time threat detection and correlation. If you're not doing this already, it's a game-changer for incident response.
# /etc/nginx/conf.d/logging.conf
log_format json_security escape=json
'{"@timestamp":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"remote_user":"$remote_user",'
'"request_method":"$request_method",'
'"request_uri":"$request_uri",'
'"status":$status,'
'"body_bytes_sent":$body_bytes_sent,'
'"http_referer":"$http_referer",'
'"http_user_agent":"$http_user_agent",'
'"request_time":$request_time,'
'"upstream_response_time":"$upstream_response_time",'
'"ssl_protocol":"$ssl_protocol",'
'"ssl_cipher":"$ssl_cipher",'
'"request_length":$request_length,'
'"connection":$connection,'
'"limit_req_status":"$limit_req_status"}';
access_log /var/log/nginx/access.json json_security;
error_log /var/log/nginx/error.log warn;
This log format captures the TLS protocol and cipher for each request, which makes it trivially easy to spot clients still using weak protocols. The limit_req_status field flags rate-limited requests, and request_time helps you identify anomalously slow requests that might indicate an attack in progress.
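Once the logs are JSON, even a quick shell pipeline can surface weak-protocol clients. A sketch — the sample lines below stand in for /var/log/nginx/access.json, and grep is used so it works without jq:

```shell
#!/usr/bin/env bash
# Sample JSON access-log lines (stand-ins for /var/log/nginx/access.json)
sample='{"remote_addr":"203.0.113.7","ssl_protocol":"TLSv1.2","status":200}
{"remote_addr":"198.51.100.4","ssl_protocol":"TLSv1.3","status":200}
{"remote_addr":"203.0.113.7","ssl_protocol":"TLSv1.2","status":429}'

# Count requests per negotiated TLS protocol, most common first
counts=$(printf '%s\n' "$sample" \
  | grep -o '"ssl_protocol":"[^"]*"' \
  | sort | uniq -c | sort -rn)
echo "$counts"
```

Against the real log, anything other than TLSv1.3 in the output is a client worth investigating before you consider dropping TLS 1.2.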
Automated Verification Checklist
Hardening isn't a one-and-done thing. Configs drift, packages get updated, and someone inevitably makes a "quick change" that undoes your work. Automate periodic checks to catch this before attackers do:
#!/bin/bash
# /usr/local/bin/nginx-security-audit.sh
set -euo pipefail
echo "=== Nginx Security Audit ==="
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo ""
# 1. Version check
echo "[*] Nginx version:"
nginx -v 2>&1
# 2. Configuration syntax
echo "[*] Config test:"
nginx -t 2>&1
# 3. Systemd security score
echo "[*] Systemd security score:"
systemd-analyze security nginx 2>&1 | tail -1
# 4. TLS verification
echo "[*] TLS protocols supported:"
nmap --script ssl-enum-ciphers -p 443 localhost 2>/dev/null | grep -E "TLSv|SSLv" || true  # || true: don't abort under set -e/pipefail when grep matches nothing
# 5. Security headers
echo "[*] Security headers:"
curl -skI https://localhost 2>/dev/null | grep -iE "strict-transport|x-frame|x-content-type|content-security|referrer-policy" || true
# 6. ModSecurity status
echo "[*] ModSecurity test (SQL injection):"
status=$(curl -sk -o /dev/null -w "%{http_code}" "https://localhost/?id=1%20UNION%20SELECT%20*" || true)
if [ "$status" = "403" ]; then
echo " PASS: SQLi blocked (HTTP $status)"
else
echo " FAIL: SQLi not blocked (HTTP $status)"
fi
echo ""
echo "=== Audit Complete ==="
Schedule this script with a cron job or systemd timer and pipe the output to your monitoring platform. Treat any regression as a high-priority alert — seriously, don't let it sit in a queue.
Putting It All Together
Security hardening works best as a layered strategy where each control compensates for potential failures in the others. Here's the recommended implementation order:
- Update Nginx to the latest stable release (1.28.3) to close known CVEs
- Enable TLS 1.3 with strong ciphers and HSTS to encrypt all traffic
- Add security headers to instruct browsers to enforce client-side protections
- Configure rate limiting to throttle brute-force and DDoS attempts
- Deploy ModSecurity + OWASP CRS in detection mode, tune for your app, then enforce
- Apply systemd sandboxing to contain the blast radius of any compromise
- Enable JSON logging and forward to a SIEM for real-time monitoring
- Automate auditing to detect configuration drift before attackers do
Each layer addresses a distinct attack class: TLS prevents eavesdropping, headers prevent client-side attacks, rate limiting prevents volumetric abuse, the WAF blocks application-layer exploitation, sandboxing limits post-exploitation lateral movement, and monitoring makes sure you catch what slips through. No single layer is foolproof — that's the whole point of defense in depth.
Frequently Asked Questions
How do I check if my Nginx server is properly hardened?
Run systemd-analyze security nginx for a sandbox score (aim below 3.0), use nmap --script ssl-enum-ciphers to verify only TLS 1.2/1.3 are accepted, and check security headers with curl -sI https://your-server. For automated ongoing audits, use the verification script from this guide on a weekly cron schedule and forward results to your monitoring platform.
Does ModSecurity slow down Nginx?
At paranoia level 1 with the OWASP CRS, ModSecurity v3 adds roughly 1-3ms of latency per request. That's barely noticeable. At higher paranoia levels with complex rule sets, overhead can increase to 5-10ms — still negligible compared to most backend processing times. If performance is critical, start at paranoia level 1 and only bump it up where your threat model demands it. You can also selectively disable ModSecurity on high-throughput, low-risk endpoints like static asset paths.
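Disabling inspection per location looks like this (the /static/ path is an example):

```nginx
location /static/ {
    modsecurity off;          # skip WAF inspection for static assets only
    root /var/www/app;
}
```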
Should I use TLS 1.2 or TLS 1.3 only?
Keep both TLS 1.2 and TLS 1.3 enabled for now. TLS 1.3 gives you better performance (one fewer round trip) and stronger security (no legacy cipher negotiation), but some older clients and corporate proxies still need TLS 1.2. Definitely remove TLS 1.0 and 1.1 though — they have known vulnerabilities and all modern browsers have dropped support. If you know your users are exclusively on modern browsers and devices, going TLS 1.3 only is the most secure option.
How do I handle ModSecurity false positives without disabling the WAF?
Whatever you do, never set SecRuleEngine Off to deal with false positives. Instead, find the specific rule ID causing the issue from the audit log, then create a targeted exclusion in a separate file loaded after the CRS rules. Use SecRuleRemoveById for broad exclusions or SecRuleUpdateTargetById to exclude specific parameters from a rule. Always document why each exclusion exists and review them quarterly — you'd be surprised how many "temporary" exclusions become permanent.
What is the difference between rate limiting in Nginx vs. fail2ban?
They work at different levels, and honestly you should use both. Nginx rate limiting operates at the application layer inside the Nginx process, using the leaky-bucket algorithm to throttle requests per client IP in real time. Fail2ban operates at the network layer by scanning log files for patterns and dynamically adding firewall rules (iptables/nftables) to block offending IPs entirely. Use Nginx rate limiting for immediate, granular request throttling, and fail2ban for banning repeat offenders at the firewall level before they even reach Nginx.
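A minimal fail2ban jail for banning clients that repeatedly trip Nginx's limits might look like this — the thresholds are illustrative, and the nginx-limit-req filter ships with fail2ban:

```ini
# /etc/fail2ban/jail.d/nginx-limit-req.local
[nginx-limit-req]
enabled  = true
port     = http,https
filter   = nginx-limit-req        # matches "limiting requests" lines in the error log
logpath  = /var/log/nginx/error.log
maxretry = 10                     # ban after 10 rate-limit hits...
findtime = 600                    # ...within 10 minutes
bantime  = 3600                   # for one hour
```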