Systemd Service Hardening: Sandbox and Secure Every Linux Daemon

Learn how to audit, sandbox, and harden every systemd service on your Linux servers. Covers 40+ security directives with tested override files for Nginx, PostgreSQL, and Redis, encrypted secrets with systemd-creds, and the latest features in systemd v256/v257.

Why Systemd Service Hardening Matters in 2026

Here's a fun exercise: go run systemd-analyze security on any of your production servers right now. I'll wait. If you're like most admins I've worked with, you'll see a wall of "UNSAFE" ratings staring back at you — and honestly, it's a bit sobering.

Every daemon running on a Linux server is a potential entry point for attackers. A vulnerability in Nginx, PostgreSQL, Redis, or even some minor background service can be leveraged into full system compromise — unless you've got sandboxing controls in place. Systemd ships with over 40 security directives that can restrict file system access, filter system calls, isolate network namespaces, and drop privileges. All without touching a single line of application code.

And yet, most services still run with default unit files that score 9.0+ out of 10 on the systemd exposure scale (where 10 is the worst). Why? Vendor-packaged unit files prioritize compatibility over security, leaving the hardening work to you.

This guide walks you through the entire hardening process — from auditing your current exposure to building production-ready override files for Nginx, PostgreSQL, and Redis. We'll cover every major directive category, the new stuff in systemd v256 and v257, encrypted credential management with systemd-creds, and automated profiling tools that can generate hardening configs for you.

Auditing Your Services with systemd-analyze security

Before you harden anything, you need a baseline. The systemd-analyze security command audits every loaded service unit and assigns an exposure score from 0.0 (fully sandboxed) to 10.0 (fully exposed).

Scanning All Services

# Audit all loaded services
systemd-analyze security

# Example output:
# UNIT                      EXPOSURE PREDICATE HAPPY
# atd.service                    9.6 UNSAFE    😨
# crond.service                  9.6 UNSAFE    😨
# httpd.service                  9.2 UNSAFE    😨
# nginx.service                  9.6 UNSAFE    😨
# postgresql.service             9.6 UNSAFE    😨
# redis.service                  9.6 UNSAFE    😨
# sshd.service                   9.6 UNSAFE    😨

Yeah, that's a lot of frowning faces. The labels map to score ranges: OK (0–2), MEDIUM (2–5), EXPOSED (5–7), and UNSAFE (7–10). Most stock service units land squarely in the UNSAFE range because they apply zero sandboxing directives.
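If you're auditing a whole fleet, a quick tally by predicate helps you gauge the scale of the work. Here's a small awk sketch; the printf stands in for real output so it runs anywhere, but on an actual host you'd pipe systemd-analyze security --no-pager into the same awk:

```shell
# Count services per predicate column (sample data mimics the output above)
printf '%s\n' \
  'UNIT                      EXPOSURE PREDICATE HAPPY' \
  'atd.service                    9.6 UNSAFE    :(' \
  'httpd.service                  9.2 UNSAFE    :(' \
  'nginx.service                  9.6 UNSAFE    :(' |
  awk 'NR > 1 { count[$3]++ } END { for (p in count) print p, count[p] }'
# prints: UNSAFE 3
```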

Deep-Diving into a Specific Service

To see exactly which controls are missing for a given service, pass its name:

# Detailed security audit for nginx
systemd-analyze security nginx.service

The output lists every security-relevant directive, shows whether it's enabled (✓) or disabled (✗), provides a plain-language description, and assigns an exposure weight. Focus on the directives with the highest exposure values first — those give you the most security bang for your buck.

Important caveat: The exposure score only reflects systemd-level sandboxing. It doesn't account for SELinux, AppArmor, or application-level security controls. A service might score 8.0 but still be well-protected by a strict SELinux policy. Use the score as a prioritization guide, not an absolute risk assessment.

Core Hardening Directives: A Systematic Walkthrough

Systemd security directives fall into six categories. Let's walk through each one with explanations, recommended values, and the gotchas that can break services if you apply them blindly.

1. File System Isolation

File system directives control what a service can read, write, and execute on disk. These are your highest-impact settings because most exploits need file system access to persist or escalate.

[Service]
# Mount entire filesystem read-only except /dev, /proc, /sys
ProtectSystem=strict

# Make /home, /root, /run/user inaccessible
ProtectHome=true

# Give each service its own /tmp and /var/tmp
PrivateTmp=yes

# Explicitly grant write access only where needed
ReadWritePaths=/var/log/myapp /var/lib/myapp

# Make specific paths completely invisible
# (the leading "-" means ignore the path if it doesn't exist)
InaccessiblePaths=-/etc/letsencrypt -/root/.ssh

# Restrict filesystem types the service can see (requires BPF LSM support)
RestrictFileSystems=ext4 tmpfs proc

How ProtectSystem works: The =strict value mounts everything read-only using bind mounts and mount namespaces. You then selectively open paths with ReadWritePaths=. The =full value is less restrictive — it only protects /boot, /etc, and /usr. Always prefer strict and whitelist specific writable paths.

Common gotcha: When ProtectSystem=strict breaks a service (and it probably will on the first try), check the journal for EROFS (read-only file system) errors. These tell you exactly which path needs to be added to ReadWritePaths=. It's a bit tedious, but straightforward once you know the pattern.
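The failing path usually appears verbatim in the journal, so a quick grep gets you the answer. A sketch with hypothetical log lines standing in for journalctl -u myservice output (the log format is an assumption; real messages vary by application):

```shell
# Two hypothetical journal lines from a service hitting ProtectSystem=strict
printf '%s\n' \
  'myapp[4242]: open("/var/lib/myapp/state.db"): Read-only file system' \
  'myapp[4242]: fatal: could not persist state' |
  grep 'Read-only file system' |
  grep -o '"/[^"]*"'   # keep just the quoted path
# prints: "/var/lib/myapp/state.db"
```

Whatever path falls out of that filter is your next candidate for ReadWritePaths=.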

2. Privilege and Capability Management

These directives make sure services run with the bare minimum permissions they actually need.

[Service]
# Prevent all privilege escalation through execve()
NoNewPrivileges=yes

# Run as a dedicated non-root user
User=myapp
Group=myapp

# Or use DynamicUser for services that need no persistent state
DynamicUser=yes

# Drop all capabilities, then add back only what is needed
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

# Grant capabilities to non-root users (instead of running as root)
AmbientCapabilities=CAP_NET_BIND_SERVICE

# Prevent SUID/SGID bit usage
RestrictSUIDSGID=yes

DynamicUser explained: When DynamicUser=yes is set, systemd automatically allocates a transient UID/GID at service start and releases it on stop. This implies PrivateTmp=yes, RemoveIPC=yes, and ProtectSystem=strict. It's ideal for stateless services like API gateways, but incompatible with services that write to persistent paths outside of StateDirectory=, CacheDirectory=, or LogsDirectory=.

AmbientCapabilities vs. running as root: So many services run as root only because they need to bind to port 80 or 443. That's it. Instead, run as a non-root user and grant just CAP_NET_BIND_SERVICE. You eliminate the root attack surface entirely while keeping the one specific capability the service needs. This is one of those changes that feels almost too easy for how much security it buys you.
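As a sketch, here's what that pattern looks like for a hypothetical web app that binds port 443 (the unit, user, and group names are placeholders):

```ini
[Service]
# Run unprivileged, but keep the one capability needed to bind ports below 1024
User=webapp
Group=webapp
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
```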

3. Kernel and System Protection

[Service]
# Prevent kernel module loading/unloading
ProtectKernelModules=yes

# Make /proc and /sys tunables read-only
ProtectKernelTunables=yes

# Prevent cgroup modifications
ProtectControlGroups=yes

# Block access to kernel log buffer
ProtectKernelLogs=yes

# Prevent hostname changes
ProtectHostname=yes

# Prevent hardware clock modifications
ProtectClock=yes

# Lock the execution domain (personality)
LockPersonality=yes

Good news: these directives are safe to enable for the vast majority of services. Just slap them all on. The only exceptions are services that explicitly interact with kernel modules (like network capture tools needing custom modules) or container runtimes that manage cgroups.

4. Network Restriction

Systemd gives you multiple layers of network control, from namespace isolation to IP-level filtering.

[Service]
# Complete network isolation (loopback only)
# Only for services that do NOT need network access
PrivateNetwork=yes

# Restrict socket address families
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

# IP-level access control (requires BPF support)
IPAddressAllow=127.0.0.1/8 ::1/128 10.0.0.0/8
IPAddressDeny=any

# Bind to a specific network interface (systemd v257+)
BindNetworkInterface=eth0

# Restrict to specific network interfaces
RestrictNetworkInterfaces=eth0 lo

PrivateNetwork vs. IPAddressDeny: Use PrivateNetwork=yes for services that genuinely need no network access at all (batch processors, file converters, that sort of thing). For services that need network but should be restricted to specific peers, use IPAddressAllow/IPAddressDeny instead.
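For a hypothetical batch processor that needs no network at all, the two layers compose like this (a sketch, not a drop-in for any specific service):

```ini
[Service]
# Loopback-only network namespace; the service sees no real interfaces
PrivateNetwork=yes
# Additionally forbid INET/INET6 sockets entirely; Unix sockets still work
RestrictAddressFamilies=AF_UNIX
```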

New in systemd v257: The BindNetworkInterface= directive automatically binds all sockets created by the unit to a specific network interface. This is particularly handy for VRF (Virtual Routing and Forwarding) setups where services must be pinned to a specific routing domain.

5. System Call Filtering

System call filtering uses the kernel's seccomp-BPF mechanism to restrict which syscalls a service may invoke. This is one of the most powerful hardening layers — but also the easiest to misconfigure.

[Service]
# Allow only system-service syscalls (covers most daemons)
SystemCallFilter=@system-service

# Explicitly deny dangerous syscall groups
SystemCallFilter=~@mount @clock @reboot @swap @debug @obsolete

# Force native architecture only (prevents 32-bit compat exploits)
SystemCallArchitectures=native

# Block W+X memory mappings (prevents simple shellcode injection)
MemoryDenyWriteExecute=yes

The @system-service shortcut: Rather than listing hundreds of individual syscalls, the @system-service group permits a curated set of system calls typically needed by well-behaved daemons. You can inspect exactly which syscalls a group contains with systemd-analyze syscall-filter @system-service. Start here and only add exceptions if something breaks.

Finding missing syscalls with strace: If a service dies after you apply filters, use strace to figure out what got blocked:

# Trace all syscalls made by the service
strace -f -o /tmp/myservice-syscalls.log /usr/bin/myservice

# Extract unique syscall names (with -f, each log line begins with a PID,
# so the syscall name is in the second field)
awk '{ sub(/\(.*/, "", $2); print $2 }' /tmp/myservice-syscalls.log | sort -u

6. Namespace and Resource Restrictions

[Service]
# Deny creation of new namespaces (blocks container escapes)
RestrictNamespaces=yes

# Prevent real-time scheduling abuse
RestrictRealtime=yes

# Use a private IPC namespace
PrivateIPC=yes

# Use a private user namespace
PrivateUsers=yes

# Use a private key ring
KeyringMode=private

# Hide other processes in /proc
ProtectProc=invisible
ProcSubset=pid

PrivateUsers caution: When PrivateUsers=yes is set, the service runs with zero capabilities in the host user namespace. This will break services that need to manipulate host resources (bind mounts, raw sockets). Test thoroughly before enabling. On the bright side, systemd v257 introduces PrivateUsers=managed, which dynamically allocates a range of 65,536 UIDs/GIDs for the unit — offering better isolation with fewer compatibility headaches.

Real-World Hardening Profiles

Theory is great, but you're here for configs that actually work in production. Below are hardening overrides for three of the most commonly deployed Linux services, tested and annotated with the quirks I've run into.

Nginx Web Server

Create the override with systemctl edit nginx.service or write directly to /etc/systemd/system/nginx.service.d/hardening.conf:

[Service]
# File system isolation
ProtectSystem=strict
ProtectHome=true
PrivateTmp=yes
ReadWritePaths=/var/log/nginx /run/nginx
ReadOnlyPaths=/etc/nginx /etc/ssl /usr/share/nginx

# Privilege management
NoNewPrivileges=yes
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_DAC_OVERRIDE CAP_SETUID CAP_SETGID
AmbientCapabilities=

# Kernel protection
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectKernelLogs=yes
ProtectHostname=yes
ProtectClock=yes
LockPersonality=yes

# Device isolation
PrivateDevices=yes
DevicePolicy=closed

# Network (nginx needs full network access)
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

# System calls
SystemCallFilter=@system-service
SystemCallFilter=~@mount @reboot @swap @debug @obsolete
SystemCallArchitectures=native
MemoryDenyWriteExecute=yes

# Namespace restrictions
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes

# Misc
PrivateIPC=yes
KeyringMode=private
UMask=0027

This configuration typically drops Nginx from 9.6 UNSAFE to roughly 2.5, landing in the MEDIUM band. That's a massive improvement for what amounts to a single config file. The main capabilities retained (CAP_NET_BIND_SERVICE, CAP_DAC_OVERRIDE, CAP_SETUID, CAP_SETGID) are required because Nginx's master process starts as root to bind privileged ports and then drops to the nginx worker user.

PostgreSQL Database Server

[Service]
# File system isolation
ProtectSystem=strict
ProtectHome=true
PrivateTmp=yes
ReadWritePaths=/var/lib/postgresql /var/run/postgresql /var/log/postgresql

# Privilege management
NoNewPrivileges=yes
CapabilityBoundingSet=
PrivateUsers=no

# Kernel protection
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectKernelLogs=yes
ProtectHostname=yes
ProtectClock=yes
LockPersonality=yes

# Device isolation
PrivateDevices=yes
DevicePolicy=closed

# Network
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 AF_NETLINK

# System calls — PostgreSQL needs shared memory syscalls
SystemCallFilter=@system-service
SystemCallFilter=~@mount @reboot @swap @debug @obsolete
SystemCallArchitectures=native
# Note: Do NOT set MemoryDenyWriteExecute=yes for PostgreSQL
# PostgreSQL uses JIT compilation which requires W+X memory

# Namespace restrictions
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes

# Misc
# PostgreSQL uses System V shared memory, so IPC must stay shared.
# (Note: systemd unit files do not support trailing inline comments,
# so keep comments on their own lines.)
PrivateIPC=no
KeyringMode=private
UMask=0077

Critical PostgreSQL notes: Whatever you do, don't enable MemoryDenyWriteExecute=yes if PostgreSQL JIT compilation is active (and it is by default since PostgreSQL 12). JIT requires writable-executable memory mappings, and setting this directive will crash your database. Also keep PrivateIPC=no — PostgreSQL uses System V shared memory for inter-process communication between its backend processes, and cutting that off means cutting off PostgreSQL itself.

Redis In-Memory Data Store

[Service]
# File system isolation
ProtectSystem=strict
ProtectHome=true
PrivateTmp=yes
ReadWritePaths=/var/lib/redis /var/log/redis /run/redis

# Privilege management
NoNewPrivileges=yes
CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_SYS_RESOURCE
PrivateUsers=no

# Kernel protection
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectKernelLogs=yes
ProtectHostname=yes
ProtectClock=yes
LockPersonality=yes

# Device isolation
PrivateDevices=yes
DevicePolicy=closed

# Network
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

# System calls
SystemCallFilter=@system-service
SystemCallFilter=~@mount @reboot @swap @debug @obsolete
SystemCallArchitectures=native
# Redis uses memory-mapped files; keep this off
MemoryDenyWriteExecute=no

# Namespace restrictions
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes

# Misc
PrivateIPC=yes
KeyringMode=private
UMask=0077

Redis note: CAP_SYS_RESOURCE lets Redis raise its own resource limits at startup (for example, the open-file limit that backs maxclients). If you already set generous limits via LimitNOFILE= in the unit and tune the relevant kernel parameters system-wide, you can safely drop this capability. One less thing to worry about.
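If you'd rather not grant CAP_SYS_RESOURCE, set the kernel parameters Redis cares about system-wide instead. A sketch (the file name is an assumption; the values are the commonly recommended Redis settings, so adjust for your workload):

```ini
# /etc/sysctl.d/99-redis.conf -- applied at boot by systemd-sysctl
vm.overcommit_memory = 1
net.core.somaxconn = 1024
```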

Secrets Management with systemd-creds

Let's talk about one of my favorite newer systemd features. If you're still storing passwords and API keys in environment variables or (worse) plaintext files, it's time to stop. Since systemd v250, the systemd-creds utility provides native encrypted credential management, optionally backed by your system's TPM2 chip.

How systemd-creds Encryption Works

Credentials are encrypted with AES256-GCM using one of three key sources:

  • TPM2-only: A key derived from the system's TPM2 chip. Credentials can only be decrypted on the original hardware.
  • Host-only: A key stored at /var/lib/systemd/credential.secret, accessible only to root.
  • Combined (default): Both TPM2 and host key. Decryption requires the original hardware AND the OS installation.

Encrypting and Using Credentials

# Check TPM2 availability
systemd-analyze has-tpm2

# Encrypt a database password
echo -n "MySecretDBPassword" | systemd-creds encrypt --name=db-password - /etc/credstore/db-password.cred

# Encrypt with host key only (no TPM2 required)
echo -n "MyAPIKey" | systemd-creds encrypt --name=api-key --with-key=host - /etc/credstore/api-key.cred

# Verify the encrypted credential
systemd-creds decrypt --name=db-password /etc/credstore/db-password.cred -

Loading Credentials in Service Units

[Service]
# Load encrypted credentials — automatically decrypted at service start
LoadCredentialEncrypted=db-password:/etc/credstore/db-password.cred
LoadCredentialEncrypted=api-key:/etc/credstore/api-key.cred

# Credentials appear as files in /run/credentials/<unit>/
# Your application reads them from:
#   /run/credentials/myapp.service/db-password
#   /run/credentials/myapp.service/api-key

# Or use the $CREDENTIALS_DIRECTORY environment variable, which
# systemd expands at service start (a literal "$$" would pass an
# unexpanded string to the binary)
ExecStart=/usr/bin/myapp --db-password-file ${CREDENTIALS_DIRECTORY}/db-password

The beauty of this approach: encrypted credentials are stored at rest in ciphertext and only decrypted into non-swappable ramfs memory when the service starts. They're inaccessible to other services and automatically cleaned up on stop. This is miles ahead of EnvironmentFile= with plaintext secrets sitting on disk.
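From the application's point of view, a decrypted credential is just a file. Here's a minimal shell sketch of what a service does at startup; since systemd only sets $CREDENTIALS_DIRECTORY for the unit itself, the sketch fakes it with a temp directory:

```shell
# Simulate the environment systemd provides to the unit
CREDENTIALS_DIRECTORY="$(mktemp -d)"
printf '%s' 'MySecretDBPassword' > "${CREDENTIALS_DIRECTORY}/db-password"

# What the service itself would do at startup
DB_PASSWORD="$(cat "${CREDENTIALS_DIRECTORY}/db-password")"
echo "password length: ${#DB_PASSWORD}"
# prints: password length: 18
```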

Automated Hardening with SHH and Runtime Profiling

Manually adding directives one by one works, but it gets tedious fast — especially when you've got dozens of services to lock down. Fortunately, there are tools that can speed things up considerably.

SHH (Systemd Hardening Helper)

SHH, developed by security firm Synacktiv, is a Rust-based tool that profiles a running service and automatically generates hardening directives based on the system calls, file paths, and capabilities actually used at runtime.

# Install SHH from the upstream repository (requires a Rust toolchain)
cargo install --git https://github.com/synacktiv/shh

# Start profiling, then exercise a representative workload
sudo shh service start-profile nginx.service

# Stop profiling; SHH writes the generated hardening options
# into a drop-in for the service
sudo shh service finish-profile nginx.service

# Reload and verify
systemctl daemon-reload
systemd-analyze security nginx.service

A word of caution: SHH can only observe what happens during the profiling window. If your service has code paths that only trigger during failover, monthly reports, or specific edge cases, SHH won't catch those. Treat its output as a strong starting point that you review and refine — not a fire-and-forget solution.

The Iterative strace Approach

For services where SHH isn't available (or when you want to understand things at a deeper level), here's the manual workflow that's served me well:

  1. Apply a restrictive baseline override with all major directives enabled.
  2. Restart the service and check journalctl -u myservice for errors.
  3. For EROFS errors, add the path to ReadWritePaths=.
  4. For EPERM or Operation not permitted, check if a capability or syscall filter is too restrictive.
  5. For SIGSYS signals, a blocked system call killed the process — use strace to identify which one.
  6. Re-run systemd-analyze security to verify your score improved.
  7. Repeat until the service runs correctly with the lowest exposure score you can achieve.

It's methodical, maybe a little boring, but it works every time.

New in systemd v256 and v257

Recent systemd releases have introduced several security-relevant features worth knowing about.

run0: The Safer sudo Alternative (v256)

Systemd v256 introduced run0, a privilege elevation tool that aims to replace sudo. Unlike sudo, which uses SUID binaries to gain privileges, run0 asks the service manager to start a transient unit under the target user's UID. Privileges are dropped rather than gained, and the process runs in an isolated environment without inheriting the caller's potentially compromised context.

# Use run0 instead of sudo
run0 systemctl restart nginx

# run0 tints the terminal background red when running as root
# Controlled via --background= or $SYSTEMD_TINT_BACKGROUND

The red terminal tint is a nice touch, honestly — it's a constant visual reminder that you're operating with elevated privileges.

PrivateUsers=managed (v257)

The new managed value for PrivateUsers= dynamically allocates a transient range of 65,536 UIDs/GIDs via systemd-nsresourced. This provides user namespace isolation without the compatibility issues of PrivateUsers=yes, which maps everything to the nobody user. It's a significant improvement for anyone who's been burned by the old behavior.

BindNetworkInterface= (v257)

This new directive binds all sockets created by a service to a specific network interface. Useful for multi-homed servers and VRF environments where network traffic must be segregated.

Legacy Removals

Heads up: systemd v256 deprecated cgroup v1 support, and v257 removed iptables-based NAT in favor of nftables-only. TPM 1.2 support was also removed. If your hardening workflows depend on any of these, you'll need to migrate sooner rather than later.

Applying Overrides Without Breaking Packages

This is important enough that it deserves its own section: never edit vendor unit files in /usr/lib/systemd/system/ directly. Package updates will overwrite your changes, and you'll be left wondering why your hardening disappeared after a routine dnf update. Use the drop-in override mechanism instead.

Using systemctl edit

# Create an override file interactively
systemctl edit nginx.service
# This opens your editor with a blank file at:
# /etc/systemd/system/nginx.service.d/override.conf

# After saving, reload and restart
systemctl daemon-reload
systemctl restart nginx.service

# Verify the override is applied
systemctl cat nginx.service

Manual Override Files

# Create the drop-in directory
mkdir -p /etc/systemd/system/nginx.service.d/

# Write the override file
cat > /etc/systemd/system/nginx.service.d/99-hardening.conf << 'EOF'
[Service]
ProtectSystem=strict
ProtectHome=true
PrivateTmp=yes
NoNewPrivileges=yes
PrivateDevices=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
EOF

# Reload systemd and restart the service
systemctl daemon-reload
systemctl restart nginx.service

The naming convention 99-hardening.conf ensures your override loads last and takes precedence over any other drop-in files. Override files in /etc/systemd/system/ always take priority over vendor files in /usr/lib/systemd/system/.

Verifying and Monitoring Hardened Services

Hardening isn't a one-and-done configuration. Services update, workloads change, and new directives become available. You need verification baked into your operational workflow.

Automated Scoring with Scripts

#!/bin/bash
# hardening-audit.sh — Alert on services with exposure score above threshold
THRESHOLD=5.0

systemd-analyze security --no-pager 2>/dev/null | \
  awk -v threshold="$THRESHOLD" '
    NR > 1 && $2+0 > threshold {
      printf "WARNING: %-40s score=%-4s (%s)\n", $1, $2, $3
    }
  '

Throw that in a cron job or a systemd timer (how fitting) and you'll catch regressions before they become problems.
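A sketch of the timer pairing (unit names and schedule are assumptions, and the ExecStart= path must match wherever you installed the script):

```ini
# /etc/systemd/system/hardening-audit.service
[Unit]
Description=Audit systemd service exposure scores

[Service]
Type=oneshot
ExecStart=/usr/local/bin/hardening-audit.sh

# /etc/systemd/system/hardening-audit.timer
[Unit]
Description=Weekly hardening audit

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now hardening-audit.timer.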

Integration with Compliance Frameworks

CIS Benchmarks for major distributions now include systemd hardening checks. If you're already using OpenSCAP for compliance scanning, you can extend your SCAP profiles to validate systemd service configurations against your organizational baseline.

Runtime Monitoring with Auditd

Pair systemd sandboxing with auditd rules to detect bypass attempts:

# /etc/audit/rules.d/systemd-hardening.rules
# Alert when a sandboxed service attempts to access restricted paths
-w /etc/shadow -p r -k shadow-access
-w /root/.ssh -p rwa -k ssh-key-access
-a always,exit -F arch=b64 -S mount -k mount-attempt

Defense in depth at its finest — even if someone somehow bypasses a systemd sandbox, auditd will flag the suspicious activity.

A Systematic Hardening Workflow

Here's the step-by-step process I use for every service I harden:

  1. Audit: Run systemd-analyze security myservice.service and record the baseline score.
  2. Identify: Sort the output by exposure weight. Focus on directives with the highest impact first.
  3. Apply: Create a drop-in override with systemctl edit myservice.service. Start with the safe universal directives (NoNewPrivileges, ProtectSystem, ProtectHome, PrivateTmp, PrivateDevices).
  4. Test: Restart the service and exercise its full functionality. Check journalctl -u myservice for errors.
  5. Iterate: Add more restrictive directives (system call filters, capability dropping, network restrictions). Test after each addition.
  6. Verify: Re-run systemd-analyze security to confirm the score improved.
  7. Document: Record the final configuration and any service-specific exceptions in your change management system.
  8. Monitor: Schedule periodic re-audits to catch regressions from package updates that might reset unit files.

It might look like a lot of steps, but after you've done it two or three times, the whole process becomes second nature.

Frequently Asked Questions

Does systemd-analyze security account for SELinux or AppArmor?

No. The exposure score is based entirely on systemd-level sandboxing directives. It doesn't evaluate SELinux policies, AppArmor profiles, or application-level security measures. A service can score poorly on the systemd scale but still be well-protected by a strict MAC policy. Use the systemd score alongside — not as a replacement for — your MAC security assessment.

Why are most services scored as UNSAFE by default?

Vendor-packaged unit files prioritize compatibility across the widest range of configurations. Enabling sandboxing directives can break services that depend on specific file paths, system calls, or capabilities. Distributors leave the hardening to administrators who understand their specific deployment context. That's exactly why drop-in overrides exist — they let you layer security on top of vendor defaults without modifying the original unit file.

Can systemd hardening replace running services in containers?

They're complementary, not interchangeable. Systemd sandboxing uses the same underlying kernel mechanisms (namespaces, seccomp, cgroups) but operates at the service level rather than providing a full container image with its own filesystem. For services already installed on the host, systemd hardening is simpler and has lower overhead. For microservice architectures or multi-tenant environments, containers provide stronger isolation boundaries. Ideally, you'd do both — containerized services should also apply systemd hardening to the container runtime itself.

Will systemd hardening break my service during package updates?

Not if you use drop-in override files (via systemctl edit) rather than modifying vendor unit files directly. Override files in /etc/systemd/system/ survive package updates because they're separate from the vendor unit files in /usr/lib/systemd/system/. That said, a major application update might introduce new functionality that requires additional system calls or file paths — so always test services after upgrades and check the journal for new errors.

How do I harden a custom systemd service I wrote myself?

For custom services, embed the hardening directives directly in the unit file rather than using overrides — you control the unit file, so there's no risk of package updates overwriting it. Start with the restrictive baseline from this guide, run systemd-analyze security, and iteratively relax directives only where the service demonstrably needs them. Use strace to identify required system calls and check the journal for permission errors to figure out which paths need write access.

Our team of expert writers and editors.