Linux Secrets Management: Practical Guide to Vault, SOPS, systemd-creds, and Secret Scanning

Learn to manage secrets on Linux with Vault, SOPS, Age encryption, systemd-creds, and automated scanning tools like Gitleaks and TruffleHog. Covers command-line hygiene, dynamic credentials, CI/CD integration, and working examples you can deploy today.

Why Secrets Management on Linux Deserves Its Own Strategy

You lock down SSH with certificate authentication, enforce SELinux policies, harden your firewall with nftables, and audit your services with systemd-analyze security. Then you run a quick grep -rn "password\|api_key\|secret" /etc /opt /srv and find hardcoded credentials scattered across config files, shell scripts, and environment variable exports. Sound familiar? Yeah, it's more common than most of us want to admit.

Secrets — API keys, database passwords, TLS private keys, tokens, encryption keys — are the lifeblood of modern infrastructure. A single exposed credential can undo months of careful hardening. And in 2026, with AI-assisted reconnaissance compressing exploitation timelines from weeks to hours, treating secrets management as an afterthought isn't just sloppy — it's genuinely dangerous.

This guide walks through the entire Linux secrets management stack: from basic command-line hygiene that keeps credentials out of shell history, through file-level encryption with SOPS and Age, to centralized dynamic secrets with HashiCorp Vault, secure service injection via systemd-creds, and automated secret scanning in CI/CD pipelines. Every section includes working examples you can deploy today.

Command-Line Secrets Hygiene

Before reaching for any fancy tool, fix the fundamentals. The Linux command line is full of places where secrets silently leak — shell history files, process listings, environment exports, log files. These are the low-hanging fruit that attackers (and automated scanners) look for first.

Shell History Exposure

Every time you type a command containing a password or token, it gets written to ~/.bash_history or ~/.zsh_history. Anyone with read access to your home directory — or any backup system that captures it — now has those credentials. I've personally seen production database passwords sitting in a backup of someone's home directory that was world-readable. Not fun.

# BAD: Password stored permanently in shell history
mysql -u admin -pSuperSecret123 production_db

# BETTER: Prompt for password interactively
mysql -u admin -p production_db

# Prevent history recording for the current session
export HISTCONTROL=ignorespace
# Prefix sensitive commands with a space (Bash only)
 export API_KEY="sk-live-abc123"

# Or disable history entirely for a session
unset HISTFILE

For persistent protection, configure your shell to exclude dangerous patterns. Add this to ~/.bashrc:

# Ignore commands starting with space and duplicates
export HISTCONTROL=ignoreboth

# Exclude common secret-related commands from history
export HISTIGNORE="*password*:*secret*:*token*:*api_key*:export *KEY*:export *SECRET*:export *PASSWORD*"
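HISTIGNORE only helps going forward. It's also worth auditing the history files that already exist; here's a minimal helper you could use (the function name and patterns are my own illustration, tune them for your environment):

```shell
# audit_history - illustrative helper: scan the history files passed as
# arguments for lines that look like credentials. The patterns are a
# starting point, not an exhaustive list.
audit_history() {
  grep -inEH '(password|passwd|secret|token|api[_-]?key)[=: ]' "$@" 2>/dev/null \
    || echo "no obvious matches"
}

# Example: audit_history ~/.bash_history ~/.zsh_history
```

Anything it flags should be treated as compromised and rotated — deleting the line from the history file doesn't un-leak the credential.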

Process Listing Leaks

Command-line arguments are visible to every user on the system through /proc/[pid]/cmdline and the ps command. So never, ever pass secrets as command-line arguments.

# BAD: Any user can see this with ps aux
curl -H "Authorization: Bearer sk-live-abc123" https://api.example.com/data

# BETTER: Read from a credential file
curl -H "Authorization: Bearer $(cat /run/secrets/api-token)" https://api.example.com/data

# BEST: Use a netrc file or config file with restricted permissions
curl --netrc-file /etc/myapp/.netrc https://api.example.com/data

Harden /proc so users can't see each other's processes by mounting it with hidepid:

# Add to /etc/fstab (members of the gid= group stay exempt and can still see all processes)
proc    /proc    proc    defaults,hidepid=2,gid=wheel    0 0

# Or mount immediately
sudo mount -o remount,hidepid=2,gid=wheel /proc

Environment Variable Risks

Environment variables are convenient but dangerous. /proc/[pid]/environ is only readable by the process owner and root, but that's little comfort: environment variables get inherited by every child process, logged by crash reporters, and captured by monitoring tools. It's one of those things that seems fine until it really isn't.

# BAD: Secrets in the global environment, visible to all child processes
export DATABASE_URL="postgres://admin:[email protected]:5432/app"

# BETTER: Pass directly to a single process without export
DATABASE_URL="postgres://admin:[email protected]:5432/app" /usr/bin/myapp

# BEST: Use credential files with strict permissions
echo "postgres://admin:[email protected]:5432/app" > /etc/myapp/db-url
chmod 600 /etc/myapp/db-url
chown myapp:myapp /etc/myapp/db-url
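To make the credential-file pattern reusable, you can wrap it in a small function. This is a sketch (the helper name is mine, not a standard tool) that refuses loosely-permissioned files and injects the secret into exactly one child process:

```shell
# with_secret - hypothetical helper: inject a credential file's contents
# into one child process's environment, never the global one.
# Usage: with_secret VAR_NAME /path/to/secret command [args...]
with_secret() {
  local var_name="$1" secret_file="$2"
  shift 2

  # Refuse credential files readable by group or others.
  local perms
  perms=$(stat -c '%a' "$secret_file")
  case "$perms" in
    600|400) ;;
    *) echo "refusing $secret_file: mode $perms is too open" >&2; return 1 ;;
  esac

  # The secret enters only this child's environment.
  env "$var_name=$(cat "$secret_file")" "$@"
}

# Example: with_secret DATABASE_URL /etc/myapp/db-url /usr/bin/myapp
```

The permission check doubles as a tripwire: if someone loosens the file mode, the service fails loudly instead of silently running with an exposed credential.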

The Linux Kernel Keyring

Here's something a lot of admins don't know about: the Linux kernel has a built-in credential store via the keyctl subsystem. Keys stored here live in memory that's never swapped to disk, and access can be scoped to a single process, session, user, or thread.

# Store a secret in the user session keyring. padd reads the value from
# stdin, so it never appears in a process listing or your shell history
# (type the secret, then press Ctrl-D)
keyctl padd user db-password @s

# Retrieve it
keyctl print $(keyctl search @s user db-password)

# List all keys in the session keyring
keyctl show @s

# Set a 1-hour timeout on the key
keyctl timeout $(keyctl search @s user db-password) 3600

# Revoke a key when no longer needed
keyctl revoke $(keyctl search @s user db-password)

The kernel keyring is ideal for short-lived secrets that a process needs at runtime. It avoids writing credentials to disk entirely and provides automatic cleanup when sessions end. Honestly, it's underrated.

Encrypting Secrets at Rest with SOPS and Age

SOPS (Secrets OPerationS) encrypts secret values in configuration files while keeping the keys (the field names) readable — so you can still git diff your configs and see which fields changed. Combined with Age (a modern, dead-simple encryption tool), you get a workflow where secrets live safely in your Git repository, encrypted with AES-256-GCM, and only decryptable by authorized key holders.

Installing SOPS and Age

# Install age
# Debian/Ubuntu
sudo apt-get install -y age

# RHEL/Fedora
sudo dnf install -y age

# Install SOPS (latest release)
SOPS_VERSION=$(curl -s https://api.github.com/repos/getsops/sops/releases/latest \
  | grep tag_name | sed 's/.*"v//;s/".*//' )
curl -LO "https://github.com/getsops/sops/releases/download/v${SOPS_VERSION}/sops-v${SOPS_VERSION}.linux.amd64"
sudo install -m 755 sops-v${SOPS_VERSION}.linux.amd64 /usr/local/bin/sops

Generating Age Keys

# Generate an Age key pair
age-keygen -o ~/.config/sops/age/keys.txt

# Output shows:
# Public key: age1qy3rz5mvad7dp7qcyp5hpnwhxlgp68n...
# The private key is stored in the file

# Restrict permissions
chmod 600 ~/.config/sops/age/keys.txt

# Export the key file path for SOPS
export SOPS_AGE_KEY_FILE=~/.config/sops/age/keys.txt

Configuring SOPS for Your Repository

Create a .sops.yaml in your repository root. This file controls which files get encrypted and with which keys — think of it as your encryption policy file.

# .sops.yaml
creation_rules:
  # Encrypt all secrets files with the ops team key
  - path_regex: .*secrets.*\.(yaml|json)$
    age: age1qy3rz5mvad7dp7qcyp5hpnwhxlgp68n6ycxjaa7n4dp6qfkyx9s3tvr4t

  # Production secrets - require both ops and security team keys
  - path_regex: environments/production/.*\.yaml$
    age: >-
      age1qy3rz5mvad7dp7qcyp5hpnwhxlgp68n6ycxjaa7n4dp6qfkyx9s3tvr4t,
      age1rl40t0ef0nh9ycsh35cpqkeyswdv565r04vgxqjngj0axn4dz6rqfz7mpx

  # Kubernetes secrets - only encrypt data fields
  - path_regex: k8s/.*\.enc\.yaml$
    age: age1qy3rz5mvad7dp7qcyp5hpnwhxlgp68n6ycxjaa7n4dp6qfkyx9s3tvr4t
    encrypted_regex: "^(data|stringData)$"

Encrypting and Editing Secrets

# Create a secrets file
cat > config/secrets.yaml << EOF
database:
  host: db.internal.prod
  port: 5432
  username: app_user
  password: super-secret-password-2026
redis:
  auth_token: redis-token-abc123xyz
api:
  stripe_key: sk_live_abc123def456
  sendgrid_key: SG.abc123.def456
EOF

# Encrypt in place
sops --encrypt --in-place config/secrets.yaml

# Now the file looks like this - keys visible, values encrypted:
# database:
#     host: ENC[AES256_GCM,data:aBcDeFg...,tag:...,type:str]
#     port: ENC[AES256_GCM,data:aBcD,tag:...,type:int]

# Edit interactively (decrypts, opens editor, re-encrypts on save)
sops config/secrets.yaml

# Decrypt to stdout (never redirect to an unprotected file)
sops --decrypt config/secrets.yaml

# Extract a single value programmatically
sops --decrypt --extract '["database"]["password"]' config/secrets.yaml
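The ENC[...] envelope is plain text with a fixed comma-separated layout, which you can inspect directly. A quick illustration — the ciphertext values below are fabricated, not real SOPS output:

```shell
# Split a SOPS ENC[...] envelope into its components.
# The data/iv/tag values here are made up for illustration.
enc='ENC[AES256_GCM,data:aBcDeFg=,iv:Qw12==,tag:Zz34==,type:str]'

inner=${enc#ENC[}   # strip the leading ENC[
inner=${inner%]}    # strip the trailing ]
IFS=',' read -r cipher data iv tag vtype <<< "$inner"

echo "cipher: $cipher"          # AES256_GCM
echo "type:   ${vtype#type:}"   # str
```

The type field is how SOPS restores integers and booleans correctly on decryption, which is why the encrypted port in the example above carries type:int.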

Key Rotation and Team Management

When a team member leaves or a key gets compromised, update .sops.yaml to remove their public key and run:

# Re-encrypt all secrets files with the updated key list
# (-y skips the interactive confirmation prompt)
find . -name "*secrets*.yaml" -exec sops updatekeys -y {} \;

# Verify the change
sops --decrypt config/secrets.yaml > /dev/null && echo "Re-encryption successful"

One thing worth mentioning: SOPS protects against tampering with a MAC (message authentication code). If anyone modifies the encrypted file directly, SOPS will refuse to decrypt it, which alerts you to potential interference. That's a nice safety net.

HashiCorp Vault for Centralized Dynamic Secrets

SOPS handles static secrets in Git brilliantly, but for dynamic credentials — database users that rotate automatically, short-lived cloud tokens, PKI certificates — you need something more. That's where HashiCorp Vault comes in. Vault v1.21 (the latest stable release as of early 2026) provides a centralized secrets engine with audit logging, fine-grained ACLs, and automatic credential rotation.

Installing and Initializing Vault on Linux

# Install from the HashiCorp repository (RHEL/Fedora)
sudo dnf install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo dnf install -y vault

# For Debian/Ubuntu
curl -fsSL https://apt.releases.hashicorp.com/gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" \
  | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install -y vault

# Initialize Vault (production - uses Shamir secret sharing)
vault operator init -key-shares=5 -key-threshold=3

# Unseal with 3 of 5 keys
vault operator unseal <key-1>
vault operator unseal <key-2>
vault operator unseal <key-3>

Storing and Retrieving Static Secrets (KV Engine)

# Enable KV v2 secrets engine
vault secrets enable -path=secret kv-v2

# Store application credentials
vault kv put secret/myapp/production \
  db_password="rotating-secret-2026" \
  api_key="sk_live_production_key" \
  redis_auth="redis-cluster-token"

# Retrieve a specific field
vault kv get -field=db_password secret/myapp/production

# List all secrets at a path
vault kv list secret/myapp/

# Read secret metadata (versions, timestamps)
vault kv metadata get secret/myapp/production
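One quirk worth knowing if you ever hit Vault's HTTP API directly: KV v2 stores values under <mount>/data/<path> and version metadata under <mount>/metadata/<path>. The CLI inserts the extra path segment for you; raw API calls must do it themselves. A tiny illustration (the helper name is mine):

```shell
# kv2_data_path - map a CLI-style KV v2 path to the REST data path.
kv2_data_path() {
  local mount="${1%%/*}" rest="${1#*/}"
  printf '%s/data/%s\n' "$mount" "$rest"
}

kv2_data_path secret/myapp/production
# secret/data/myapp/production
```

So a raw read of the secret above would be a GET against $VAULT_ADDR/v1/secret/data/myapp/production with an X-Vault-Token header.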

Dynamic Database Credentials

This is where Vault really shines. The database secrets engine generates unique, time-limited database credentials on every request. When the lease expires, Vault automatically revokes them. No more shared passwords sitting in config files for months — each credential is unique and short-lived.

# Enable the database secrets engine
vault secrets enable database

# Configure a PostgreSQL connection
vault write database/config/production-postgres \
  plugin_name="postgresql-database-plugin" \
  allowed_roles="app-readonly,app-readwrite" \
  connection_url="postgresql://{{username}}:{{password}}@db.prod.internal:5432/appdb?sslmode=verify-full" \
  username="vault_admin" \
  password="vault-initial-password"

# Create a read-only role with 1-hour TTL
vault write database/roles/app-readonly \
  db_name="production-postgres" \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' \
    VALID UNTIL '{{expiration}}'; \
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

# Request dynamic credentials (returns unique username/password)
vault read database/creds/app-readonly
# Key                Value
# lease_id           database/creds/app-readonly/abc123...
# lease_duration     1h
# username           v-token-app-read-xyz789
# password           A1b2C3d4-random-generated
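If you consume those credentials from a script, the robust route is vault read -format=json piped through jq. Purely for illustration, here's what parsing the default table output above involves (the helper name is mine):

```shell
# parse_vault_creds - illustrative: pull the username/password fields
# out of the table that `vault read` prints by default.
parse_vault_creds() {
  awk '$1 == "username" { print "DB_USER=" $2 }
       $1 == "password" { print "DB_PASS=" $2 }'
}

# Example: vault read database/creds/app-readonly | parse_vault_creds
```

In production, skip the scraping entirely and let Vault Agent (next section) render credentials into a file for you.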

Vault Agent for Automatic Token Renewal

In production, your applications shouldn't call the Vault API directly. Instead, run Vault Agent as a sidecar that handles authentication, token renewal, and secret templating. It's one less thing your app code needs to worry about.

# /etc/vault-agent/config.hcl
auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault-agent/role-id"
      secret_id_file_path = "/etc/vault-agent/secret-id"
      remove_secret_id_file_after_reading = true
    }
  }

  sink "file" {
    config = {
      path = "/run/vault-agent/token"
      mode = 0600
    }
  }
}

template {
  source      = "/etc/vault-agent/templates/db-creds.tpl"
  destination = "/run/secrets/db-creds.env"
  perms       = 0600
  command     = "systemctl reload myapp.service"
}

vault {
  address = "https://vault.internal:8200"
}
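The agent config above references a template file it renders using consul-template syntax. A minimal sketch of what db-creds.tpl might contain, assuming the app-readonly database role from the previous section (adjust the path and field handling to your setup):

```
# /etc/vault-agent/templates/db-creds.tpl (illustrative)
{{ with secret "database/creds/app-readonly" -}}
DB_USERNAME={{ .Data.username }}
DB_PASSWORD={{ .Data.password }}
{{- end }}
```

The rendered /run/secrets/db-creds.env can then be consumed by the service via systemd's EnvironmentFile= directive, and the command stanza reloads the service whenever Vault rotates the lease.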

Secure Secret Injection with systemd-creds

For services managed by systemd (which is basically everything on modern Linux), systemd-creds provides encrypted credential storage tied to the local machine's TPM2 chip. Credentials are encrypted at rest, decrypted only at service startup, and automatically cleaned up when the service stops. It's elegant and surprisingly easy to set up.

Encrypting Credentials for a Service

# Create the system-wide credential store directory if needed
sudo mkdir -p /etc/credstore.encrypted

# Encrypt a credential bound to the local TPM2 + host key
echo -n "super-secret-db-password" \
  | sudo systemd-creds encrypt --name=db-password - \
    /etc/credstore.encrypted/myapp.db-password

# Encrypt an API key
echo -n "sk_live_production_key" \
  | sudo systemd-creds encrypt --name=api-key - \
    /etc/credstore.encrypted/myapp.api-key

# Verify the encrypted credential
sudo systemd-creds decrypt /etc/credstore.encrypted/myapp.db-password -

Loading Credentials in a Service Unit

# /etc/systemd/system/myapp.service.d/credentials.conf
[Service]
LoadCredentialEncrypted=db-password:/etc/credstore.encrypted/myapp.db-password
LoadCredentialEncrypted=api-key:/etc/credstore.encrypted/myapp.api-key

Inside the running service, credentials show up as files under $CREDENTIALS_DIRECTORY:

# Application reads credentials at runtime
DB_PASSWORD=$(cat "$CREDENTIALS_DIRECTORY/db-password")
API_KEY=$(cat "$CREDENTIALS_DIRECTORY/api-key")

Combining systemd-creds with Vault

For environments that need both centralized management (Vault) and secure local injection (systemd-creds), there's systemd-vaultd. This daemon exposes a Unix socket that systemd's LoadCredential= connects to, fetching secrets from Vault on demand.

# Service unit using Vault-backed credentials
[Service]
ExecStart=/usr/local/bin/myapp
LoadCredential=db-password:/run/systemd-vaultd/sock
# systemd-vaultd fetches the secret from Vault via the socket

This combination gives you the audit trail and dynamic rotation of Vault with the process-level isolation of systemd credential injection. The service never sees a Vault token — it only receives the decrypted secret value through the credential directory. Best of both worlds, really.

Automated Secret Scanning: Gitleaks, TruffleHog, and detect-secrets

Even with perfect tooling, humans make mistakes. A developer pastes a token into a test file, a CI config hardcodes a registry password, or a credential ends up in a debug log. It happens. Secret scanning tools act as the safety net that catches these slips before they reach production — or worse, a public repository.

Gitleaks: Fast Pattern-Based Scanning

Gitleaks is a lightweight, open-source SAST tool that scans Git repos for hardcoded secrets using regex patterns. With 100+ built-in detectors, it's fast enough to run on every commit without slowing things down.

# Install Gitleaks from GitHub releases
GITLEAKS_VERSION=$(curl -s https://api.github.com/repos/gitleaks/gitleaks/releases/latest \
  | grep tag_name | sed 's/.*"v//;s/".*//' )
curl -LO "https://github.com/gitleaks/gitleaks/releases/download/v${GITLEAKS_VERSION}/gitleaks_${GITLEAKS_VERSION}_linux_x64.tar.gz"
tar -xzf gitleaks_${GITLEAKS_VERSION}_linux_x64.tar.gz
sudo install -m 755 gitleaks /usr/local/bin/

# Scan the entire repository history
gitleaks detect --source . --verbose

# Scan only staged changes (ideal for pre-commit hooks)
gitleaks protect --staged --verbose

# Generate a JSON report
gitleaks detect --source . --report-format json --report-path gitleaks-report.json

TruffleHog: Verified Secret Detection

TruffleHog goes beyond pattern matching — it actually verifies whether detected credentials are live and active. It classifies 800+ secret types and can scan Git repos, S3 buckets, Docker images, and even Slack workspaces.

# Install TruffleHog
curl -sSfL https://raw.githubusercontent.com/trufflesecurity/trufflehog/main/scripts/install.sh \
  | sh -s -- -b /usr/local/bin

# Scan a Git repository (including full history)
trufflehog git file://. --only-verified

# Scan a specific branch
trufflehog git file://. --branch main --only-verified

# Scan a filesystem path
trufflehog filesystem /opt/myapp/ --only-verified

The --only-verified flag is a game-changer. It reduces noise dramatically by only reporting credentials that TruffleHog has confirmed are actually valid. When TruffleHog alerts with this flag, you know it's real.

Setting Up Pre-Commit Hooks

The most effective place to catch secrets is before they enter the repository at all. So let's configure pre-commit hooks with multiple scanning tools for defense in depth:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.21.0
    hooks:
      - id: gitleaks

  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']

  - repo: https://github.com/trufflesecurity/trufflehog
    rev: v3.88.0
    hooks:
      - id: trufflehog
        args: ['--only-verified']

# Install and activate the hooks
pip install pre-commit
pre-commit install

# Run against all existing files
pre-commit run --all-files

# Generate a detect-secrets baseline (marks known false positives)
detect-secrets scan > .secrets.baseline
detect-secrets audit .secrets.baseline

CI/CD Pipeline Integration

Pre-commit hooks are a client-side control — developers can bypass them (intentionally or not). Always add server-side scanning in your CI/CD pipeline as a hard gate. This is your non-negotiable backstop.

# .github/workflows/secret-scan.yml
name: Secret Scanning
on: [push, pull_request]

jobs:
  gitleaks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  trufflehog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: TruffleHog Scan
        uses: trufflesecurity/trufflehog@main
        with:
          extra_args: --only-verified

Secrets Management Best Practices Checklist

Alright, let's bring it all together. Here's a layered approach to Linux secrets management that covers the full lifecycle — from creation to rotation to retirement:

  • Never hardcode secrets — Use SOPS for static secrets in Git, Vault for dynamic secrets, and systemd-creds for service injection. Zero secrets in source code, config files, or environment exports.
  • Encrypt at rest and in transit — SOPS uses AES-256-GCM. Vault encrypts its storage backend. systemd-creds binds encryption to the local TPM2 chip. All Vault communication should go over TLS.
  • Rotate regularly — Use Vault's dynamic secrets engine to issue short-lived credentials (1-hour TTL for databases). Rotate SOPS Age keys quarterly. Rotate Vault unseal keys annually.
  • Enforce least privilege — Vault policies should grant the minimum access each application needs. File-based credentials should be chmod 600 owned by the service user. Use separate credentials per service — never share them.
  • Audit everything — Enable Vault's audit device to log every secret access. Use auditd rules to monitor access to credential files. Set up alerts for unexpected secret reads.
  • Scan continuously — Run Gitleaks in pre-commit hooks. Run TruffleHog in CI/CD pipelines. Perform periodic full-history scans of all repositories.
  • Plan for compromise — Have a documented incident response procedure for leaked secrets. Know exactly how to rotate each type of credential in your infrastructure. Practice the runbook (seriously, run the drill).

Choosing the Right Tool for the Job

The secrets management landscape has a lot of options, and picking the right one depends on your specific needs. Here's a quick decision framework:

  • Static secrets in a Git repo: SOPS + Age. Simple, no infrastructure required, encrypted at rest.
  • Dynamic database credentials: HashiCorp Vault. Auto-rotation, TTL-based leases, audit logging.
  • Injecting secrets into systemd services: systemd-creds. TPM2-bound encryption, zero-disk exposure at runtime.
  • Kubernetes secrets in GitOps: SOPS + Flux or Sealed Secrets. Encrypted in Git, decrypted in-cluster.
  • Preventing secret leaks in code: Gitleaks + TruffleHog. Pre-commit and CI/CD scanning with verification.
  • Short-lived runtime secrets: Linux kernel keyring. In-memory, never swapped to disk, session-scoped.

In practice, most production environments combine several of these tools. A common (and solid) stack is SOPS for application config secrets in Git, Vault for database and cloud credentials, systemd-creds for service-level injection, and Gitleaks in the CI pipeline as a safety net. You don't have to adopt everything at once — start with the layer that addresses your biggest risk and build from there.

Frequently Asked Questions

What is the safest way to store secrets on a Linux server?

The safest approach is layered: use a dedicated secrets manager like HashiCorp Vault for centralized control and audit logging, encrypt any secrets stored on disk using SOPS with Age or systemd-creds with TPM2 binding, restrict file permissions to chmod 600 owned by the service account, and never store secrets in environment variables, shell history, or source code. For runtime secrets that don't need persistence, the Linux kernel keyring provides in-memory storage that's never written to swap.

How do SOPS and HashiCorp Vault differ, and when should I use each?

SOPS encrypts files at rest and is ideal for static secrets (API keys, certificates, passwords) that live in Git repositories. It requires no running infrastructure — just an Age key pair. Vault is a running service that manages dynamic secrets (auto-rotating database credentials, short-lived tokens) with fine-grained ACLs and complete audit trails. Use SOPS for small teams and GitOps workflows. Use Vault when you need automatic credential rotation, dynamic secrets, or regulatory compliance with access logging. Many teams end up using both together, and that's perfectly fine.

How do I prevent secrets from accidentally being committed to Git?

Implement multiple layers: install pre-commit hooks using Gitleaks and detect-secrets to block commits containing credentials, add server-side scanning with TruffleHog in your CI/CD pipeline as a hard gate, maintain a .gitignore that excludes .env files, credential directories, and key files, and encrypt any secrets that must live in the repository using SOPS. Regularly run full-history scans with gitleaks detect --source . to catch any previously committed secrets that might be lurking in older commits.
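A starting-point .gitignore for those exclusions (adapt to your project layout):

```
# Credential files that should never reach version control
.env
.env.*
*.pem
*.key
*.p12
id_rsa
id_ed25519
credentials.json
```

Note that SOPS-encrypted files are safe to commit and should not be ignored — that's the whole point of encrypting them.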

What is systemd-creds and why should I use it instead of environment variables?

systemd-creds is a credential management system built into systemd (v250+) that encrypts secrets at rest using the local TPM2 chip and a host-specific key. Unlike environment variables — which can leak through /proc, child process inheritance, crash reports, and logging — systemd credentials are decrypted only at service startup, delivered to the process through a dedicated $CREDENTIALS_DIRECTORY, and automatically cleaned up when the service stops. They can't be read by other processes on the system, which makes them a strictly better option for service secrets.

How often should I rotate secrets in a production Linux environment?

Rotation frequency depends on the secret type: database credentials managed by Vault should rotate every 1 to 24 hours using dynamic secrets with short TTLs. API keys and tokens should rotate every 30 to 90 days, or immediately after any suspected compromise. SOPS Age encryption keys should rotate quarterly, with immediate rotation if a team member leaves. TLS certificates should rotate before expiry, ideally automated through Vault's PKI engine or ACME. The overarching goal is to make every credential short-lived enough that a stolen secret expires before anyone can exploit it.
