Introduction: Containers Are Everywhere — And So Are Their Vulnerabilities
If you've been working with Linux infrastructure for any length of time, you've probably noticed that containers went from "that cool new thing" to "the way we do everything" practically overnight. Whether you're running microservices on Kubernetes, CI/CD pipelines in Docker, or edge workloads with Podman, containers are the default packaging and execution model now. But that ubiquity comes with a serious security cost that a lot of teams are still catching up to.
The numbers tell a sobering story. Supply chain attacks nearly doubled in 2025, with researchers recording 297 supply chain attacks claimed by threat groups — up 93% from 154 in 2024. In November 2025, three high-severity vulnerabilities in runc (CVE-2025-31133, CVE-2025-52565, CVE-2025-52881) exposed every Docker and Kubernetes deployment to container escape attacks. Chainguard's January 2026 analysis found that 98% of CVE instances in container images lurk outside the top 20 most popular images, meaning the long tail of container dependencies is essentially unmonitored.
And then there was the September 2025 npm supply chain compromise — attackers hijacked 18 widely used packages downloaded over 2.6 billion times per week. That one really drove the point home: even the most trusted software components can become attack vectors overnight.
So, if you're running containers on Linux, security can't be an afterthought. It needs to be baked into every layer: from how you build images to how you configure runtimes to how you monitor running workloads.
This guide walks you through practical, production-tested container hardening for Linux systems. We'll cover image security, supply chain verification, runtime hardening with seccomp and AppArmor, rootless container deployments, Kubernetes Pod Security Standards, vulnerability scanning, and runtime monitoring. Everything here is current as of early 2026 and focused on what actually works in real-world environments.
Understanding the Container Threat Landscape in 2025-2026
Container Escape: The Most Dangerous Attack Vector
A container escape is exactly what it sounds like — an attacker breaks out of a container's isolation boundary and gains access to the host system. Once they're on the host, they can compromise other containers, access sensitive data, and move laterally across your infrastructure. It's basically game over.
The three runc vulnerabilities disclosed in November 2025 demonstrate exactly how this plays out in practice.
CVE-2025-31133 exploited a flaw in how runc uses /dev/null to mask sensitive host files. Because runc didn't properly verify that /dev/null was legitimate, attackers could swap it with a symlink during container initialization, allowing arbitrary host paths to be bind-mounted into the container. CVE-2025-52565 allowed attackers to gain write access to protected procfs files during initialization, and CVE-2025-52881 enabled misdirected writes to sensitive system files in /proc. Here's the really scary part: the first and third vulnerabilities are considered universal — they affect all runc versions ever released.
The fix? Update to runc 1.2.8, 1.3.3, or 1.4.0-rc.3 and later. But more importantly, enable user namespaces for all containers. This blocks the most serious attack vectors because user namespace processes lack access to the procfs files required for exploitation.
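As a concrete starting point, here is one way to turn user namespaces on. This is a sketch: `"userns-remap": "default"` and `--userns=auto` are the documented Docker and Podman mechanisms, but validate against your versions, since remapping changes volume ownership semantics and the `tee` below would clobber an existing daemon.json.

```shell
# Docker: remap container root to an unprivileged host UID range, daemon-wide
# (merge into any existing /etc/docker/daemon.json rather than overwriting)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

# Verify: uid 0 in the container now maps to a high, unprivileged host UID
docker run --rm alpine cat /proc/self/uid_map

# Podman: give each container its own automatic, non-overlapping UID range
podman run --rm --userns=auto alpine cat /proc/self/uid_map
```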
Supply Chain Attacks: The Invisible Threat
Container images pull in hundreds or thousands of dependencies, and every single one of them is a potential attack vector. The 2025 supply chain landscape was particularly brutal:
- npm compromise (September 2025) — Attackers phished a maintainer and compromised 18 packages including chalk, debug, and ansi-styles, injecting malicious code into packages downloaded billions of times weekly.
- GitHub Actions supply chain attack (January 2025) — The tj-actions/changed-files action was compromised via CVE-2025-30066, leaking CI/CD secrets in public build logs.
- Docker Desktop CVE-2025-9074 — A flaw that let containers reach the Docker Engine API directly; the fix now requires authentication for Engine API access from containers, closing a path by which a malicious container injected through the supply chain could gain immediate Engine access.
These attacks share a common pattern: they exploit trust relationships in the software supply chain. Your Dockerfile might be perfectly secure, but if a base image or dependency gets compromised upstream, your container inherits that compromise. It's a frustrating reality, but one you can actually defend against (more on that below).
Misconfiguration: The Most Common Vulnerability
For all the attention paid to sophisticated exploits, honestly, most container compromises stem from basic misconfigurations. Running as root, exposing the Docker socket, using images with known vulnerabilities, missing resource limits, overly permissive network policies — the boring stuff. And with AI-powered attack tools in 2026 able to autonomously identify misconfigurations and exploit paths across entire clusters, even simple oversights have become genuinely dangerous.
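Many of these misconfigurations are cheap to detect. As one example, here is a small audit loop (a sketch assuming only the Docker CLI is available) that flags running containers with the Docker socket mounted:

```shell
# Flag any running container that bind-mounts the Docker socket --
# code inside such a container can drive the daemon and is effectively root on the host
for id in $(docker ps -q); do
  mounts=$(docker inspect --format '{{range .Mounts}}{{.Source}} {{end}}' "$id")
  case "$mounts" in
    *"/var/run/docker.sock"*)
      echo "WARNING: $(docker inspect --format '{{.Name}}' "$id") mounts the Docker socket" ;;
  esac
done
```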
Securing Container Images: Build-Time Hardening
Use Minimal Base Images
The most effective way to reduce your container attack surface is to start with as little as possible. Every binary, library, and tool in your base image is a potential vulnerability waiting to happen.
Distroless images from Google or Chainguard Images strip out everything except the application runtime — no shell, no package manager, no debugging tools. Switching from a standard Ubuntu base to a distroless image routinely reduces image size from 800 MB to 15-30 MB and eliminates entire classes of vulnerabilities. That's not a typo — we're talking about a 95%+ reduction.
Here's a practical multi-stage Dockerfile for a Go application that produces a minimal, hardened image:
# Stage 1: Build
FROM golang:1.23-bookworm AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -ldflags="-w -s" -o /app/server ./cmd/server
# Stage 2: Runtime — distroless, non-root
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /app/server /server
# Run as the distroless non-root user (uid 65532)
USER nonroot:nonroot
ENTRYPOINT ["/server"]
For applications that need a minimal but usable base image (say, for debugging in staging environments), consider Wolfi-based images from Chainguard. They provide daily vulnerability patching and a minimal Alpine-like environment without the musl libc compatibility headaches that have bitten so many teams.
Multi-Stage Builds for Security
Multi-stage builds aren't just about image size — they're actually a security control. By separating build-time dependencies from runtime, you ensure that compilers, build tools, and development libraries never make it into your production image. An attacker who compromises a running container can't use gcc to compile exploits or curl to download additional payloads if those tools simply don't exist in the image.
For languages that produce static binaries (Go, Rust), you can use scratch as the final stage for the absolute minimum attack surface:
# Final stage with zero OS — just the binary
FROM scratch
COPY --from=builder /app/server /server
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
USER 65534:65534
ENTRYPOINT ["/server"]
Never Bake Secrets into Images
This seems obvious, but it's still one of the most common mistakes I see in the wild. Environment variables set with ENV, files copied with COPY, and anything done in a RUN statement gets permanently stored in the image layers. Even if you delete a secret in a later layer, it's still extractable from the earlier layer. Layers are forever.
Use BuildKit secret mounts instead:
# syntax=docker/dockerfile:1
FROM node:22-slim AS builder
WORKDIR /app
COPY package*.json ./
# Mount the npm token as a secret — it never appears in any layer
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN=$(cat /run/secrets/npm_token) \
    npm ci --registry=https://registry.npmjs.org/
COPY . .
RUN npm run build
Build with:
DOCKER_BUILDKIT=1 docker build \
  --secret id=npm_token,src=$HOME/.npm_token \
  -t myapp:latest .
Supply Chain Security: Signing, Verification, and SBOMs
Sign Your Images with Cosign and Sigstore
Image signing creates a cryptographic proof that an image was built by an authorized party and hasn't been tampered with since. Cosign, part of the Sigstore project, has become the standard tool for container image signing in the Linux ecosystem — and for good reason.
The simplest approach uses keyless signing with Sigstore's Fulcio certificate authority and Rekor transparency log:
# Install cosign
go install github.com/sigstore/cosign/v2/cmd/cosign@latest
# Sign an image (keyless — uses OIDC identity)
cosign sign myregistry.io/myapp:v1.2.3
# Verify an image
cosign verify \
  --certificate-identity=build@example.com \
  --certificate-oidc-issuer=https://accounts.google.com \
  myregistry.io/myapp:v1.2.3
For CI/CD pipelines, use workload identity (e.g., GitHub Actions OIDC tokens) so that signing happens automatically without managing keys:
# In a GitHub Actions workflow
- name: Sign container image
  run: cosign sign --yes ${{ env.REGISTRY }}/${{ env.IMAGE }}:${{ env.TAG }}
  env:
    COSIGN_EXPERIMENTAL: "true"
Signing events are written to Rekor, an append-only transparency log, creating an auditable trail of every image that was signed, when, and by whom. It's like a tamper-proof receipt for your container builds.
Generate and Attach SBOMs
A Software Bill of Materials (SBOM) is basically an ingredient list for your container image — an inventory of every component inside it. When a new vulnerability is disclosed, an SBOM lets you instantly determine whether any of your images are affected without scanning every image from scratch.
# Generate an SBOM using Syft
syft myregistry.io/myapp:v1.2.3 -o spdx-json > sbom.spdx.json
# Attach the SBOM to the image in the registry
cosign attach sbom --sbom sbom.spdx.json myregistry.io/myapp:v1.2.3
# Sign the SBOM attestation
cosign attest --predicate sbom.spdx.json \
  --type spdxjson myregistry.io/myapp:v1.2.3
Anyone pulling your image can now verify both the image signature and the SBOM attestation, confirming that the image is authentic and the SBOM accurately represents its contents.
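To make that "instantly determine" claim concrete: once you have an SPDX JSON SBOM on disk, answering "is package X in this image?" is a local file query, with no registry pull or rescan needed. A minimal sketch using only the Python standard library (the SBOM here is a tiny hand-written stand-in for real Syft output):

```shell
# Build a toy SPDX SBOM and look a package up in it
cat > sbom.spdx.json <<'EOF'
{
  "spdxVersion": "SPDX-2.3",
  "packages": [
    {"name": "openssl", "versionInfo": "3.0.13"},
    {"name": "zlib",    "versionInfo": "1.3.1"}
  ]
}
EOF

python3 - <<'EOF'
import json

with open("sbom.spdx.json") as f:
    sbom = json.load(f)

# Answer "is package X in this image?" without touching the image itself
target = "openssl"
for p in sbom.get("packages", []):
    if p["name"] == target:
        print(f"AFFECTED: {p['name']} {p['versionInfo']}")
EOF
# prints: AFFECTED: openssl 3.0.13
```

In practice you would loop this over the SBOMs of every image in your registry, which is exactly what turns a CVE announcement into a five-minute impact assessment.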
Enforce Image Policies in Kubernetes
Signing images is only half the equation — you also need to enforce that only signed images can run in your cluster. Sigstore's policy-controller (or Kyverno with Sigstore integration) can reject any pod that references an unsigned or unverified image:
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-images
spec:
  images:
    - glob: "myregistry.io/**"
  authorities:
    - keyless:
        identities:
          - issuer: https://accounts.google.com
            subject: "build@example.com"
      ctlog:
        url: https://rekor.sigstore.dev
Runtime Hardening: Locking Down Running Containers
Drop All Capabilities, Add Only What's Needed
Linux capabilities split the monolithic root privilege into granular units. By default, Docker containers run with a subset of capabilities that's still far too permissive for most workloads. The correct approach is to drop everything and add back only what your application actually requires:
# Docker run with minimal capabilities
docker run --rm \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt=no-new-privileges:true \
  myapp:latest
In a Docker Compose file:
services:
  web:
    image: myapp:latest
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp:noexec,nosuid,size=64m
    user: "65534:65534"
The no-new-privileges flag is especially important — it prevents processes inside the container from gaining additional privileges through setuid binaries or capability-raising operations, even if those binaries exist in the image. Don't skip this one.
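You can confirm the flag is actually in effect from inside a container, because the kernel exposes it as the `NoNewPrivs` field in `/proc/self/status`. A quick check (sketch; assumes Docker and an image with a shell, like `alpine`):

```shell
# With the flag set, the kernel should report NoNewPrivs as 1
docker run --rm --security-opt no-new-privileges:true alpine \
  grep NoNewPrivs /proc/self/status
# should print: NoNewPrivs: 1

# Without it, Docker's default leaves it at 0
docker run --rm alpine grep NoNewPrivs /proc/self/status
# should print: NoNewPrivs: 0
```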
Apply Seccomp Profiles
Seccomp (Secure Computing Mode) restricts which system calls a container can make. Docker ships with a default seccomp profile that blocks about 44 of the 300+ Linux syscalls, but you can (and should) create a custom profile tailored to your application's actual needs.
First, generate a profile by tracing your application's syscall usage with strace or OCI runtime hooks:
# Trace syscalls used by your application
strace -c -f -o /tmp/syscalls.log your-application
# Or use the oci-seccomp-bpf-hook (with Podman) to generate a profile automatically
sudo podman run --rm \
  --annotation io.containers.trace-syscall="of:/tmp/profile.json" \
  myapp:latest
Then apply the custom profile:
docker run --rm \
  --security-opt seccomp=/path/to/custom-seccomp.json \
  myapp:latest
A tightly scoped seccomp profile blocks entire categories of exploits. Blocking ptrace prevents process injection attacks. Blocking mount prevents filesystem escape attempts. Blocking clone with CLONE_NEWUSER prevents user namespace manipulation. Each blocked syscall is one less tool in an attacker's toolkit.
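To make the profile format concrete, here is a minimal allowlist-style skeleton in the seccomp JSON format Docker consumes. The syscall list is illustrative only, far too small for any real application, so generate yours from the tracing output above:

```shell
cat > custom-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat",
                "mmap", "brk", "exit_group", "futex", "epoll_wait"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

# Sanity-check that the profile is valid JSON before handing it to Docker
python3 -m json.tool custom-seccomp.json > /dev/null && echo "profile OK"
# prints: profile OK
```

The `defaultAction` of `SCMP_ACT_ERRNO` is what makes this an allowlist: any syscall not named simply fails with an error instead of executing.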
AppArmor and SELinux Confinement
AppArmor and SELinux provide mandatory access control (MAC) that limits what a container process can do at the filesystem, network, and capability level — even if it runs as root. Think of them as a safety net that catches anything the other controls miss.
On Ubuntu/Debian systems, create a custom AppArmor profile for your container:
# /etc/apparmor.d/containers/myapp-profile
#include <tunables/global>

profile myapp-container flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  # Allow read access to application files
  /app/** r,
  /app/server ix,

  # Allow tmp access
  /tmp/** rwk,

  # Deny write to sensitive paths
  deny /proc/sys/** w,
  deny /sys/** w,
  deny /etc/shadow r,
  deny /etc/passwd w,

  # Network access
  network inet tcp,
  network inet udp,
  network inet6 tcp,

  # Deny raw sockets
  deny network raw,
  deny network packet,

  # Deny mount operations
  deny mount,
  deny umount,
  deny pivot_root,
}
Load and apply it:
# Load the profile
sudo apparmor_parser -r /etc/apparmor.d/containers/myapp-profile
# Run with the custom profile
docker run --rm \
  --security-opt apparmor=myapp-container \
  myapp:latest
On RHEL/Fedora systems with SELinux, containers get the container_t SELinux type by default, which provides reasonable confinement. For stricter isolation, use custom SELinux policies or udica to generate container-specific policies automatically:
# Generate an SELinux policy from a running container's inspection
sudo podman inspect mycontainer | sudo udica myapp-policy
# Load the policy
sudo semodule -i myapp-policy.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
# Run with the custom policy
podman run --rm --security-opt label=type:myapp-policy.process myapp:latest
Read-Only Root Filesystem
Running containers with a read-only root filesystem prevents attackers from modifying binaries, installing malware, or persisting changes inside the container. This is honestly one of the simplest and most effective hardening measures available — and it's underused:
docker run --rm \
  --read-only \
  --tmpfs /tmp:noexec,nosuid,size=64m \
  --tmpfs /var/run:noexec,nosuid,size=16m \
  myapp:latest
Most well-designed applications work fine with a read-only root filesystem. If yours doesn't, that's actually useful feedback — it tells you which directories your application writes to, and you can mount those specific paths as writable tmpfs volumes with appropriate size limits and mount options.
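One low-effort way to find those write paths is `docker diff`, which lists every file a container has added or changed since it started. A sketch (`myapp:latest` and the 30-second exercise window are placeholders):

```shell
# Run the app writable once and exercise its main code paths
docker run -d --name probe myapp:latest
sleep 30

# Show every path the container added (A) or changed (C) since start --
# these are the directories that need tmpfs or volume mounts under --read-only
docker diff probe

# Clean up
docker rm -f probe
```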
Rootless Containers: Eliminating the Root Daemon Attack Surface
Why Rootless Matters
Traditional Docker runs a daemon as root, creating a centralized attack surface that, if compromised, grants full host privileges. That's a pretty terrifying single point of failure when you think about it.
Rootless containers eliminate this risk by running the entire container lifecycle — including the runtime daemon — as an unprivileged user. Even if an attacker achieves container escape, they're trapped within the user namespace and can't escalate to host root.
Podman: Rootless by Default
Podman was designed from the ground up for rootless operation. It uses a daemonless architecture where each container runs as a direct child process of the user, with no persistent root-owned daemon involved. This is one of the reasons I'm a big fan of Podman for security-conscious environments:
# Podman runs rootless out of the box — no configuration needed
podman run --rm -d \
  --name webapp \
  -p 8080:8080 \
  --cap-drop=ALL \
  --security-opt=no-new-privileges:true \
  --read-only \
  myapp:latest
# Verify it's running rootless
podman info --format '{{.Host.Security.Rootless}}'
# Output: true
# Check user namespace mapping
podman unshare cat /proc/self/uid_map
# Output shows the UID mapping from container to host
Podman integrates natively with SELinux on RHEL/Fedora systems and automatically applies the container_t SELinux context to containerized processes, providing an additional layer of mandatory access control that operates independently of user namespaces.
Docker Rootless Mode
Docker added rootless support as an experimental feature in version 19.03 and made it generally available in 20.10, though it's still not the default (which is a bit surprising at this point). To set it up:
# Install prerequisites
sudo apt-get install -y uidmap dbus-user-session
# Install Docker rootless
dockerd-rootless-setuptool.sh install
# Configure your shell
export PATH=/home/$USER/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
# Verify rootless operation
docker info --format '{{.SecurityOptions}}'
# Should include "rootless"
One important limitation to be aware of: rootless containers rely on user-mode networking (slirp4netns or pasta) rather than the root-configured bridges and veth pairs of rootful Docker, so some networking features behave differently. Binding to privileged ports (below 1024) requires extra configuration: either lower the unprivileged-port sysctl, or grant the capability to RootlessKit, the helper behind rootless Docker's networking:
# Allow unprivileged users to bind to port 80 and above
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
# Or grant the bind capability to the RootlessKit binary
sudo setcap cap_net_bind_service=ep $(which rootlesskit)
systemctl --user restart docker
Security Comparison: Rootful vs. Rootless
The security implications are substantial. In a rootful Docker setup, compromising the Docker socket gives an attacker full root access to the host — it's essentially equivalent to granting root SSH access. In a rootless setup, even total compromise of the container runtime only gives the attacker the privileges of the unprivileged user running the daemon. Combined with user namespace mapping, the blast radius is dramatically reduced.
For CI/CD environments and developer workstations, rootless containers are a particularly strong recommendation. These environments frequently run untrusted code — build steps, test suites, third-party dependencies — and the reduced blast radius of rootless execution provides meaningful protection against supply chain attacks that target build pipelines.
Kubernetes Pod Security Standards
The Three Security Profiles
Kubernetes Pod Security Admission (PSA), which replaced PodSecurityPolicy (deprecated in 1.21 and removed in Kubernetes 1.25), defines three security profiles you can enforce at the namespace level:
- Privileged — Unrestricted. No security constraints applied. Use this only for system-level workloads that genuinely need full host access (and isolate them in dedicated namespaces).
- Baseline — Prevents known privilege escalations. Blocks hostNetwork, hostPID, hostIPC, privileged containers, and most dangerous capabilities while remaining compatible with most workloads.
- Restricted — Maximum security. Requires running as non-root, dropping all capabilities, defining a seccomp profile, using only approved volume types, and preventing privilege escalation. This is what your production workloads should target.
Enforcing the Restricted Profile
Apply Pod Security Standards using namespace labels:
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce restricted profile — reject non-compliant pods
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also surface restricted-profile violations as warnings to clients
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    # Audit all violations for logging
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
A pod spec that complies with the restricted profile looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 65534
    runAsGroup: 65534
    fsGroup: 65534
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myregistry.io/myapp:v1.2.3@sha256:abc123...
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"
        requests:
          cpu: "100m"
          memory: "128Mi"
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir:
        sizeLimit: 64Mi
Notice the image reference includes a digest (@sha256:...) rather than just a tag. This matters. Tags are mutable — an attacker who compromises your registry can replace a tagged image with a malicious one. Digest references are immutable, guaranteeing you're running exactly the image you expect.
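If you only have a tag, you can resolve it to its digest without pulling the image, for example with `skopeo` or `crane` (both shown as sketches; use whichever your toolchain already includes):

```shell
# Resolve a mutable tag to its immutable digest with skopeo
skopeo inspect --format '{{.Digest}}' docker://myregistry.io/myapp:v1.2.3

# Or with crane (from go-containerregistry)
crane digest myregistry.io/myapp:v1.2.3

# Then reference the image as myapp:v1.2.3@sha256:<digest> in your manifests
```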
Network Policies: Default Deny
Here's something that catches a lot of people off guard: Kubernetes allows all pod-to-pod communication by default. That's a massive lateral movement risk. Apply a default-deny network policy to every namespace, then explicitly allow only the traffic your application needs:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-webapp-traffic
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: webapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ingress-controller
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to: # Allow DNS resolution
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
Vulnerability Scanning: Trivy and Grype in Practice
Scanning Images with Trivy
Trivy, developed by Aqua Security, has become the go-to open-source vulnerability scanner for container images. It scans OS packages, language-specific dependencies, IaC files, and even secrets — all in a single pass. It's genuinely impressive how much ground it covers:
# Scan an image for vulnerabilities
trivy image myapp:latest
# Scan with severity filter and exit code for CI/CD
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest
# Generate an SBOM in CycloneDX format
trivy image --format cyclonedx --output sbom.json myapp:latest
# Scan a Dockerfile for misconfigurations
trivy config Dockerfile
# Scan a running Kubernetes cluster
trivy k8s --report summary cluster
For CI/CD integration, embed Trivy as a build gate that fails the pipeline if critical vulnerabilities are found:
# GitHub Actions example
- name: Scan container image
  uses: aquasecurity/trivy-action@master  # pin to a release tag or commit SHA in production
  with:
    image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE }}:${{ env.TAG }}
    format: 'sarif'
    output: 'trivy-results.sarif'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'

- name: Upload scan results
  uses: github/codeql-action/upload-sarif@v3
  if: always()
  with:
    sarif_file: 'trivy-results.sarif'
Scanning with Grype
Grype, from Anchore, takes a more focused approach. Where Trivy is an all-in-one scanner, Grype concentrates specifically on vulnerability detection with an emphasis on accuracy and low false-positive rates. It pairs nicely with Syft for SBOM generation:
# Generate an SBOM with Syft
syft myapp:latest -o json > sbom.json
# Scan the SBOM with Grype
grype sbom:sbom.json --fail-on critical
# Or scan an image directly
grype myapp:latest --only-fixed --output table
# Use VEX to suppress acknowledged false positives
grype myapp:latest --vex ./vex-statements.json
Grype's incremental database updates are significantly smaller than Trivy's, making it more efficient in environments with limited bandwidth or when running frequent scans throughout the day.
Which Scanner Should You Choose?
Use Trivy if you want a single tool that covers vulnerabilities, misconfigurations, secrets, and SBOM generation — it's the best all-in-one choice for teams that want simplicity. Use Grype if you need maximum accuracy in vulnerability detection with minimal false positives, especially if you're already using Syft for SBOM generation.
Many teams actually run both: Trivy in CI/CD for broad coverage and Grype for targeted pre-deployment verification. There's no rule that says you have to pick just one.
Runtime Monitoring and Threat Detection
Falco for Runtime Security
Vulnerability scanning catches known issues in images. But what about unknown threats in running containers? That's where runtime monitoring comes in.
Falco, a CNCF graduated project, uses eBPF (or a kernel module) to monitor system calls from containers and trigger alerts when anomalous behavior is detected. It's kind of like an intrusion detection system specifically built for containers.
Deploy Falco on your Kubernetes cluster:
# Install Falco with Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/..." \
  --set driver.kind=modern_ebpf
Falco ships with comprehensive default rules that detect common container attacks. Here's a custom rule that detects when a container process attempts to read sensitive files:
- rule: Sensitive File Access in Container
  desc: Detect attempts to read sensitive files within containers
  condition: >
    container and
    open_read and
    (fd.name startswith /etc/shadow or
     fd.name startswith /etc/kubernetes or
     fd.name startswith /var/run/secrets) and
    not proc.name in (kubelet, kube-proxy)
  output: >
    Sensitive file accessed in container
    (file=%fd.name user=%user.name container=%container.name
    image=%container.image.repository command=%proc.cmdline)
  priority: WARNING
  tags: [container, filesystem, mitre_credential_access]
Container-Aware Audit Logging
Don't forget about Kubernetes audit logging — it captures API server actions that affect container security. A well-configured audit policy records who created, modified, or deleted pods, secrets, roles, and other security-relevant resources. You'll be glad you had these logs when (not if) you need to investigate an incident:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log all changes to pods and workloads at RequestResponse level
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods", "pods/exec", "pods/portforward"]
      - group: "apps"
        resources: ["deployments", "daemonsets", "statefulsets"]
    verbs: ["create", "update", "patch", "delete"]
  # Log secret access at Metadata level (don't log secret values)
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Log RBAC changes
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
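A policy file does nothing until the API server is pointed at it. On a self-managed control plane, that means adding flags to kube-apiserver (shown here as a sketch for a kubeadm-style static pod manifest; managed Kubernetes services expose audit configuration through their own interfaces):

```shell
# Added to the kube-apiserver command in
# /etc/kubernetes/manifests/kube-apiserver.yaml on kubeadm clusters
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30      # days to retain old audit log files
--audit-log-maxbackup=10   # number of rotated files to keep
--audit-log-maxsize=100    # megabytes per file before rotation
```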
Putting It All Together: A Container Security Checklist
Container security isn't any single technique — it's layers of defense that work together. Here's a practical checklist you can use to assess and improve your container security posture:
Build Phase:
- Use minimal base images (distroless, Chainguard, or scratch)
- Implement multi-stage builds to exclude build tools from runtime images
- Never store secrets in image layers — use BuildKit secret mounts
- Pin base image versions with digest references, not mutable tags
- Scan images for vulnerabilities in CI/CD with Trivy or Grype
- Generate and attach SBOMs to every image
- Sign images with Cosign before pushing to a registry
- Rebuild images regularly to pick up base image security patches
Deploy Phase:
- Enforce image signature verification in your cluster
- Apply Pod Security Standards at the restricted level
- Reference images by digest, not just tags
- Implement default-deny network policies in every namespace
- Set resource limits on all containers to prevent DoS
Runtime Phase:
- Run containers as non-root with a read-only root filesystem
- Drop all Linux capabilities, add back only what's needed
- Apply seccomp profiles to restrict system call access
- Use AppArmor or SELinux for mandatory access control
- Consider rootless container runtimes (Podman or Docker rootless)
- Enable user namespaces to mitigate container escape vulnerabilities
- Deploy runtime monitoring with Falco or similar tools
- Enable and review audit logging for security-relevant events
Ongoing:
- Keep runc, containerd, Docker, and Podman updated — container escape CVEs are regular occurrences
- Rescan existing images when new CVEs are published
- Review and tighten seccomp and AppArmor profiles as your application evolves
- Audit RBAC permissions and service account privileges regularly
- Practice incident response scenarios for container compromise events
Conclusion: Defense in Depth for the Container Era
Container security in 2026 demands a defense-in-depth approach that spans the entire lifecycle — from build to deploy to runtime. The runc escape vulnerabilities, supply chain attacks, and AI-assisted exploitation tools we've seen over the past year make it clear that no single security control is sufficient.
The good news? The tooling has matured significantly. Cosign and Sigstore make supply chain verification practical. Trivy and Grype provide excellent vulnerability scanning. Podman and Docker rootless mode eliminate the root daemon attack surface. Kubernetes Pod Security Standards provide built-in policy enforcement. And runtime monitoring tools like Falco catch threats that static analysis simply can't.
My recommendation: start with the highest-impact changes. Switch to rootless containers, enforce the restricted Pod Security Standard, scan all images in CI/CD, and deploy runtime monitoring. Then progressively tighten your security posture with custom seccomp profiles, AppArmor policies, image signing, and SBOM-based vulnerability tracking. Each layer you add makes your container infrastructure meaningfully harder to compromise — and at the end of the day, that's what defense in depth is all about.