Profile Applicability:
• Level 1
Description:
Pods should not be allowed to run with the hostIPC field set to true, unless absolutely necessary. hostIPC is set at the pod level (spec.hostIPC); containers in a pod that shares the host IPC namespace can interact with processes outside the container, which can expose critical information and increase the potential attack surface.
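For reference, this is the shape of a pod spec the control is meant to block. The file path, pod name, and image below are illustrative assumptions, not taken from the benchmark:

```shell
# Write a minimal example manifest with the disallowed setting.
cat <<'EOF' > /tmp/hostipc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ipc-demo              # illustrative name
spec:
  hostIPC: true               # shares the host IPC namespace -- disallowed
  containers:
  - name: app
    image: nginx:1.25         # illustrative image
EOF
# Confirm the flag is present in the manifest.
grep -n 'hostIPC' /tmp/hostipc-pod.yaml
```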
Rationale:
Allowing containers to share the host's IPC namespace can lead to security risks, such as unauthorized access to inter-process communication between processes running on the host and other containers. Admission control policies should prevent the use of the hostIPC flag, with exceptions made only for containers that explicitly require it for valid use cases.
Impact:
Pros:
Reduces the risk of privilege escalation by restricting the ability of containers to interact with processes on the host.
Strengthens the security posture of the Kubernetes cluster by preventing unauthorized containers from accessing sensitive host processes.
Cons:
Some use cases, such as monitoring tools or system-level containers, may require the hostIPC flag. These cases should be carefully controlled by assigning them to specific policies and service accounts.
Default Value:
By default, Kubernetes does not restrict the creation of containers with the hostIPC flag set to true.
Pre-requisites:
Access to the Kubernetes cluster with sufficient privileges to define and enforce Pod Security Admission (PSA) settings or other admission control policies. Note that Pod Security Policies (PSPs) were removed in Kubernetes v1.25, so clusters on current versions must use PSA or a policy engine instead.
Understanding of the workloads that require access to the host IPC namespace.
Remediation:
Test Plan:
Using AWS Console:
Review the Pod Security Admission settings (or other admission control policies) applied to each namespace to ensure that hostIPC pods are not admitted without an explicit exemption.
Check the current pod configurations for the hostIPC flag to verify that no unauthorized containers are sharing the host IPC namespace.
Using AWS CLI:
To identify pods running with the hostIPC flag set to true, run the following command:
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.hostIPC == true) | "\(.metadata.namespace)/\(.metadata.name)"'
Alternatively, you can search for hostIPC: true in specific namespaces, excluding kube-system:
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.metadata.namespace != "kube-system" and .spec.hostIPC == true) | {pod: .metadata.name, namespace: .metadata.namespace, container: .spec.containers[].name}'
Review the output to ensure no unauthorized pods are configured to share the host IPC namespace.
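The jq filter above can be sanity-checked offline before running it against a live cluster. The fixture below is hand-made sample data standing in for the output of kubectl get pods -o json; the pod names are illustrative:

```shell
# Build a small fixture: one compliant pod, one pod with hostIPC: true.
cat <<'EOF' > /tmp/pods.json
{"items":[
 {"metadata":{"name":"good-pod","namespace":"default"},
  "spec":{"containers":[{"name":"app"}]}},
 {"metadata":{"name":"bad-pod","namespace":"default"},
  "spec":{"hostIPC":true,"containers":[{"name":"app"}]}}
]}
EOF
# The same filter used in the test plan: only the non-compliant pod matches.
jq -r '.items[] | select(.spec.hostIPC == true) | "\(.metadata.namespace)/\(.metadata.name)"' /tmp/pods.json
```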
Implementation Plan:
Using AWS Console:
Add Pod Security Admission (PSA) policies to restrict the admission of hostIPC containers in each namespace with user workloads.
Label the namespaces to enforce the restriction of hostIPC containers. For example:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
If needed, create separate policies that allow containers requiring hostIPC to run with proper access control. Ensure that only necessary service accounts or users are granted this ability.
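As a sketch of such an exception, the PSA labels can also be set declaratively in the Namespace manifest rather than via kubectl label. The namespace name and file path below are illustrative assumptions:

```shell
# Declarative alternative: a Namespace manifest that permits hostIPC
# workloads while still surfacing warnings at the stricter level.
cat <<'EOF' > /tmp/ns-monitoring.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring                                 # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged # admits hostIPC pods here
    pod-security.kubernetes.io/warn: restricted    # still warn on violations
EOF
# Confirm both PSA labels are present before applying.
grep -c 'pod-security.kubernetes.io' /tmp/ns-monitoring.yaml
```

Scoping the privileged level to a single, well-known namespace keeps the restricted default intact everywhere else, and RBAC can then limit who may create pods in that namespace.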
Using AWS CLI:
Enforce the restricted policy across all namespaces or specific namespaces:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
Optionally, you can label all namespaces to enforce the restricted policy:
kubectl label --overwrite ns --all pod-security.kubernetes.io/enforce=restricted
Ensure that only specific service accounts or users are granted the ability to run containers with hostIPC: true.
Backout Plan:
Using AWS Console:
If the new policy causes issues, remove the enforced policy by resetting the namespace labels.
If containers must use hostIPC, restore the previous settings for the affected namespaces or workloads that need this capability.
Using AWS CLI:
To revert the policy, remove the enforce label from the affected namespaces. The trailing dash in the command below deletes the label; setting the label to an empty string instead is not a valid Pod Security level:
kubectl label ns NAMESPACE pod-security.kubernetes.io/enforce-
Allow hostIPC containers only where necessary, and restore the previous configuration for any critical workloads.
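After backing out, it can help to audit which namespaces still carry an enforce label. A minimal offline sketch, using hand-made fixture data in place of the real output of kubectl get ns -o json (namespace names are illustrative):

```shell
# Fixture: one namespace still enforcing "restricted", one with no PSA label.
cat <<'EOF' > /tmp/ns.json
{"items":[
 {"metadata":{"name":"default","labels":{"pod-security.kubernetes.io/enforce":"restricted"}}},
 {"metadata":{"name":"legacy","labels":{}}}
]}
EOF
# Print each namespace with its enforce level, defaulting to "none".
jq -r '.items[] | "\(.metadata.name)\t\(.metadata.labels["pod-security.kubernetes.io/enforce"] // "none")"' /tmp/ns.json
```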