Profile Applicability:

  • Level 2

Description:
 Kubernetes allows secrets to be exposed either as environment variables or as files within a container. It is considered more secure to mount secrets as files rather than environment variables because environment variables are more easily exposed in logs, process listings, and other system outputs. This check ensures that secrets are exposed as files in containers instead of environment variables.
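 For illustration, the fragment below shows the environment-variable pattern this check flags; the pod name, container image, secret name, and key are hypothetical placeholders:

     apiVersion: v1
     kind: Pod
     metadata:
       name: app
     spec:
       containers:
       - name: app
         image: nginx
         env:
         - name: DB_PASSWORD          # secret value placed in the environment (the pattern this check flags)
           valueFrom:
             secretKeyRef:
               name: app-secret
               key: password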

Rationale:
 Exposing secrets through environment variables increases the risk of accidental exposure, as they can be viewed by anyone who has access to the container's environment, including users who might access logs, process lists, or shell history. Mounting secrets as files inside containers offers better security, as they are less likely to be exposed unintentionally.
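 As a concrete illustration of that exposure path, anyone with exec access to the pod can read such variables directly (assuming the container image provides printenv; the pod and variable names follow the hypothetical example above):

     kubectl exec app -- printenv | grep DB_PASSWORD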

Impact:

  • Pros:

    • Reduces the risk of exposing sensitive information through logs or process listings.

    • Keeps secret values out of the process environment, so they are not inherited by child processes or captured in diagnostic output.

  • Cons:

    • Additional configuration is required to mount secrets as files in containers.

    • Might require updates to existing application code that accesses secrets through environment variables.

Default Value:
 By default, Kubernetes allows secrets to be exposed as either environment variables or files. Secrets as environment variables are often used due to simplicity, but this approach is less secure than mounting them as files.

Pre-requisites:
 Ensure that the application is capable of reading secrets from files, and that secrets are properly mounted inside the containers.
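 As a quick sanity check that the application side is ready, a mounted secret is read as an ordinary file; the path below follows the hypothetical example used in the Implementation Plan (secret key password mounted under /etc/secrets):

     cat /etc/secrets/password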

Remediation:

Test Plan:

Using Azure Console:

  1. Navigate to the Azure portal and access the Kubernetes cluster.

  2. Under Kubernetes resources > Workloads, review how each workload's containers consume Secrets.

  3. Ensure that secrets are not exposed as environment variables (secretKeyRef entries under env) but are instead mounted as files through volumes and volumeMounts.

Using Azure CLI:

  1. Retrieve credentials for the cluster so that kubectl targets it:

     az aks get-credentials --resource-group <resource-group> --name <cluster-name>

  2. Export the pod specifications and search them for secretKeyRef entries under env, which indicate secrets injected as environment variables:

     kubectl get pods --all-namespaces -o yaml | grep -B 10 "secretKeyRef"

  3. Ensure the search returns no matches, or that every workload found has a documented justification for consuming secrets as environment variables.

Implementation Plan:

Using Azure Console:

  1. Access the Azure portal and go to your AKS cluster.

  2. Under Kubernetes resources > Workloads, identify each workload that injects secrets through environment variables (secretKeyRef entries under env).

  3. Update those workloads, through your normal deployment process, to mount the secrets as volumes instead, and verify in the portal that the new configuration has rolled out.

Using Azure CLI:

  1. Update the workload manifest so that the secret is mounted as a volume, then apply it:

     kubectl apply -f <workload>.yaml --namespace=<namespace>

  2. Ensure the manifest removes environment-variable references to the secret (secretKeyRef under env) and adds a volume and volumeMount instead. Here is an example of a Pod that mounts a secret as files; the pod name, image, secret name, and mount path are placeholders:

     apiVersion: v1
     kind: Pod
     metadata:
       name: app
     spec:
       containers:
       - name: app
         image: nginx
         volumeMounts:
         - name: secret-volume
           mountPath: /etc/secrets
           readOnly: true
       volumes:
       - name: secret-volume
         secret:
           secretName: app-secret
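
  3. Optionally, confirm that the secret is now available as files inside the container; the pod name and mount path follow the example above:

     kubectl exec app --namespace=<namespace> -- ls /etc/secrets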


Backout Plan:

Using Azure Console:

  1. If mounting secrets as files causes issues with application functionality, revert the affected workloads to their previous configuration from the Azure portal or through your normal deployment process.

Using Azure CLI:

  1. Revert any changes by re-applying the previous workload manifest, or by rolling back the deployment:

     kubectl rollout undo deployment/<deployment-name> --namespace=<namespace>
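
  2. Confirm that the rollback has completed and the workload is healthy:

     kubectl rollout status deployment/<deployment-name> --namespace=<namespace>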


References:

  1. Azure Kubernetes Service (AKS) Documentation

  2. Kubernetes Secrets Documentation