Profile Applicability:
 • Level 1

Description:
Disable public IP addresses for cluster nodes so that nodes have only private IP addresses. Private nodes are nodes with no public IP address.

Rationale:
 Disabling public IP addresses on cluster nodes restricts access to only internal networks, forcing attackers to obtain local network access before attempting to compromise the underlying Kubernetes hosts. This reduces the potential attack surface of your Kubernetes environment.

Impact:

  • To use private nodes, the worker nodes must be placed in private subnets and the cluster's API server endpoint must have private access enabled.

  • Private nodes do not have outbound access to the public internet. If outbound internet access is required for private nodes, route their egress traffic through a NAT gateway.
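The NAT-gateway option above can be sketched with the AWS CLI as follows. This is a minimal sketch, not a complete network build-out; all IDs in angle brackets are placeholders you must substitute, and the Elastic IP allocation ID comes from the output of the first command.

```shell
# Sketch: give private nodes outbound internet access via a NAT gateway.
# All <...> IDs are placeholders.

# Allocate an Elastic IP for the NAT gateway.
aws ec2 allocate-address --domain vpc

# Create the NAT gateway in a PUBLIC subnet, using the allocation ID
# returned by the previous command.
aws ec2 create-nat-gateway \
  --subnet-id <public-subnet-id> \
  --allocation-id <eip-allocation-id>

# Send the private subnets' default route through the NAT gateway.
aws ec2 create-route \
  --route-table-id <private-route-table-id> \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id <nat-gateway-id>
```

Note that the route is added to the private subnets' route table only; the NAT gateway itself must sit in a public subnet with a route to an Internet Gateway.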

Default Value:
 By default, Amazon EKS clusters are created with public access to the cluster's API server endpoint enabled and private access disabled, and worker nodes launched in public subnets receive public IP addresses.

Pre-requisites:
 Ensure that the cluster is configured with private subnets and no public IP addresses for the worker nodes. The private subnets should not be associated with a route table that has a route to an Internet Gateway (IGW). 
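A sketch of how these pre-requisites can be checked with the AWS CLI for a single node subnet (the subnet ID is a placeholder):

```shell
# Sketch: verify the pre-requisites for one node subnet.

# 1. Confirm the subnet does not auto-assign public IPs
#    (the query should print "False").
aws ec2 describe-subnets \
  --subnet-ids <node-subnet-id> \
  --query 'Subnets[0].MapPublicIpOnLaunch'

# 2. List the routes of the subnet's associated route table; no route
#    should target an Internet Gateway (a GatewayId starting with "igw-").
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=<node-subnet-id> \
  --query 'RouteTables[].Routes[]'
```

Repeat for each subnet used by the worker nodes.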

Remediation:

Test Plan:

Using AWS Console:

  1. Go to the EKS Console and review the cluster's VPC and Subnet configurations to ensure that nodes are using private IPs and are not associated with a public route.

  2. Confirm that the private endpoint is enabled and public access is disabled.

Using AWS CLI:
1. To enable private access to the cluster's API server endpoint and restrict public access to a specific CIDR range, run the following AWS CLI command:

aws eks update-cluster-config \
  --region <region-code> \
  --name <my-cluster> \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true

2. This command will:

  • Enable private access to the Kubernetes API.

  • Optionally restrict public access to specific CIDR ranges.

  • Ensure that the control plane endpoint is not publicly accessible unless explicitly allowed.
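To confirm the resulting endpoint configuration, the cluster can be inspected with `aws eks describe-cluster` (cluster name and region are placeholders, matching the command above):

```shell
# Sketch: verify the cluster's endpoint access settings after the update.
aws eks describe-cluster \
  --region <region-code> \
  --name <my-cluster> \
  --query 'cluster.resourcesVpcConfig.{private:endpointPrivateAccess,public:endpointPublicAccess,cidrs:publicAccessCidrs}'
```

`private` should be `true`, and `cidrs` should contain only the ranges you explicitly allowed.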


Implementation plan:

Using AWS Console:

  1. Create the cluster with private nodes by ensuring the worker nodes are placed in private subnets with no public IP addresses.

  2. In the VPC settings, configure the cluster to use private subnets, ensuring no public IPs are assigned to the nodes.

Using AWS CLI:

1. To disable public access and enable private access on an existing cluster's API server endpoint, use the following command:

aws eks update-cluster-config \
  --region <region-code> \
  --name <cluster-name> \
  --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false
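For creating a new cluster whose nodes have only private IPs (rather than reconfiguring an existing one), eksctl offers a shorthand. This sketch assumes eksctl is installed; the name and region are placeholders:

```shell
# Sketch: create a new cluster whose worker nodes get only private IPs.
eksctl create cluster \
  --name <cluster-name> \
  --region <region-code> \
  --node-private-networking
```

With `--node-private-networking`, eksctl places the nodes in private subnets so they are not assigned public IP addresses.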


Backout Plan:

Using AWS Console:

 1. If the nodes need to be made public, enable automatic public IP assignment on the subnets (or move the node groups to public subnets) and relaunch the nodes so that they receive public IP addresses.


Using AWS CLI:

  1. Revert to Public Node Access: If enabling private nodes causes disruption or issues with the workloads, you can revert to public node access using:

    aws eks update-cluster-config \
      --region <region-code> \
      --name <my-cluster> \
      --resources-vpc-config endpointPublicAccess=true
  2. Adjust CIDR Blocks: Adjust the CIDR blocks to allow broader or more restricted access to the public endpoint if necessary.
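After the backout, the restored endpoint configuration can be confirmed with the same describe-cluster query used for testing (placeholders as above):

```shell
# Sketch: confirm public endpoint access is restored after the backout.
aws eks describe-cluster \
  --region <region-code> \
  --name <my-cluster> \
  --query 'cluster.resourcesVpcConfig.endpointPublicAccess'
```

Once the cluster update finishes, the query should return `true`.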

References:

  1. EKS Cluster Endpoint Documentation