Profile Applicability:
• Level 1

Description:
Ensure that anonymous requests to the Kubelet server are disabled so that every request to the Kubelet API must be authorized through a configured authentication mechanism.

Rationale:
When anonymous authentication is enabled, requests that are not rejected by any other configured authentication method are treated as anonymous and are still served by the Kubelet. Allowing such access may expose the cluster to unauthorized actions or compromise sensitive information in the Kubernetes cluster. Disabling anonymous authentication ensures that all access to the Kubelet is properly authenticated.

Impact:
Pros:

  • Prevents unauthenticated access to Kubelet APIs.

  • Strengthens node-level access control and reduces exposure.

  • Complies with Kubernetes security best practices.

Cons:

  • May impact ease of access for tools or users relying on anonymous access for debugging or automation.

  • Slight overhead introduced by enforcing authentication.

Default Value:
Refer to the AWS EKS and base AMI documentation; the effective default depends on the OS and AMI. Note that the upstream Kubelet defaults anonymous authentication to enabled unless the config file or command-line flags disable it.

Pre-requisites:

  • SSH or Session Manager access to nodes.

  • Permissions to view and modify Kubelet configuration files or startup parameters.

  • Kubernetes access to run privileged pods for node inspection, if direct node access is not used (a sample debug command is sketched below).
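
Where SSH or Session Manager access is not available, a node debug pod can be used to read the Kubelet configuration from the host filesystem. The following is a minimal sketch; the node name and image are placeholders, and the host config path is an assumption based on EKS-optimized AMIs, so verify it on your nodes.

    # Start an interactive debug pod on the node (host filesystem is mounted at /host)
    kubectl debug node/ip-10-0-0-10.ec2.internal -it --image=busybox:1.36
    # Inside the pod, inspect the Kubelet config (path assumed for EKS-optimized AMIs)
    cat /host/etc/kubernetes/kubelet/kubelet-config.json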

Remediation:

Test Plan:

Using AWS Console:

  1. Navigate to Amazon EC2 > Instances and connect to each worker node via Session Manager or EC2 Instance Connect.

  2. Run the following command to find the active Kubelet process:

    • ps -ef | grep kubelet

  3. Identify the path to the config file (look for --config).

  4. View the config file:

    • sudo less /path/to/kubelet-config.json

  5. Confirm that the following entry exists:
     "authentication": { "anonymous": { "enabled": false } }

  6. Also check for a command-line override:

    • Look for --anonymous-auth=false in the Kubelet arguments; command-line flags take precedence over the config file. A combined check of these steps is sketched below.
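
The manual checks above can be combined into a single inspection on each node. This is a minimal sketch, assuming the Kubelet is started with a --config flag and that jq is installed on the node; otherwise grep the config file directly.

    # Find the active Kubelet process and extract the path passed to --config
    CONFIG_PATH=$(ps -ef | grep '[k]ubelet' | sed -n 's/.*--config[= ]\([^ ]*\).*/\1/p' | head -n1)
    echo "Kubelet config file: ${CONFIG_PATH}"
    # Confirm anonymous authentication is disabled in the config file (expected: false)
    sudo jq '.authentication.anonymous.enabled' "${CONFIG_PATH}"
    # Check for a command-line override; flags take precedence over the config file
    ps -ef | grep '[k]ubelet' | grep -o -- '--anonymous-auth=[^ ]*' || echo "no --anonymous-auth flag set"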

Using AWS CLI:

  1. Get the list of nodes:

    kubectl get nodes

  2. Start a proxy to access the /configz endpoint:

    kubectl proxy --port=8080

  3. In another terminal, for each node:

    export NODE_NAME=my-node-name
    curl http://localhost:8080/api/v1/nodes/${NODE_NAME}/proxy/configz

  4. Search the output for this JSON block and confirm that anonymous authentication is disabled (a scripted check across all nodes is sketched after this list):

    "authentication": { "anonymous": { "enabled": false } }

Implementation Plan:

Using AWS Console:

  1. Connect to each worker node via EC2 Instance Connect or Session Manager.

  2. Locate the Kubelet config file using ps -ef | grep kubelet.

  3. Edit the file and confirm that the authentication block disables anonymous access:

    "authentication": { "anonymous": { "enabled": false } }

  4. Save and close the file.

  5. If using systemd, also edit the systemd drop-in config file located at:

    • /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf

  6. Add or confirm the following flag is present in the Kubelet arguments:

    • --anonymous-auth=false

  7. Reload and restart the kubelet service (a consolidated example follows this list).
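
For reference, the relevant portion of a compliant Kubelet config file and the restart sequence might look as follows. This is a sketch, not the exact file on your nodes: the path and the kind/apiVersion header are assumptions based on a typical EKS-optimized AMI, and any other settings already present in the file must be preserved.

Example authentication block (assumed path: /etc/kubernetes/kubelet/kubelet-config.json):

    {
      "kind": "KubeletConfiguration",
      "apiVersion": "kubelet.config.k8s.io/v1beta1",
      "authentication": {
        "anonymous": { "enabled": false }
      }
    }

Apply the change:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet.service
    sudo systemctl status kubelet -l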

Using AWS CLI:

  1. Edit the configuration file on the node, or apply the change via automation tools.

  2. Add the following content if missing:

    "authentication": { "anonymous": { "enabled": false } }

  3. Or, if the setting is passed as a startup parameter, update the kubelet systemd args file:

    --anonymous-auth=false

  4. Reload and restart services (a scripted version of these steps is sketched after this list):

    systemctl daemon-reload
    systemctl restart kubelet.service
    systemctl status kubelet -l
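
As one possible scripted approach, the edit can be made with jq while keeping a backup copy for the backout plan. This is a sketch under the assumption that jq is available on the node and that the config file lives at the path shown (typical of EKS-optimized AMIs); verify both before running it.

    # Back up the current Kubelet config, then disable anonymous authentication
    CONFIG=/etc/kubernetes/kubelet/kubelet-config.json   # path assumed; confirm on your AMI
    sudo cp "${CONFIG}" "${CONFIG}.bak"
    sudo jq '.authentication.anonymous.enabled = false' "${CONFIG}.bak" | sudo tee "${CONFIG}" > /dev/null
    # Restart the Kubelet and confirm it is healthy
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet.service
    sudo systemctl status kubelet -l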

Backout Plan:

Using AWS Console:

  1. Access the node via Session Manager or EC2 Instance Connect.

  2. Restore the previous configuration file.

  3. If the change was applied via systemd unit files, revert the systemd arguments to their original state.

  4. Restart the kubelet service.

Using AWS CLI:

  1. Restore the prior config or remove the entry:

    "authentication": { "anonymous": { "enabled": false } }

  2. Or remove the --anonymous-auth=false flag from the systemd unit file.

  3. Reload and restart the kubelet (a restore sketch follows this list):

    systemctl daemon-reload
    systemctl restart kubelet.service
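
If a backup copy was taken during implementation (as in the sketch above), the backout can simply restore it. The paths here are assumptions and must match whatever was actually backed up.

    # Restore the previous Kubelet config from the backup and restart the service
    CONFIG=/etc/kubernetes/kubelet/kubelet-config.json   # path assumed; confirm on your AMI
    sudo cp "${CONFIG}.bak" "${CONFIG}"
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet.service
    sudo systemctl status kubelet -l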

References:

  1. Kubelet CLI Reference

  2. Kubelet Authentication

  3. Kubelet Config API