Profile Applicability:

  • Level 1

Description:

Ensure that the permissions of the controller-manager.conf file on the node running the Kubernetes controller manager are set to 600 or more restrictive. This ensures that only the file's owner (typically root, or another authorized user) can read or modify the file, preventing unauthorized access to sensitive configuration data.

Rationale:

The controller-manager.conf file typically contains sensitive configuration details related to the controller manager, such as API server credentials and settings. By restricting the file permissions to 600 or more restrictive, you limit access to this file, ensuring that only authorized users can view or modify its contents, thereby improving the security posture of your Kubernetes cluster.

Impact:

Pros:

  • Enhances security by restricting access to sensitive configuration files.

  • Helps ensure that only authorized users (root or administrator) can access or modify the controller manager’s configuration.

Cons:

  • If permissions are incorrectly set or misconfigured, it may result in operational issues, such as the controller manager failing to read its configuration file.

  • Requires correct management of file permissions to avoid accidental exposure.

Default Value:

Depending on how the cluster was provisioned, the controller-manager.conf file may be created with permissions less restrictive than 600. Verify the permissions and, if necessary, manually set them to 600 or more restrictive for enhanced security.

Pre-Requisites:

  • Access to the Kubernetes controller manager’s configuration file (controller-manager.conf).

  • Sufficient privileges (root or administrator access) to modify file permissions.

Test Plan:

Using AWS Console:

  1. Sign in to the AWS Management Console.

  2. Navigate to the EC2 instances running the Kubernetes controller manager.

  3. SSH into the node running the controller manager.

  4. Check the permissions of the controller-manager.conf file:

ls -l /etc/kubernetes/controller-manager.conf

  5. Ensure that the file permissions are set to 600 or more restrictive. If the permissions are not set correctly, update them as needed.
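The ls -l output shows symbolic permissions, which are easy to misread in an audit. As a minimal sketch (assuming GNU coreutils stat with the -c format flag), the check can be made numeric: "600 or more restrictive" means at most owner read/write, with no group, other, or execute bits set. The throwaway file below stands in for the real path.

```shell
# Sketch of an automated permission check (GNU coreutils `stat -c` assumed).
check_mode() {
  mode=$(stat -c %a "$1")
  # Interpret the mode as octal and confirm group/other/execute bits are clear.
  if [ $(( 0$mode & 0177 )) -eq 0 ]; then
    echo "OK: $1 has mode $mode"
  else
    echo "FAIL: $1 has mode $mode (expected 600 or more restrictive)"
  fi
}

# Demonstration on a throwaway file; on a real node you would pass
# /etc/kubernetes/controller-manager.conf instead.
tmp=$(mktemp)
chmod 644 "$tmp"; check_mode "$tmp"   # flagged: group and others can read
chmod 600 "$tmp"; check_mode "$tmp"   # passes
rm -f "$tmp"
```

Note that a mode such as 400 also passes this check, since it is more restrictive than 600.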

Using AWS CLI:

  1. SSH into the node running the Kubernetes controller manager.

  2. Check the permissions of the controller-manager.conf file:

ls -l /etc/kubernetes/controller-manager.conf

  3. If the permissions are less restrictive than 600, update them to 600:

sudo chmod 600 /etc/kubernetes/controller-manager.conf

Implementation Plan:

Using AWS Console:

  1. Sign in to the AWS Management Console and locate the EC2 instance where the controller manager is running.

  2. SSH into the node running the controller manager.

  3. Check the current permissions of the controller-manager.conf file:

ls -l /etc/kubernetes/controller-manager.conf

  4. If the permissions are less restrictive than 600, update them:

sudo chmod 600 /etc/kubernetes/controller-manager.conf

  5. Verify the updated file permissions:

ls -l /etc/kubernetes/controller-manager.conf

Using AWS CLI:

  1. SSH into the node where the Kubernetes controller manager is running.

  2. Check the file permissions for controller-manager.conf:

ls -l /etc/kubernetes/controller-manager.conf

  3. If the permissions are less restrictive than 600, set them using the following command:

sudo chmod 600 /etc/kubernetes/controller-manager.conf

  4. Verify that the file permissions are set to 600:

ls -l /etc/kubernetes/controller-manager.conf
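For repeated or automated runs (for example, from a configuration-management tool), the remediation can be made idempotent: chmod only when the mode is not already 600, so a compliant file is never touched. This is a sketch, not the benchmark's prescribed method; the throwaway file stands in for /etc/kubernetes/controller-manager.conf, which would require sudo.

```shell
# Sketch of an idempotent remediation step (GNU coreutils `stat -c` assumed).
fix_perms() {
  f="$1"
  if [ "$(stat -c %a "$f")" != "600" ]; then
    chmod 600 "$f"
    echo "remediated: $f set to 600"
  else
    echo "compliant: $f already 600"
  fi
}

# Demonstration on a throwaway file standing in for controller-manager.conf.
tmp=$(mktemp)
chmod 644 "$tmp"
fix_perms "$tmp"   # tightens 644 to 600
fix_perms "$tmp"   # second run reports compliance and changes nothing
rm -f "$tmp"
```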

Backout Plan:

Using AWS Console:

  1. Sign in to the AWS Console and locate the EC2 instance running the Kubernetes controller manager.

  2. SSH into the node running the controller manager.

  3. If needed, revert the permissions to a less restrictive setting, such as 644:

sudo chmod 644 /etc/kubernetes/controller-manager.conf

  4. Verify the permissions:

ls -l /etc/kubernetes/controller-manager.conf

Using AWS CLI:

  1. SSH into the node where the controller manager is running.

  2. Revert the file permissions to the previous setting if needed:

sudo chmod 644 /etc/kubernetes/controller-manager.conf

  3. Verify the permissions:

ls -l /etc/kubernetes/controller-manager.conf
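The backout above assumes the previous mode was 644, which may not be true on every node. A safer variant, sketched below on a throwaway stand-in file, records the original mode before hardening so the backout restores the exact previous setting (GNU coreutils stat -c assumed).

```shell
# Sketch of a record-and-restore backout.
tmp=$(mktemp)                 # stand-in for controller-manager.conf
chmod 640 "$tmp"              # pretend the pre-hardening mode was 640
orig=$(stat -c %a "$tmp")     # record the original mode before hardening
chmod 600 "$tmp"              # apply the remediation

# ...later, to back out:
chmod "$orig" "$tmp"          # restore the recorded mode
stat -c %a "$tmp"             # prints 640, the recorded original mode
rm -f "$tmp"
```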

References: