Profile Applicability:

  • Level 1

Description:

LLM Jacking (Large Language Model Jacking) refers to the unauthorized use or malicious manipulation of AI-driven systems and services, including language models, so that they are coerced or tricked into producing undesirable, incorrect, or malicious outcomes. In the context of AWS CloudTrail, LLM Jacking threats arise when an attacker mines CloudTrail logs or APIs for insight into system misconfigurations, identity mismanagement, or overly broad access permissions, and then uses that insight to escalate privileges or tamper with services.

CloudTrail records every API call made within your AWS environment, including interactions with LLMs through services such as Amazon Comprehend, SageMaker, and other machine learning services. Ensuring CloudTrail logs are properly monitored and analyzed is critical for identifying potential LLM Jacking activity, in which attackers misuse these services or attempt to circumvent security controls.

This SOP ensures that CloudTrail is configured to log and monitor potential LLM Jacking threats, so that actions related to ML models and AI services are captured, secured, and subject to auditing.

Rationale:

By securing CloudTrail logs and actively monitoring for any potential LLM Jacking threats, you are ensuring:

  • Detection of Malicious Activities: Identifying attempts to manipulate or exploit machine learning models, either through misuse of API calls or tampering with configuration settings.

  • Secure Model Management: Protecting your AI/ML models (e.g., training data, access permissions) from unauthorized access or modification.

  • Auditability and Compliance: Enhancing the ability to audit ML model actions and comply with security standards such as SOC 2, HIPAA, or PCI-DSS, which often require traceable logs for AI/ML-related actions.

Impact:

Pros:

  • Enhanced Security: Protects against potential attacks or abuse of AI/ML systems through unauthorized access or manipulation.

  • Improved Incident Response: Ensures that suspicious actions are logged, making it easier for security teams to investigate and respond.

  • Compliance: Helps meet regulatory requirements by providing logs of AI/ML model usage and access, which is crucial for auditing purposes.

Cons:

  • Increased Complexity: Requires continuous monitoring and configuration of CloudTrail logs to detect potential threats, which adds complexity to your security operations.

  • False Positives: Monitoring for potential LLM Jacking could generate false positives in cases where legitimate API calls are made to ML services.

Default Value:

By default, CloudTrail logs all API calls, but it may not be configured to specifically detect or report on activities related to machine learning models. Therefore, it must be manually configured to monitor actions related to AI/ML services, including SageMaker, Comprehend, and other services that use LLMs.

Pre-requisite:

  • AWS IAM Permissions:

    • cloudtrail:DescribeTrails

    • cloudtrail:LookupEvents

    • cloudtrail:GetTrailStatus

    • cloudtrail:StartLogging

    • sagemaker:DescribeEndpoint

    • comprehend:DetectSentiment

  • AWS CLI installed and configured.

  • Basic knowledge of AWS CloudTrail, IAM permissions, and AI/ML services such as SageMaker and Comprehend.
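The IAM permissions listed above can be gathered into a single policy document. The sketch below is illustrative, not part of this SOP: the policy name, file path, and the blanket `"Resource": "*"` are assumptions to tighten for your environment.

```shell
# Write the audit permissions above into a policy document.
# Policy name and file path are illustrative placeholders.
cat > /tmp/cloudtrail-llm-audit-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudTrailLLMAudit",
      "Effect": "Allow",
      "Action": [
        "cloudtrail:DescribeTrails",
        "cloudtrail:LookupEvents",
        "cloudtrail:GetTrailStatus",
        "cloudtrail:StartLogging",
        "sagemaker:DescribeEndpoint",
        "comprehend:DetectSentiment"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Validate the document before attaching it (requires python3).
python3 -m json.tool /tmp/cloudtrail-llm-audit-policy.json >/dev/null && echo "policy JSON valid"

# To create the policy in your account (requires credentials):
# aws iam create-policy --policy-name CloudTrailLLMAudit \
#   --policy-document file:///tmp/cloudtrail-llm-audit-policy.json
```

Scope `Resource` down to the specific trail and endpoint ARNs wherever possible.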

Remediation:

Test Plan:

Using AWS Console:

  1. Sign in to the AWS Management Console.

  2. Navigate to CloudTrail under Services.

  3. In the CloudTrail Dashboard, go to Trails and select the trail you want to review.

  4. Review the configuration and ensure that CloudTrail logs all relevant events, including those related to machine learning models like SageMaker, Comprehend, and any other LLM-based services.

  5. Ensure that the S3 bucket used for storing logs has the appropriate access controls and is not publicly accessible.

  6. In the Event History, search for events related to ML model actions:

    • Look for API calls such as CreateEndpoint, CreateTrainingJob, and DetectSentiment.

    • Ensure that these events are being logged and monitored properly.

Using AWS CLI:

To review CloudTrail events for ML models, run:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=CreateEndpoint

Review the event logs for any suspicious or unauthorized actions related to machine learning services (e.g., unauthorized changes to SageMaker endpoints or other LLM-related actions).
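The single lookup-events call above can be repeated across the ML-related API calls of interest. This is a sketch: the event-name list is illustrative, and `--max-results`/`--query` values are assumptions to adjust.

```shell
# Query CloudTrail for several ML-related API calls in one pass.
# The event-name list is illustrative; extend it for the services you run.
printf '%s\n' CreateEndpoint CreateTrainingJob DetectSentiment > /tmp/llm-event-names.txt

while read -r EVENT; do
  echo "--- ${EVENT} ---"
  # Requires AWS credentials; '|| true' lets a dry run finish without them.
  aws cloudtrail lookup-events \
    --lookup-attributes "AttributeKey=EventName,AttributeValue=${EVENT}" \
    --max-results 20 \
    --query 'Events[*].[EventTime,Username,EventName]' \
    --output table || true
done < /tmp/llm-event-names.txt
```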

To check if CloudTrail is logging events for the relevant ML services, run:

aws cloudtrail describe-trails --query 'trailList[*].TrailARN'

Ensure the S3 bucket storing the logs is not publicly accessible and has proper IAM policies in place to restrict who can access or modify the logs.
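The public-access check above can be performed from the CLI. A minimal sketch, assuming the bucket name is a placeholder you replace with your log bucket:

```shell
# Confirm the CloudTrail log bucket is not publicly accessible.
# BUCKET is a placeholder for your log bucket's name.
BUCKET="my-cloudtrail-logs-bucket"

# Every field returned by get-public-access-block should be true:
cat > /tmp/expected-public-access-block.json <<'EOF'
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
EOF

# Requires credentials; '|| true' tolerates a dry run without them.
aws s3api get-public-access-block --bucket "${BUCKET}" \
  --query 'PublicAccessBlockConfiguration' || true
# "IsPublic": false means no bucket policy grants public access.
aws s3api get-bucket-policy-status --bucket "${BUCKET}" \
  --query 'PolicyStatus.IsPublic' || true
```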

Implementation Steps:

Using AWS Console:

  1. Sign in to the AWS Management Console and navigate to CloudTrail.

  2. Ensure CloudTrail is enabled for all regions where ML services are used.

  3. Go to Trails, and select the trail to review or create a new trail.

  4. Ensure that management events and data events related to machine learning services (e.g., SageMaker, Comprehend) are enabled for logging.

  5. Ensure that the S3 bucket used for storing CloudTrail logs has restricted access, allowing only authorized users and roles to access the logs.

  6. Enable CloudWatch Logs integration to monitor for abnormal or suspicious activities related to LLM actions.
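Step 6 above can also be done from the CLI. This is a sketch: the trail name, log-group ARN, and role ARN are placeholders, and the role must already exist with permission to write to CloudWatch Logs.

```shell
# Wire the trail into CloudWatch Logs (step 6 above). All names and
# ARNs are placeholders; the role must grant logs:CreateLogStream and
# logs:PutLogEvents to CloudTrail.
TRAIL_NAME="my-llm-audit-trail"
LOG_GROUP_ARN="arn:aws:logs:us-east-1:111122223333:log-group:cloudtrail/llm-audit:*"
ROLE_ARN="arn:aws:iam::111122223333:role/CloudTrail_CloudWatchLogs_Role"

# The command is written to a file for review; run it once the log
# group and role exist in your account.
cat > /tmp/enable-cwlogs.sh <<EOF
aws cloudtrail update-trail --name ${TRAIL_NAME} \\
  --cloud-watch-logs-log-group-arn ${LOG_GROUP_ARN} \\
  --cloud-watch-logs-role-arn ${ROLE_ARN}
EOF
echo "wrote /tmp/enable-cwlogs.sh"
```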

Using AWS CLI:

To enable CloudTrail to log ML-related actions, first make the trail multi-region, then attach event selectors for the ML resources. Note that update-trail does not accept event selectors; they are set separately with put-event-selectors:

aws cloudtrail update-trail --name <trail-name> --is-multi-region-trail

aws cloudtrail put-event-selectors --trail-name <trail-name> --advanced-event-selectors '[{"Name": "SageMakerEndpointDataEvents", "FieldSelectors": [{"Field": "eventCategory", "Equals": ["Data"]}, {"Field": "resources.type", "Equals": ["AWS::SageMaker::Endpoint"]}]}]'

Verify that the trail is capturing the intended SageMaker, Comprehend, or other ML actions by reviewing its event selectors:

aws cloudtrail get-event-selectors --trail-name <trail-name>

Ensure proper IAM permissions are applied to allow authorized users to access CloudTrail logs, while ensuring access is restricted to avoid privilege escalation.
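One way to restrict log access is the standard CloudTrail bucket policy, which allows only the CloudTrail service principal to write log objects. The sketch below uses placeholder bucket name and account ID; add explicit deny statements for other principals as your environment requires.

```shell
# The standard CloudTrail bucket policy: only the CloudTrail service
# may check the ACL and write logs. Bucket name and account ID are
# placeholders; no other principals should receive s3:PutObject here.
cat > /tmp/cloudtrail-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-cloudtrail-logs-bucket"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-cloudtrail-logs-bucket/AWSLogs/111122223333/*",
      "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
    }
  ]
}
EOF
python3 -m json.tool /tmp/cloudtrail-bucket-policy.json >/dev/null && echo "bucket policy JSON valid"

# Apply (requires credentials):
# aws s3api put-bucket-policy --bucket my-cloudtrail-logs-bucket \
#   --policy file:///tmp/cloudtrail-bucket-policy.json
```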

Backout Plan:

If monitoring for potential LLM Jacking causes operational issues (e.g., unwanted alerts or high data costs):

  1. Identify the affected CloudTrail trail.

  2. Revert the changes to the IAM policies or CloudTrail configurations that are generating excessive alerts.

  3. If needed, disable CloudWatch Logs integration or adjust the level of logging for ML events.
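Step 3 of the backout can be sketched as reverting the trail to management events only, dropping the ML data-event selectors. The trail name is a placeholder:

```shell
# Backout sketch: keep only management events on the trail.
TRAIL_NAME="my-llm-audit-trail"

cat > /tmp/mgmt-only-selectors.json <<'EOF'
[
  {
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": []
  }
]
EOF
python3 -m json.tool /tmp/mgmt-only-selectors.json >/dev/null && echo "selectors JSON valid"

# Apply (requires credentials):
# aws cloudtrail put-event-selectors --trail-name "${TRAIL_NAME}" \
#   --event-selectors file:///tmp/mgmt-only-selectors.json
```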

Note:

  • Alerting: Consider setting up CloudWatch Alarms to notify security teams when suspicious activity is detected in CloudTrail logs related to LLM manipulation or unexpected API calls to ML services.

  • Audit Logs: Ensure the S3 bucket storing CloudTrail logs is encrypted to prevent unauthorized access to sensitive event data.
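The alerting note above can be sketched as a CloudWatch Logs metric filter plus alarm over the trail's log group. Log group, metric namespace, threshold, and SNS topic are all placeholder assumptions to tune for your environment.

```shell
# A metric filter counting API calls to ML services recorded by the
# trail, with an alarm on unusual volume. All names are placeholders.
LOG_GROUP="cloudtrail/llm-audit"
FILTER_PATTERN='{ ($.eventSource = "sagemaker.amazonaws.com") || ($.eventSource = "comprehend.amazonaws.com") }'
printf '%s\n' "${FILTER_PATTERN}" > /tmp/llm-filter-pattern.txt
echo "filter pattern saved to /tmp/llm-filter-pattern.txt"

# Apply (requires credentials and the CloudWatch Logs integration above):
# aws logs put-metric-filter --log-group-name "${LOG_GROUP}" \
#   --filter-name LLMServiceCalls --filter-pattern "${FILTER_PATTERN}" \
#   --metric-transformations metricName=LLMServiceCalls,metricNamespace=Security,metricValue=1
# aws cloudwatch put-metric-alarm --alarm-name LLMServiceCallSpike \
#   --namespace Security --metric-name LLMServiceCalls --statistic Sum \
#   --period 300 --evaluation-periods 1 --threshold 50 \
#   --comparison-operator GreaterThanThreshold \
#   --alarm-actions arn:aws:sns:us-east-1:111122223333:security-alerts
```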

References:

CIS Controls Mapping:

Version | Control ID | Control Description | IG1 | IG2 | IG3
v8 | 3.4 | Encrypt Data on End-User Devices – Ensure data encryption during file system access. | | |
v8 | 6.7 | Implement Application Layer Filtering and Content Control – Ensure appropriate content filtering is applied to sensitive files. | | |
v8 | 6.8 | Define and Maintain Role-Based Access Control – Implement and manage role-based access for file systems. | | |
v8 | 14.6 | Protect Information Through Access Control Lists – Apply strict access control to file systems. | | |