Profile Applicability:

  • Level 2

Description:

Amazon Bedrock is a fully managed service that enables users to build and scale generative AI applications. One of the security features in Amazon Bedrock is the Guardrails prompt attack filter, which protects against attacks targeting the model's prompt inputs, such as prompt injection and jailbreak attempts. The filter strength can be set to NONE, LOW, MEDIUM, or HIGH; the HIGH setting applies the most stringent filtering criteria. This SOP ensures that the prompt attack filter is configured at the highest strength (HIGH) for maximum protection against prompt-based attacks.

Rationale:

  • Security: The highest strength setting for the Prompt Attack Filter ensures that malicious or adversarial inputs are effectively blocked. This prevents prompt manipulation attempts that could result in unexpected model behavior or security vulnerabilities.

  • Protection Against Manipulation: Prompt injections are a known vulnerability for generative AI models, and the highest strength filter significantly reduces the risk of exploitation.

  • Compliance: In high-security environments, such as those requiring HIPAA or PCI-DSS compliance, enabling the highest strength filter helps meet the stringent security standards for data protection.

  • Operational Integrity: Applying the highest strength ensures the AI model behaves as expected and reduces the risk of unpredictable outputs, ensuring operational safety.

Impact:

Pros:

  • Robust Security: Ensures strong protection against prompt-based attacks, securing the AI model’s behavior.

  • Improved Model Integrity: By filtering out malicious input, the AI model is less likely to generate harmful or unintended content.

  • Compliance Support: Meets compliance requirements for AI safety in regulated industries.

Cons:

  • Potential False Positives: The highest strength filter may occasionally block legitimate inputs, which might require reviewing input validation processes.

  • Performance Overhead: The additional filtering might slightly affect the latency of responses, though this impact is typically minimal.

Default Value:

By default, a guardrail in Amazon Bedrock is not guaranteed to have the prompt attack filter at the highest strength: the filter applies only if it has been configured, and it may have been created at a lower strength. The strength should be reviewed and raised to HIGH according to the security needs of the organization.

Pre-requisite:

  • AWS IAM Permissions:

    • bedrock:ListGuardrails

    • bedrock:GetGuardrail

    • bedrock:UpdateGuardrail

  • AWS CLI installed and configured.

  • Access to Amazon Bedrock and an existing AI model with guardrails set up.
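A minimal IAM policy granting these permissions might look like the following sketch. The wildcard resource is for illustration only; in practice, scope the Resource element to the ARNs of your guardrails:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:ListGuardrails",
        "bedrock:GetGuardrail",
        "bedrock:UpdateGuardrail"
      ],
      "Resource": "*"
    }
  ]
}
```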

Test Plan:

Using AWS Console:

  1. Sign in to the AWS Management Console.

  2. Navigate to Amazon Bedrock under Services.

  3. In the Amazon Bedrock Console, go to Guardrails.

  4. Review the current guardrails settings for the Prompt Attack Filter.

  5. If the filter is not set to High, modify the settings:

    • Set the Prompt attacks filter strength to High.

  6. Save the settings and verify that the filter strength is updated to High.

  7. Test the model’s performance with different inputs to verify that legitimate inputs are not blocked, and security is maintained.

Using AWS CLI:

  1. To list the guardrails in the account, run:

    aws bedrock list-guardrails

  2. To check the prompt attack filter configuration for a guardrail, run:

    aws bedrock get-guardrail --guardrail-identifier <guardrail-id> --guardrail-version DRAFT --query 'contentPolicy.filters[?type==`PROMPT_ATTACK`]'

  3. Review the output:

    • If inputStrength is not HIGH (or no PROMPT_ATTACK filter is returned), update the configuration as described under Implementation Steps.

  4. After updating, re-run the get-guardrail command above and confirm that inputStrength is HIGH.
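The review in step 3 can be scripted. The sketch below uses a pure helper to evaluate the contentPolicy section of a get-guardrail response; the sample dict mirrors the response shape and is illustrative, not real output from an account:

```python
def prompt_attack_filter_compliant(content_policy: dict) -> bool:
    """Return True only if a PROMPT_ATTACK filter exists with inputStrength HIGH."""
    for f in content_policy.get("filters", []):
        if f.get("type") == "PROMPT_ATTACK":
            return f.get("inputStrength") == "HIGH"
    return False  # no prompt attack filter configured at all

# Example: shape mirrors the contentPolicy block of a get-guardrail response.
sample = {
    "filters": [
        {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "PROMPT_ATTACK", "inputStrength": "MEDIUM", "outputStrength": "NONE"},
    ]
}
print(prompt_attack_filter_compliant(sample))  # → False: strength is MEDIUM, not HIGH
```

The helper treats a missing PROMPT_ATTACK entry as non-compliant, so a guardrail with no prompt attack filter at all is also flagged.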

Implementation Steps:

Using AWS Console:

  1. Log in to Amazon Bedrock and navigate to the Guardrails section.

  2. Select the AI model guardrail to modify the Prompt Attack Filter.

  3. Set the Prompt Attack Filter strength to highest.

  4. Save and apply the changes.

  5. Verify that the filter strength is applied and test the model behavior.

Using AWS CLI:

  1. To check the current prompt attack filter strength for an existing guardrail, run:

    aws bedrock get-guardrail --guardrail-identifier <guardrail-id> --guardrail-version DRAFT --query 'contentPolicy.filters[?type==`PROMPT_ATTACK`]'

  2. Update the prompt attack filter to the highest strength. Note that update-guardrail replaces the guardrail's existing configuration, so supply the guardrail's name, blocked-message text, and any other content filters you want to keep alongside the PROMPT_ATTACK entry:

    aws bedrock update-guardrail --guardrail-identifier <guardrail-id> --name <guardrail-name> --blocked-input-messaging "<blocked-input-message>" --blocked-outputs-messaging "<blocked-output-message>" --content-policy-config '{"filtersConfig":[{"type":"PROMPT_ATTACK","inputStrength":"HIGH","outputStrength":"NONE"}]}'

  3. Verify that the filter strength has been updated:

    aws bedrock get-guardrail --guardrail-identifier <guardrail-id> --guardrail-version DRAFT --query 'contentPolicy.filters[?type==`PROMPT_ATTACK`].inputStrength'
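Because update-guardrail replaces the whole content policy, a safe update re-submits the existing filters with only the prompt attack strength changed. The pure-Python sketch below (no AWS calls; field names follow the Guardrails filtersConfig shape) builds such a list, which can then be supplied as the filtersConfig of --content-policy-config:

```python
import copy

def raise_prompt_attack_strength(filters: list, strength: str = "HIGH") -> list:
    """Return a filtersConfig list with the PROMPT_ATTACK inputStrength set.

    The prompt attack filter applies to inputs only, so outputStrength stays
    NONE. All other filters are passed through unchanged; the input list is
    not mutated.
    """
    updated = copy.deepcopy(filters)
    for f in updated:
        if f["type"] == "PROMPT_ATTACK":
            f["inputStrength"] = strength
            f["outputStrength"] = "NONE"
            return updated
    # Filter absent: add it rather than silently doing nothing.
    updated.append({"type": "PROMPT_ATTACK", "inputStrength": strength, "outputStrength": "NONE"})
    return updated

existing = [{"type": "PROMPT_ATTACK", "inputStrength": "LOW", "outputStrength": "NONE"}]
print(raise_prompt_attack_strength(existing))
# → [{'type': 'PROMPT_ATTACK', 'inputStrength': 'HIGH', 'outputStrength': 'NONE'}]
```

Making the strength a parameter means the same helper can be reused by the Backout Plan to lower the strength again (for example, strength="LOW").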

Backout Plan:

Using AWS Console:

  1. If enabling the HIGH-strength prompt attack filter causes issues (for example, excessive false positives), sign in to the AWS Management Console.

  2. Navigate to Amazon Bedrock and go to the Guardrails section.

  3. Modify the prompt attack filter settings and set the strength to a lower level (Medium or Low) or disable the filter.

  4. Save the changes and verify that the filter is no longer set to High.

Using AWS CLI:

  1. To revert the prompt attack filter to a lower strength, re-run update-guardrail with a lower inputStrength (the same caveat applies: the command replaces the guardrail's existing configuration, so include any other filters you want to keep):

    aws bedrock update-guardrail --guardrail-identifier <guardrail-id> --name <guardrail-name> --blocked-input-messaging "<blocked-input-message>" --blocked-outputs-messaging "<blocked-output-message>" --content-policy-config '{"filtersConfig":[{"type":"PROMPT_ATTACK","inputStrength":"LOW","outputStrength":"NONE"}]}'

  2. Verify the changes have been applied:

    aws bedrock get-guardrail --guardrail-identifier <guardrail-id> --guardrail-version DRAFT --query 'contentPolicy.filters[?type==`PROMPT_ATTACK`]'

References:

CIS Controls Mapping:

  Version | Control ID | Control Description                                                                                                             | IG1 | IG2 | IG3
  v8      | 3.4        | Encrypt Data on End-User Devices – Ensure data encryption during file system access.                                            |     |     |
  v8      | 6.7        | Implement Application Layer Filtering and Content Control – Ensure appropriate content filtering is applied to sensitive files. |     |     |
  v8      | 6.8        | Define and Maintain Role-Based Access Control – Implement and manage role-based access for file systems.                        |     |     |
  v8      | 14.6       | Protect Information Through Access Control Lists – Apply strict access control to file systems.                                 |     |     |