langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety

class langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]

Class to handle prompt safety moderation.

Methods

__init__(client[, callback, unique_id, chain_id])

validate(prompt_value[, config])
  Check and validate the safety of the given prompt text.

Parameters
  • client (Any) – The boto3 Amazon Comprehend client used to run the prompt safety check.

  • callback (Optional[Any]) –

  • unique_id (Optional[str]) –

  • chain_id (Optional[str]) –

__init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]
Parameters
  • client (Any) – The boto3 Amazon Comprehend client used to run the prompt safety check.

  • callback (Optional[Any]) –

  • unique_id (Optional[str]) –

  • chain_id (Optional[str]) –

Return type

None
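
A minimal construction sketch, assuming the client is a boto3 Amazon Comprehend client (the region name is illustrative); the optional callback, unique_id, and chain_id arguments are left at their None defaults:

import boto3

from langchain_experimental.comprehend_moderation.prompt_safety import (
    ComprehendPromptSafety,
)

# The boto3 Comprehend client that validate() will use for the safety check.
comprehend_client = boto3.client("comprehend", region_name="us-east-1")

# callback, unique_id, and chain_id are optional and default to None.
prompt_safety = ComprehendPromptSafety(client=comprehend_client)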

validate(prompt_value: str, config: Optional[Any] = None) → str[source]

Check and validate the safety of the given prompt text.

Parameters
  • prompt_value (str) – The input text to be checked for unsafe text.

  • config (Optional[Dict[str, Any]]) – Configuration settings for the prompt safety check, e.g. the "threshold" score above which the prompt is rejected.

Raises
  • ValueError – If the prompt text is classified as unsafe with a score above the specified threshold.

Returns

The input prompt_value.

Return type

str

Note

This function checks the safety of the provided prompt text using Comprehend’s classify_document API and raises an error if unsafe text is detected with a score above the specified threshold.

Example

import boto3

comprehend_client = boto3.client("comprehend")
prompt_safety = ComprehendPromptSafety(client=comprehend_client)

prompt_text = "Please tell me your credit card information."
config = {"threshold": 0.7}
checked_prompt = prompt_safety.validate(prompt_text, config)
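
Continuing from the example above, a hedged sketch of handling a rejected prompt via the ValueError documented for validate:

try:
    # Returns prompt_text unchanged when Comprehend scores it as safe.
    checked_prompt = prompt_safety.validate(prompt_text, config)
except ValueError:
    # Documented above: raised when the unsafe score exceeds the threshold.
    checked_prompt = None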