langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity¶

class langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶

Class to handle toxicity moderation.
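A minimal construction sketch (not part of the reference itself), assuming a boto3 Comprehend client; the region name and client setup are illustrative assumptions.

import boto3

from langchain_experimental.comprehend_moderation.toxicity import ComprehendToxicity

# Assumed setup: a boto3 Comprehend client; region and credentials are illustrative.
comprehend_client = boto3.client("comprehend", region_name="us-east-1")

toxicity = ComprehendToxicity(client=comprehend_client)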

Methods

__init__(client[, callback, unique_id, chain_id])

validate(prompt_value[, config])

Check the toxicity of a given text prompt using the AWS Comprehend service and apply actions based on the configuration.

Parameters
  • client (Any) –

  • callback (Optional[Any]) –

  • unique_id (Optional[str]) –

  • chain_id (Optional[str]) –

__init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶
Parameters
  • client (Any) –

  • callback (Optional[Any]) –

  • unique_id (Optional[str]) –

  • chain_id (Optional[str]) –

Return type

None

validate(prompt_value: str, config: Optional[Any] = None) → str[source]¶

Check the toxicity of a given text prompt using the AWS Comprehend service and apply actions based on the configuration.

Returns

The original prompt_value if allowed or no toxicity found.

Return type

str

Raises
  • ValueError – If the prompt contains toxic labels and cannot be processed based on the configuration.

Parameters
  • prompt_value (str) – The text content to be checked for toxicity.

  • config (Optional[Any]) – Configuration for toxicity checks and actions.
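
A usage sketch for validate, continuing from the construction example above; the prompt text is illustrative and config is left at its default of None.

try:
    checked = toxicity.validate("Some user-supplied text to screen.")
    # validate() returns the original prompt_value when no toxicity is found
    # or when the configured action allows it through.
    print(checked)
except ValueError:
    # Raised when toxic labels are detected and the prompt cannot be
    # processed based on the configuration.
    print("Prompt blocked by toxicity moderation.")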