langchain_core.language_models.llms.aget_prompts

async langchain_core.language_models.llms.aget_prompts(params: Dict[str, Any], prompts: List[str], cache: Optional[Union[BaseCache, bool]] = None) → Tuple[Dict[int, List], str, List[int], List[str]][source]

Get prompts that are already cached. Async version.

Parameters
  • params (Dict[str, Any]) – Dictionary of parameters.

  • prompts (List[str]) – List of prompts.

  • cache (Optional[Union[BaseCache, bool]]) – Cache object. Default is None.

Returns

A tuple of existing prompts, llm_string, missing prompt indexes, and missing prompts.

Return type

Tuple[Dict[int, List], str, List[int], List[str]]