langchain_core.language_models.llms.get_prompts

langchain_core.language_models.llms.get_prompts(params: Dict[str, Any], prompts: List[str], cache: Optional[Union[BaseCache, bool]] = None) → Tuple[Dict[int, List], str, List[int], List[str]]

Get prompts that are already cached.

Parameters
  • params (Dict[str, Any]) – Dictionary of parameters.

  • prompts (List[str]) – List of prompts.

  • cache (Optional[Union[BaseCache, bool]]) – Cache to look prompts up in, or True to require the globally configured cache, or False to disable lookups. Default is None (use the global cache if one is set).

Returns

A tuple of existing prompts, llm_string, missing prompt indexes, and missing prompts.

Raises

ValueError – If cache is True but no global cache has been configured.

Return type

Tuple[Dict[int, List], str, List[int], List[str]]
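
Example

A minimal usage sketch (not from the official docs): it assumes langchain_core is installed and uses InMemoryCache and Generation from langchain_core.caches and langchain_core.outputs. The params and prompts values are illustrative.

    from langchain_core.caches import InMemoryCache
    from langchain_core.language_models.llms import get_prompts
    from langchain_core.outputs import Generation

    cache = InMemoryCache()
    params = {"model_name": "my-model", "temperature": 0.0}  # illustrative params
    prompts = ["What is 2 + 2?", "Name a prime number."]

    existing, llm_string, missing_idxs, missing = get_prompts(params, prompts, cache=cache)

    # With an empty cache, every prompt is a miss.
    assert existing == {}
    assert missing_idxs == [0, 1]
    assert missing == prompts

    # Populate the cache for the first prompt; it becomes a hit on the next call.
    cache.update(prompts[0], llm_string, [Generation(text="4")])
    existing, llm_string, missing_idxs, missing = get_prompts(params, prompts, cache=cache)
    assert 0 in existing
    assert missing == [prompts[1]]

    # Note: passing cache=True raises ValueError when no global cache is configured.

The llm_string returned is derived from the sorted params items, so calls with the same params probe the same cache namespace.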