langchain_core.language_models.llms.get_prompts
- langchain_core.language_models.llms.get_prompts(params: Dict[str, Any], prompts: List[str], cache: Optional[Union[BaseCache, bool]] = None) → Tuple[Dict[int, List], str, List[int], List[str]]
Get prompts that are already cached.
- Parameters
params (Dict[str, Any]) – Dictionary of model parameters.
prompts (List[str]) – List of prompts to look up.
cache (Optional[Union[BaseCache, bool]]) – Cache object. Default is None.
- Returns
A tuple of the cached results keyed by prompt index, the llm_string used as the cache key, the indexes of prompts missing from the cache, and the missing prompts themselves.
- Raises
ValueError – If cache is True but no global cache has been configured.
- Return type
Tuple[Dict[int, List], str, List[int], List[str]]
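A minimal usage sketch follows, assuming an in-memory global cache; the params and prompts values are illustrative only. On a cold cache every prompt is reported missing, so a caller would generate those completions and write them back before future lookups can hit.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.language_models.llms import get_prompts

# Configure a global cache so lookups have somewhere to go.
set_llm_cache(InMemoryCache())

# Illustrative model parameters and prompts (not from this reference).
params = {"model_name": "example-model", "temperature": 0.0}
prompts = ["What is 2 + 2?", "Name a prime number."]

existing, llm_string, missing_idxs, missing = get_prompts(params, prompts)
# On a cold cache: existing == {}, missing_idxs == [0, 1], and
# missing == prompts. llm_string is the serialized params dict that
# forms part of the cache key.
```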