langchain_core.language_models.llms.update_cache

langchain_core.language_models.llms.update_cache(cache: Optional[Union[BaseCache, bool]], existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]

Update the cache with the newly generated results for the missing prompts and return the LLM output (the llm_output of new_results).

Parameters
  • cache (Optional[Union[BaseCache, bool]]) – Cache to write to: a BaseCache instance, or a boolean/None selecting the globally configured cache.

  • existing_prompts (Dict[int, List]) – Mapping from prompt index to cached generations; updated in place with the new results.

  • llm_string (str) – String representation of the LLM configuration, used as part of the cache key.

  • missing_prompt_idxs (List[int]) – Indices into prompts that had no cache hit and whose results are in new_results.

  • new_results (LLMResult) – Newly generated results for the missing prompts, ordered to match missing_prompt_idxs.

  • prompts (List[str]) – The full list of prompts for the call.

Return type

Optional[dict]
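
Example

A minimal sketch of how update_cache slots into the caching flow, using langchain_core's InMemoryCache. The prompt texts and the llm_string value here are illustrative placeholders; real callers derive llm_string from the model's serialized parameters.

from langchain_core.caches import InMemoryCache
from langchain_core.language_models.llms import update_cache
from langchain_core.outputs import Generation, LLMResult

cache = InMemoryCache()
prompts = ["What is 2 + 2?", "Name a prime number."]

# Neither prompt was found in the cache, so both were sent to the
# model; new_results holds the fresh generations for them.
existing_prompts: dict = {}
missing_prompt_idxs = [0, 1]
new_results = LLMResult(
    generations=[[Generation(text="4")], [Generation(text="7")]],
    llm_output={"token_usage": {}},
)

# Placeholder cache-key string (an assumption for this sketch).
llm_string = "fake-llm-config"

llm_output = update_cache(
    cache, existing_prompts, llm_string, missing_prompt_idxs, new_results, prompts
)
print(llm_output)  # -> {'token_usage': {}}

# existing_prompts was updated in place, and the cache now holds the
# generations for each previously missing prompt under (prompt, llm_string).
assert existing_prompts[0] == cache.lookup(prompts[0], llm_string)

After the call, subsequent lookups for the same (prompt, llm_string) pair hit the cache instead of the model, which is how the LLM base class avoids regenerating identical prompts.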