langchain_core.globals.get_llm_cache

langchain_core.globals.get_llm_cache() → BaseCache    [source]

Get the value of the llm_cache global setting.

Return type
    BaseCache