langchain_community.cache.RedisSemanticCache¶
- class langchain_community.cache.RedisSemanticCache(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶
Cache that uses Redis as a vector-store backend.
Initialize the semantic cache with a Redis connection URL and an embedding provider.
- Parameters
redis_url (str) – URL to connect to Redis.
embedding (Embeddings) – Embedding provider for semantic encoding and search.
score_threshold (float) – Score threshold used to decide whether a cached entry is semantically close enough to count as a hit. Defaults to 0.2.
Example:

    from langchain_community.globals import set_llm_cache
    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings

    set_llm_cache(RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings()
    ))
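Once registered as the global LLM cache, prompts that embed close to a previously seen prompt can be answered from Redis rather than by calling the model again. A minimal sketch of that behaviour, assuming a running Redis instance with the RediSearch module, an OpenAI API key, and the OpenAI LLM wrapper purely as an illustrative model choice:

    from langchain_community.llms import OpenAI

    llm = OpenAI()

    # Cache miss: the model is called and the generation is stored in Redis,
    # keyed by the embedding of the prompt.
    llm.invoke("Tell me a joke about penguins")

    # A semantically similar prompt whose embedding falls within score_threshold
    # of the stored entry can be served from the cache without a model call.
    llm.invoke("Tell me a penguin joke")

Whether the second call actually hits depends on the embedding model and the configured score_threshold.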
Attributes
DEFAULT_SCHEMA
Methods
__init__(redis_url, embedding[, score_threshold]) – Initialize the semantic cache with a Redis connection URL and an embedding provider.
aclear(**kwargs) – Asynchronously clear the cache; accepts additional keyword arguments.
alookup(prompt, llm_string) – Look up based on prompt and llm_string.
aupdate(prompt, llm_string, return_val) – Update cache based on prompt and llm_string.
clear(**kwargs) – Clear semantic cache for a given llm_string.
lookup(prompt, llm_string) – Look up based on prompt and llm_string.
update(prompt, llm_string, return_val) – Update cache based on prompt and llm_string.
- __init__(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶
Initialize the semantic cache with a Redis connection URL and an embedding provider.
- Parameters
redis_url (str) – URL to connect to Redis.
embedding (Embeddings) – Embedding provider for semantic encoding and search.
score_threshold (float) – Score threshold used to decide whether a cached entry is semantically close enough to count as a hit. Defaults to 0.2.
Example:

    from langchain_community.globals import set_llm_cache
    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings

    set_llm_cache(RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings()
    ))
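The instance can also be constructed and used directly, without registering it globally; the method examples further below assume a cache object like the following (a sketch reusing the Redis URL and embeddings from the example above, with an illustrative non-default score_threshold):

    from langchain_community.cache import RedisSemanticCache
    from langchain_community.embeddings import OpenAIEmbeddings

    cache = RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings(),
        score_threshold=0.1,  # illustrative non-default value
    )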
- async aclear(**kwargs: Any) → None¶
Asynchronously clear the cache; accepts additional keyword arguments.
- Parameters
kwargs (Any) – Additional keyword arguments passed to the cache implementation.
- Return type
None
- async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶
Look up based on prompt and llm_string.
A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).
- Parameters
prompt (str) – A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the messages sent to the language model.
llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.
- Returns
On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).
- Return type
Optional[Sequence[Generation]]
- async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶
Update cache based on prompt and llm_string.
The prompt and llm_string are used to generate a key for the cache. The key should match the one used by the lookup method.
- Parameters
prompt (str) – A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the messages sent to the language model.
llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.
return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).
- Return type
None
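Example (a minimal async sketch of calling alookup and aupdate by hand on the cache instance from the constructor sketch above; in normal use LangChain derives llm_string from the model's invocation parameters, so the value below is only an illustrative placeholder, and run_model stands in for an actual model call):

    from langchain_core.outputs import Generation

    async def cached_answer(prompt: str) -> str:
        llm_string = "model=example;temperature=0"  # placeholder; normally framework-generated
        hit = await cache.alookup(prompt, llm_string)
        if hit is not None:
            return hit[0].text  # cache hit: reuse the stored generation
        text = run_model(prompt)  # hypothetical model call on a miss
        await cache.aupdate(prompt, llm_string, [Generation(text=text)])
        return text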
- clear(**kwargs: Any) → None[source]¶
Clear semantic cache for a given llm_string.
- Parameters
kwargs (Any) – Keyword arguments; expects the llm_string identifying the model configuration whose cached entries should be cleared.
- Return type
None
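Example (a sketch; per the description above, the keyword argument selects the llm_string whose semantic index is dropped, using the same placeholder string as in the earlier sketches):

    # Remove every cached entry written for this model configuration.
    cache.clear(llm_string="model=example;temperature=0")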
- lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
- Parameters
prompt (str) – A string representation of the prompt.
llm_string (str) – A string representation of the LLM configuration.
- Return type
Optional[Sequence[Generation]]
- update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string.
- Parameters
prompt (str) – A string representation of the prompt.
llm_string (str) – A string representation of the LLM configuration.
return_val (Sequence[Generation]) – The value to be cached, a list of Generations (or subclasses).
- Return type
None
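Example (a synchronous counterpart to the async sketch above, with the same placeholder llm_string and the cache instance from the constructor sketch):

    from langchain_core.outputs import Generation

    llm_string = "model=example;temperature=0"  # placeholder
    cache.update("What is 2 + 2?", llm_string, [Generation(text="4")])

    # An identical prompt embeds at zero distance, so this lookup should hit;
    # a merely similar prompt may also hit, depending on score_threshold.
    cache.lookup("What is 2 + 2?", llm_string)
    cache.lookup("What's two plus two?", llm_string)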