langchain_community.cache.MomentoCache¶

class langchain_community.cache.MomentoCache(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶

Cache that uses Momento as a backend. See https://gomomento.com/

Instantiate a prompt cache using Momento as a backend.

Note: to instantiate the cache client passed to MomentoCache, you must have a Momento account. See https://gomomento.com/.

Parameters
  • cache_client (CacheClient) – The Momento cache client.

  • cache_name (str) – The name of the cache to use to store the data.

  • ttl (Optional[timedelta], optional) – The time to live for the cache items. Defaults to None, i.e., use the client default TTL.

  • ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist. Defaults to True.

Raises
  • ImportError – Momento python package is not installed.

  • TypeError – cache_client is not of type momento.CacheClient

  • ValueError – ttl is non-null and negative
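
For example, a minimal setup sketch, assuming the Momento Python SDK's CacheClient constructor and an API key in the MOMENTO_API_KEY environment variable, and using langchain's set_llm_cache to register the cache globally:

    from datetime import timedelta

    from momento import CacheClient, Configurations, CredentialProvider

    from langchain.globals import set_llm_cache
    from langchain_community.cache import MomentoCache

    # Build the Momento cache client. The credential provider reads the API
    # key from the MOMENTO_API_KEY environment variable (an assumption; use
    # whichever variable holds your key).
    cache_client = CacheClient(
        Configurations.Laptop.v1(),
        CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
        default_ttl=timedelta(days=1),
    )

    # Register the cache so LLM calls are cached transparently.
    set_llm_cache(MomentoCache(cache_client, cache_name="langchain"))

Because ensure_cache_exists defaults to True, the named cache is created on first use if it does not already exist.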

Methods

__init__(cache_client, cache_name, *[, ttl, ...])

Instantiate a prompt cache using Momento as a backend.

aclear(**kwargs)

Clear the cache; accepts additional keyword arguments.

alookup(prompt, llm_string)

Look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

clear(**kwargs)

Clear the cache.

from_client_params(cache_name, ttl, *[, ...])

Construct cache from CacheClient parameters.

lookup(prompt, llm_string)

Look up LLM generations in the cache by prompt and the associated model and settings.

update(prompt, llm_string, return_val)

Store LLM generations in the cache.

__init__(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶

Instantiate a prompt cache using Momento as a backend.

Note: to instantiate the cache client passed to MomentoCache, you must have a Momento account. See https://gomomento.com/.

Parameters
  • cache_client (CacheClient) – The Momento cache client.

  • cache_name (str) – The name of the cache to use to store the data.

  • ttl (Optional[timedelta], optional) – The time to live for the cache items. Defaults to None, i.e., use the client default TTL.

  • ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist. Defaults to True.

Raises
  • ImportError – Momento python package is not installed.

  • TypeError – cache_client is not of type momento.CacheClient

  • ValueError – ttl is non-null and negative

async aclear(**kwargs: Any) → None¶

Clear the cache; accepts additional keyword arguments.

Parameters

kwargs (Any) –

Return type

None

async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters
  • prompt (str) – A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt passed to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type

Optional[Sequence[Generation]]

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

Parameters
  • prompt (str) – A string representation of the prompt. In the case of a chat model, the prompt is a non-trivial serialization of the prompt passed to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type

None
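
A small round-trip sketch of the async pair (aupdate, then alookup). The llm_string below is a hypothetical stand-in for the serialized LLM configuration that LangChain would normally supply, and cache is assumed to be a MomentoCache instance:

    import asyncio

    from langchain_core.outputs import Generation

    async def roundtrip(cache):
        # Hypothetical stand-in for the serialized LLM invocation parameters.
        llm_string = "model:gpt-x|temp:0"
        await cache.aupdate("Tell me a joke.", llm_string, [Generation(text="...")])
        # Returns the cached generations on a hit, None on a miss.
        return await cache.alookup("Tell me a joke.", llm_string)

    # generations = asyncio.run(roundtrip(cache))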

clear(**kwargs: Any) → None[source]¶

Clear the cache.

Raises

SdkException – Momento service or network error

Parameters

kwargs (Any) –

Return type

None

classmethod from_client_params(cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, api_key: Optional[str] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoCache[source]¶

Construct cache from CacheClient parameters.

Parameters
  • cache_name (str) – The name of the cache to use to store the data.

  • ttl (timedelta) – The time to live for the cache items.

  • configuration (Optional[momento.config.Configuration]) – The Momento client configuration. Defaults to None.

  • api_key (Optional[str]) – The Momento API key. Defaults to None.

  • auth_token (Optional[str]) – Legacy alias for api_key, kept for backwards compatibility. Defaults to None.

  • kwargs (Any) – Additional keyword arguments passed through to the cache constructor.

Return type

MomentoCache
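
A sketch of the factory, assuming the Momento API key is resolved from the environment when neither api_key nor auth_token is passed explicitly:

    from datetime import timedelta

    from langchain_community.cache import MomentoCache

    # Builds the CacheClient internally with a default configuration; the
    # ttl given here becomes the default TTL for cached items.
    cache = MomentoCache.from_client_params("langchain", ttl=timedelta(days=1))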

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up LLM generations in the cache by prompt and the associated model and settings.

Parameters
  • prompt (str) – The prompt run through the language model.

  • llm_string (str) – The language model version and settings.

Raises

SdkException – Momento service or network error

Returns

A list of language model generations, or None on a cache miss.

Return type

Optional[RETURN_VAL_TYPE]

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Store LLM generations in the cache.

Parameters
  • prompt (str) – The prompt run through the language model.

  • llm_string (str) – The language model string.

  • return_val (RETURN_VAL_TYPE) – A list of language model generations.

Raises
  • SdkException – Momento service or network error

  • Exception – Unexpected response

Return type

None
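
Putting lookup and update together, a hedged sketch of manual cache use. Here cache is assumed to be a MomentoCache instance, and llm_string is again a hypothetical stand-in for the serialized model settings:

    from langchain_core.outputs import Generation

    prompt = "Tell me a joke."
    llm_string = "model:gpt-x|temp:0"  # hypothetical stand-in

    generations = cache.lookup(prompt, llm_string)
    if generations is None:
        # Cache miss: call the model (elided) and store the result.
        generations = [Generation(text="...")]
        cache.update(prompt, llm_string, generations)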
