langchain_community.cache.CassandraCache

class langchain_community.cache.CassandraCache(session: Optional[CassandraSession] = None, keyspace: Optional[str] = None, table_name: str = 'langchain_llm_cache', ttl_seconds: Optional[int] = None, skip_provisioning: bool = False, setup_mode: CassandraSetupMode = SetupMode.SYNC)

Cache that uses Cassandra / Astra DB as a backend.

It uses a single Cassandra table. The lookup keys, which together form the primary key, are:

  • prompt, a string

  • llm_string, a deterministic string representation of the model parameters (needed to prevent same-prompt-different-model collisions)

Initialize with a ready session and a keyspace name.

Parameters
  • session (Optional[CassandraSession]) – an open Cassandra session.

  • keyspace (Optional[str]) – the keyspace to use for storing the cache.

  • table_name (str) – the name of the Cassandra table to use as the cache.

  • ttl_seconds (Optional[int]) – time-to-live for cache entries (default: None, i.e. entries never expire).

  • skip_provisioning (bool) –

  • setup_mode (CassandraSetupMode) –
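
A minimal construction sketch (assumptions, not part of this API: a Cassandra node reachable on localhost and a pre-created keyspace named demo_ks; the integration also relies on the cassio package being installed):

    from cassandra.cluster import Cluster
    from langchain_community.cache import CassandraCache

    # Assumed: a local Cassandra node and an existing keyspace "demo_ks".
    session = Cluster(["127.0.0.1"]).connect()
    cache = CassandraCache(
        session=session,
        keyspace="demo_ks",
        table_name="langchain_llm_cache",  # the default table name, shown explicitly
        ttl_seconds=3600,  # entries expire after one hour; None keeps them forever
    )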

Methods

__init__([session, keyspace, table_name, ...])

Initialize with a ready session and a keyspace name.

aclear(**kwargs)

Clear cache.

alookup(prompt, llm_string)

Look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

clear(**kwargs)

Clear cache.

delete(prompt, llm_string)

Evict from cache if there's an entry.

delete_through_llm(prompt, llm[, stop])

A wrapper around delete with the LLM being passed.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__(session: Optional[CassandraSession] = None, keyspace: Optional[str] = None, table_name: str = 'langchain_llm_cache', ttl_seconds: Optional[int] = None, skip_provisioning: bool = False, setup_mode: CassandraSetupMode = SetupMode.SYNC)

Initialize with a ready session and a keyspace name.

Parameters
  • session (Optional[CassandraSession]) – an open Cassandra session.

  • keyspace (Optional[str]) – the keyspace to use for storing the cache.

  • table_name (str) – the name of the Cassandra table to use as the cache.

  • ttl_seconds (Optional[int]) – time-to-live for cache entries (default: None, i.e. entries never expire).

  • skip_provisioning (bool) –

  • setup_mode (CassandraSetupMode) –

async aclear(**kwargs: Any) → None

Clear the cache. This clears entries for all LLMs at once.

Parameters

kwargs (Any) –

Return type

None

async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt passed to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type

Optional[Sequence[Generation]]

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match the one produced by the lookup method.

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt passed to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type

None
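
A short async round-trip sketch with the aupdate/alookup pair, continuing the construction sketch above (the llm_string value is an arbitrary stand-in for a serialized model configuration):

    import asyncio
    from langchain_core.outputs import Generation

    async def roundtrip(cache: CassandraCache) -> None:
        # The (prompt, llm_string) pair forms the cache key.
        await cache.aupdate("What is 2+2?", "demo-llm-config", [Generation(text="4")])
        cached = await cache.alookup("What is 2+2?", "demo-llm-config")
        print(cached)  # [Generation(text='4')] -- a cache hit

    asyncio.run(roundtrip(cache))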

clear(**kwargs: Any) → None

Clear the cache. This clears entries for all LLMs at once.

Parameters

kwargs (Any) –

Return type

None

delete(prompt: str, llm_string: str) → None

Evict from cache if there’s an entry.

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

None

delete_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) → None

A wrapper around delete with the LLM being passed. If your llm.invoke(prompt) calls use a stop parameter, pass the same value here so that the derived cache key matches.

Parameters
  • prompt (str) –

  • llm (LLM) –

  • stop (Optional[List[str]]) –

Return type

None
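
A sketch of both eviction paths, continuing the round-trip sketch above (FakeListLLM is only a stand-in for any LLM):

    from langchain_community.llms.fake import FakeListLLM

    # Evict by exact key: the prompt plus the serialized model configuration.
    cache.delete("What is 2+2?", "demo-llm-config")

    # Or let the cache derive llm_string from the model itself; pass the same
    # stop list used at invocation time so the derived key matches.
    llm = FakeListLLM(responses=["4"])
    cache.delete_through_llm("What is 2+2?", llm, stop=None)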

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt passed to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type

Optional[Sequence[Generation]]

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match the one produced by the lookup method.

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt passed to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type

None
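
The synchronous pair behaves the same way; this sketch (values continuing the construction sketch above) also shows that the same prompt under a different llm_string is a miss:

    from langchain_core.outputs import Generation

    cache.update("What is 2+2?", "demo-llm-config", [Generation(text="4")])
    print(cache.lookup("What is 2+2?", "demo-llm-config"))  # [Generation(text='4')]
    print(cache.lookup("What is 2+2?", "another-llm-config"))  # None: different model config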

Examples using CassandraCache
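
A hypothetical end-to-end sketch: register the cache globally so every LLM call is routed through it (FakeListLLM stands in for a real model; cache is the instance built above):

    from langchain_core.globals import set_llm_cache
    from langchain_community.llms.fake import FakeListLLM

    set_llm_cache(cache)
    llm = FakeListLLM(responses=["Paris"])
    llm.invoke("Capital of France?")  # computed once, then written to Cassandra
    llm.invoke("Capital of France?")  # served from the cache on repeat calls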