langchain_community.cache.CassandraSemanticCache¶

class langchain_community.cache.CassandraSemanticCache(session: Optional[CassandraSession], keyspace: Optional[str], embedding: Embeddings, table_name: str = 'langchain_llm_semantic_cache', distance_metric: str = 'dot', score_threshold: float = 0.85, ttl_seconds: Optional[int] = None, skip_provisioning: bool = False, setup_mode: CassandraSetupMode = SetupMode.SYNC)[source]¶

Cache that uses Cassandra as a vector-store backend for semantic (i.e. similarity-based) lookup.

It uses a single (vector) Cassandra table and stores, in principle, cached values from several LLMs, so the LLM’s llm_string is part of the rows’ primary keys.

Similarity is measured with one of several distance metrics (default: “dot”). If you choose another metric, re-tune the default score threshold accordingly.

Initialize the cache with all relevant parameters.

Parameters
  • session (Optional[CassandraSession]) – an open Cassandra session

  • keyspace (Optional[str]) – the keyspace to use for storing the cache

  • embedding (Embeddings) – embedding provider to use for semantic encoding and search

  • table_name (str) – name of the Cassandra (vector) table to use as cache

  • distance_metric (str) – which measure to adopt for similarity searches (default: 'dot')

  • score_threshold (float) – numeric value to use as cutoff for the similarity searches

  • ttl_seconds (Optional[int]) – time-to-live for cache entries (default: None, i.e. forever)

  • skip_provisioning (bool) – if True, assume the cache table already exists and skip creating it

  • setup_mode (CassandraSetupMode) – whether the table setup is performed synchronously or asynchronously

The default score threshold is tuned to the default metric. Tune it carefully yourself if switching to another distance metric.
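
A minimal usage sketch, assuming a locally reachable Cassandra cluster, a pre-existing keyspace named demo_keyspace, and OpenAIEmbeddings as the embedding provider (any Embeddings implementation works):

    from cassandra.cluster import Cluster
    from langchain_community.cache import CassandraSemanticCache
    from langchain_core.globals import set_llm_cache
    from langchain_openai import OpenAIEmbeddings

    # Open a Cassandra session (contact point and keyspace are assumptions).
    session = Cluster(["127.0.0.1"]).connect()

    semantic_cache = CassandraSemanticCache(
        session=session,
        keyspace="demo_keyspace",
        embedding=OpenAIEmbeddings(),
        table_name="langchain_llm_semantic_cache",
        score_threshold=0.85,
    )

    # Register the cache globally: LLM calls whose prompts are semantically
    # similar to an earlier cached prompt are then served from the cache.
    set_llm_cache(semantic_cache)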

Methods

__init__(session, keyspace, embedding[, ...])

Initialize the cache with all relevant parameters.

aclear(**kwargs)

Clear the whole semantic cache.

adelete_by_document_id(document_id)

Since this is a "similarity search" cache, a sensible invalidation pattern is to first look up an entry to get its ID and then delete it by that ID.

alookup(prompt, llm_string)

Look up based on prompt and llm_string.

alookup_with_id(prompt, llm_string)

Look up based on prompt and llm_string.

alookup_with_id_through_llm(prompt, llm[, stop])

Look up based on prompt, deriving the llm_string from the given LLM; on a hit, return (document_id, cached_entry).

aupdate(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

clear(**kwargs)

Clear the whole semantic cache.

delete_by_document_id(document_id)

Since this is a "similarity search" cache, a sensible invalidation pattern is to first look up an entry to get its ID and then delete it by that ID.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

lookup_with_id(prompt, llm_string)

Look up based on prompt and llm_string.

lookup_with_id_through_llm(prompt, llm[, stop])

Look up based on prompt, deriving the llm_string from the given LLM; on a hit, return (document_id, cached_entry).

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__(session: Optional[CassandraSession], keyspace: Optional[str], embedding: Embeddings, table_name: str = 'langchain_llm_semantic_cache', distance_metric: str = 'dot', score_threshold: float = 0.85, ttl_seconds: Optional[int] = None, skip_provisioning: bool = False, setup_mode: CassandraSetupMode = SetupMode.SYNC)[source]¶

Initialize the cache with all relevant parameters.

Parameters
  • session (Optional[CassandraSession]) – an open Cassandra session

  • keyspace (Optional[str]) – the keyspace to use for storing the cache

  • embedding (Embeddings) – embedding provider to use for semantic encoding and search

  • table_name (str) – name of the Cassandra (vector) table to use as cache

  • distance_metric (str) – which measure to adopt for similarity searches (default: 'dot')

  • score_threshold (float) – numeric value to use as cutoff for the similarity searches

  • ttl_seconds (Optional[int]) – time-to-live for cache entries (default: None, i.e. forever)

  • skip_provisioning (bool) – if True, assume the cache table already exists and skip creating it

  • setup_mode (CassandraSetupMode) – whether the table setup is performed synchronously or asynchronously

The default score threshold is tuned to the default metric. Tune it carefully yourself if switching to another distance metric.
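
For async applications, table provisioning can be deferred with setup_mode=SetupMode.ASYNC. A minimal sketch, assuming SetupMode is importable from langchain_community.utilities.cassandra and reusing the session, keyspace and embedding provider from the sketch above:

    from cassandra.cluster import Cluster
    from langchain_community.cache import CassandraSemanticCache
    from langchain_community.utilities.cassandra import SetupMode
    from langchain_openai import OpenAIEmbeddings

    session = Cluster(["127.0.0.1"]).connect()

    # With SetupMode.ASYNC, schema setup is performed asynchronously rather
    # than blocking the constructor (an assumption based on the parameter name).
    semantic_cache = CassandraSemanticCache(
        session=session,
        keyspace="demo_keyspace",
        embedding=OpenAIEmbeddings(),
        setup_mode=SetupMode.ASYNC,
    )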

async aclear(**kwargs: Any) → None[source]¶

Clear the whole semantic cache.

Parameters

kwargs (Any) –

Return type

None

async adelete_by_document_id(document_id: str) → None[source]¶

Since this is a “similarity search” cache, a sensible invalidation pattern is to first look up an entry to get its ID and then delete it by that ID. This method performs the second step.

Parameters

document_id (str) –

Return type

None

async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type

Optional[Sequence[Generation]]

async alookup_with_id(prompt: str, llm_string: str) → Optional[Tuple[str, Sequence[Generation]]][source]¶

Look up based on prompt and llm_string. On a hit, return (document_id, cached_entry).

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Tuple[str, Sequence[Generation]]]

async alookup_with_id_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) → Optional[Tuple[str, Sequence[Generation]]][source]¶

Look up based on prompt, deriving the llm_string from the given LLM (and optional stop sequences); on a hit, return (document_id, cached_entry).

Parameters
  • prompt (str) –

  • llm (LLM) –

  • stop (Optional[List[str]]) –

Return type

Optional[Tuple[str, Sequence[Generation]]]

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type

None
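
A sketch of writing and reading an entry through the async API; the llm_string below is an arbitrary placeholder (in normal use it comes from the LLM's serialized configuration), and semantic_cache is the instance built in the constructor sketch above:

    import asyncio

    from langchain_core.outputs import Generation

    async def cache_roundtrip(cache) -> None:
        llm_string = "example-llm-config"  # placeholder configuration string

        # Store a generation under the (prompt, llm_string) key.
        await cache.aupdate("What is 2 + 2?", llm_string, [Generation(text="4")])

        # A semantically similar prompt with the same llm_string should hit the cache.
        hit = await cache.alookup("what's two plus two?", llm_string)
        print(hit)

    asyncio.run(cache_roundtrip(semantic_cache))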

clear(**kwargs: Any) → None[source]¶

Clear the whole semantic cache.

Parameters

kwargs (Any) –

Return type

None

delete_by_document_id(document_id: str) → None[source]¶

Since this is a “similarity search” cache, a sensible invalidation pattern is to first look up an entry to get its ID and then delete it by that ID. This method performs the second step; a sketch of the full pattern follows this entry.

Parameters

document_id (str) –

Return type

None
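
A sketch of the two-step invalidation pattern described above, reusing semantic_cache and the placeholder llm_string from the earlier sketches:

    # Step 1: similarity lookup; on a hit this returns (document_id, cached_entry).
    match = semantic_cache.lookup_with_id("what's two plus two?", "example-llm-config")

    # Step 2: invalidate the matched entry by its document ID.
    if match is not None:
        document_id, cached_entry = match
        semantic_cache.delete_by_document_id(document_id)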

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type

Optional[Sequence[Generation]]

lookup_with_id(prompt: str, llm_string: str) → Optional[Tuple[str, Sequence[Generation]]][source]¶

Look up based on prompt and llm_string. On a hit, return (document_id, cached_entry).

Parameters
  • prompt (str) –

  • llm_string (str) –

Return type

Optional[Tuple[str, Sequence[Generation]]]

lookup_with_id_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) → Optional[Tuple[str, Sequence[Generation]]][source]¶

Look up based on prompt, deriving the llm_string from the given LLM (and optional stop sequences); on a hit, return (document_id, cached_entry).

Parameters
  • prompt (str) –

  • llm (LLM) –

  • stop (Optional[List[str]]) –

Return type

Optional[Tuple[str, Sequence[Generation]]]
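
A sketch of the LLM-mediated lookup, which derives the llm_string from the model's configuration so the caller does not build it by hand; OpenAI here is just one possible LLM, and semantic_cache is assumed from the constructor sketch:

    from langchain_openai import OpenAI

    llm = OpenAI()  # any LLM instance; its configuration determines the llm_string

    match = semantic_cache.lookup_with_id_through_llm(
        "what's two plus two?", llm, stop=None
    )
    if match is not None:
        document_id, cached_generations = match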

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

Parameters
  • prompt (str) – a string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the prompt into the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type

None

Examples using CassandraSemanticCache¶