langchain_community.cache.OpenSearchSemanticCache¶

class langchain_community.cache.OpenSearchSemanticCache(opensearch_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶

Cache that uses an OpenSearch vector store backend.

Parameters
  • opensearch_url (str) – URL to connect to OpenSearch.

  • embedding (Embeddings) – Embedding provider for semantic encoding and search.

  • score_threshold (float) – Similarity score threshold for treating a cached entry as a semantic match. Defaults to 0.2.

Example: .. code-block:: python

    import langchain
    from langchain.cache import OpenSearchSemanticCache
    from langchain.embeddings import OpenAIEmbeddings

    langchain.llm_cache = OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200",
        embedding=OpenAIEmbeddings()
    )
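A fuller sketch of the cache in action is below. It assumes a local OpenSearch instance on port 9200, an OpenAI API key in the environment, and the legacy langchain.llms.OpenAI wrapper; treat it as illustrative, not canonical:

.. code-block:: python

    import langchain
    from langchain.cache import OpenSearchSemanticCache
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.llms import OpenAI  # assumed wrapper; any LLM works

    langchain.llm_cache = OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200",
        embedding=OpenAIEmbeddings(),
    )

    llm = OpenAI()
    # First call misses the cache; the generation is stored.
    print(llm.invoke("Tell me a joke"))
    # A semantically similar prompt may be answered from the cache,
    # subject to score_threshold.
    print(llm.invoke("Tell me one joke"))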

Methods

__init__(opensearch_url, embedding[, ...])

Initialize the semantic cache with the given OpenSearch URL and embedding provider.

aclear(**kwargs)

Clear the cache; accepts additional keyword arguments.

alookup(prompt, llm_string)

Look up based on prompt and llm_string.

aupdate(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

clear(**kwargs)

Clear semantic cache for a given llm_string.

lookup(prompt, llm_string)

Look up based on prompt and llm_string.

update(prompt, llm_string, return_val)

Update cache based on prompt and llm_string.

__init__(opensearch_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶
Parameters
  • opensearch_url (str) – URL to connect to OpenSearch.

  • embedding (Embeddings) – Embedding provider for semantic encoding and search.

  • score_threshold (float) – Similarity score threshold for treating a cached entry as a semantic match. Defaults to 0.2.

Example: .. code-block:: python

    import langchain
    from langchain.cache import OpenSearchSemanticCache
    from langchain.embeddings import OpenAIEmbeddings

    langchain.llm_cache = OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200",
        embedding=OpenAIEmbeddings()
    )

async aclear(**kwargs: Any) → None¶

Clear the cache; accepts additional keyword arguments.

Parameters

kwargs (Any) –

Return type

None
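A minimal async sketch, assuming a reachable OpenSearch node and that clear() accepts the llm_string keyword implied by its docstring; the llm_string value here is hypothetical (LangChain normally derives it from the LLM's invocation parameters):

.. code-block:: python

    import asyncio

    from langchain.cache import OpenSearchSemanticCache
    from langchain.embeddings import OpenAIEmbeddings

    cache = OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200",
        embedding=OpenAIEmbeddings(),
    )
    # Hypothetical llm_string; normally generated by LangChain.
    llm_string = "model=gpt-3.5-turbo temperature=0.0"

    async def main() -> None:
        # Delegates to clear(); llm_string keyword assumed from its docstring.
        await cache.aclear(llm_string=llm_string)

    asyncio.run(main())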

async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶

Look up based on prompt and llm_string.

A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).

Parameters
  • prompt (str) – A string representation of the prompt. In the case of a chat model, this is a non-trivial serialization of the messages sent to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

Returns

On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).

Return type

Optional[Sequence[Generation]]
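An async read sketch, reusing the cache and hypothetical llm_string placeholders from the aclear example above:

.. code-block:: python

    import asyncio

    async def main() -> None:
        generations = await cache.alookup("What is 2 + 2?", llm_string)
        if generations is None:
            print("cache miss")
        else:
            # Each element is a Generation (or subclass) with a .text field.
            print("cache hit:", [g.text for g in generations])

    asyncio.run(main())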

async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶

Update cache based on prompt and llm_string.

The prompt and llm_string are used to generate a key for the cache. The key should match that of the lookup method.

Parameters
  • prompt (str) – A string representation of the prompt. In the case of a chat model, this is a non-trivial serialization of the messages sent to the language model.

  • llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.

  • return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).

Return type

None
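The matching async write sketch, with the same placeholders as above; Generation is imported from langchain_core.outputs:

.. code-block:: python

    import asyncio

    from langchain_core.outputs import Generation

    async def main() -> None:
        # Store a generation under the (prompt, llm_string) key so a later
        # semantically similar lookup can return it.
        await cache.aupdate(
            "What is 2 + 2?",
            llm_string,
            [Generation(text="2 + 2 = 4")],
        )

    asyncio.run(main())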

clear(**kwargs: Any) → None[source]¶

Clear semantic cache for a given llm_string.

Parameters

kwargs (Any) –

Return type

None
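The synchronous counterpart of the aclear sketch; the llm_string keyword remains an assumption drawn from the docstring:

.. code-block:: python

    # Remove the cached entries associated with this llm_string
    # (keyword assumed from the docstring; placeholders as above).
    cache.clear(llm_string=llm_string)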

lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶

Look up based on prompt and llm_string.

Parameters
  • prompt (str) – A string representation of the prompt.

  • llm_string (str) – A string representation of the LLM configuration.

Return type

Optional[Sequence[Generation]]
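A synchronous read sketch, again reusing the placeholders set up under aclear:

.. code-block:: python

    generations = cache.lookup("What is 2 + 2?", llm_string)
    if generations is None:
        print("cache miss")
    else:
        print("cache hit:", [g.text for g in generations])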

update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶

Update cache based on prompt and llm_string.

Parameters
  • prompt (str) – A string representation of the prompt.

  • llm_string (str) – A string representation of the LLM configuration.

  • return_val (Sequence[Generation]) – The value to be cached, a list of Generations (or subclasses).

Return type

None
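And the synchronous write path, with Generation from langchain_core.outputs and the same placeholder names:

.. code-block:: python

    from langchain_core.outputs import Generation

    # Cache a generation; a later lookup with a semantically similar
    # prompt may return it, subject to score_threshold.
    cache.update(
        "What is 2 + 2?",
        llm_string,
        [Generation(text="2 + 2 = 4")],
    )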