langchain_community.cache.SQLiteCache
- class langchain_community.cache.SQLiteCache(database_path: str = '.langchain.db')[source]
Cache that uses SQLite as a backend.
Initialize by creating the engine and all tables.
Methods
__init__([database_path]): Initialize by creating the engine and all tables.
aclear(**kwargs): Clear cache that can take additional keyword arguments.
alookup(prompt, llm_string): Look up based on prompt and llm_string.
aupdate(prompt, llm_string, return_val): Update cache based on prompt and llm_string.
clear(**kwargs): Clear cache.
lookup(prompt, llm_string): Look up based on prompt and llm_string.
update(prompt, llm_string, return_val): Update based on prompt and llm_string.
- Parameters
database_path (str) – Path to the SQLite database file.
- __init__(database_path: str = '.langchain.db')[source]
Initialize by creating the engine and all tables.
- Parameters
database_path (str) – Path to the SQLite database file.
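A minimal usage sketch: construct the cache and register it as the global LLM cache so repeated calls with the same prompt and LLM configuration are served from SQLite. The set_llm_cache import path shown below assumes a recent LangChain layout (it is also re-exported from langchain.globals); adjust for your version.

from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

# The database file and its tables are created when the cache is constructed.
set_llm_cache(SQLiteCache(database_path=".langchain.db"))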
- async aclear(**kwargs: Any) → None
Clear cache that can take additional keyword arguments.
- Parameters
kwargs (Any) –
- Return type
None
- async alookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]
Look up based on prompt and llm_string.
A cache implementation is expected to generate a key from the 2-tuple of prompt and llm_string (e.g., by concatenating them with a delimiter).
- Parameters
prompt (str) – A string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the messages sent to the language model.
llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.
- Returns
On a cache miss, return None. On a cache hit, return the cached value. The cached value is a list of Generations (or subclasses).
- Return type
Optional[Sequence[Generation]]
- async aupdate(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None
Update cache based on prompt and llm_string.
The prompt and llm_string are used to generate a key for the cache. The key should match that used by the lookup method.
- Parameters
prompt (str) – A string representation of the prompt. In the case of a Chat model, the prompt is a non-trivial serialization of the messages sent to the language model.
llm_string (str) – A string representation of the LLM configuration. This is used to capture the invocation parameters of the LLM (e.g., model name, temperature, stop tokens, max tokens, etc.). These invocation parameters are serialized into a string representation.
return_val (Sequence[Generation]) – The value to be cached. The value is a list of Generations (or subclasses).
- Return type
None
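A hedged sketch of the async round trip: write an entry with aupdate and read it back with alookup. The Generation import path (langchain_core.outputs) and the literal llm_string are assumptions for illustration; in practice the LLM wrapper produces the llm_string from its invocation parameters.

import asyncio

from langchain_community.cache import SQLiteCache
from langchain_core.outputs import Generation

async def main() -> None:
    cache = SQLiteCache(database_path=".langchain.db")

    # The (prompt, llm_string) pair forms the cache key; the value is a
    # sequence of Generation objects.
    await cache.aupdate("hello", '{"model": "example"}', [Generation(text="hi there")])

    hit = await cache.alookup("hello", '{"model": "example"}')
    print(hit)    # [Generation(text='hi there')] on a cache hit
    miss = await cache.alookup("other prompt", '{"model": "example"}')
    print(miss)   # None on a cache miss

asyncio.run(main())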
- clear(**kwargs: Any) → None
Clear cache.
- Parameters
kwargs (Any) –
- Return type
None
- lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]
Look up based on prompt and llm_string.
- Parameters
prompt (str) –
llm_string (str) –
- Return type
Optional[Sequence[Generation]]
- update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None
Update based on prompt and llm_string.
- Parameters
prompt (str) –
llm_string (str) –
return_val (Sequence[Generation]) –
- Return type
None
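For completeness, a synchronous sketch of the same round trip, plus clear() to drop all cached entries. The llm_string value is again just an illustrative placeholder.

from langchain_community.cache import SQLiteCache
from langchain_core.outputs import Generation

cache = SQLiteCache(database_path=".langchain.db")

# Store a value under the (prompt, llm_string) key and read it back.
cache.update("2 + 2 =", '{"model": "example", "temperature": 0}', [Generation(text="4")])
print(cache.lookup("2 + 2 =", '{"model": "example", "temperature": 0}'))  # cache hit

# Remove every cached entry from the SQLite database.
cache.clear()
print(cache.lookup("2 + 2 =", '{"model": "example", "temperature": 0}'))  # None after clearing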