langchain.memory.token_buffer.ConversationTokenBufferMemory
- class langchain.memory.token_buffer.ConversationTokenBufferMemory[source]
Bases: BaseChatMemory

Conversation chat memory with a token limit: once the buffer exceeds max_token_limit tokens, the oldest messages are pruned.
- param ai_prefix: str = 'AI'
- param chat_memory: BaseChatMessageHistory [Optional]
- param human_prefix: str = 'Human'
- param input_key: Optional[str] = None
- param llm: BaseLanguageModel [Required]
- param max_token_limit: int = 2000
- param memory_key: str = 'history'
- param output_key: Optional[str] = None
- param return_messages: bool = False
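To illustrate how max_token_limit interacts with save_context and load_memory_variables, here is a simplified, self-contained sketch of the token-buffer idea. It is not the LangChain implementation: it uses a naive whitespace token count as a stand-in for the llm's tokenizer, and the class and method names mirror the API above for readability only.

```python
class TokenBufferSketch:
    """Simplified stand-in for a token-limited conversation buffer.

    Not the LangChain class; illustrates the pruning behavior only.
    """

    def __init__(self, max_token_limit=2000, human_prefix="Human", ai_prefix="AI"):
        self.max_token_limit = max_token_limit
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix
        self.messages = []  # list of (prefix, text) pairs

    def _num_tokens(self):
        # Naive whitespace count; the real class asks the llm for token counts.
        return sum(len(text.split()) for _, text in self.messages)

    def save_context(self, inputs, outputs):
        """Append one human/AI turn, then prune oldest messages over the limit."""
        self.messages.append((self.human_prefix, inputs["input"]))
        self.messages.append((self.ai_prefix, outputs["output"]))
        while self._num_tokens() > self.max_token_limit and self.messages:
            self.messages.pop(0)  # drop the oldest message first

    def load_memory_variables(self, inputs):
        """Return the surviving history under the 'history' memory key."""
        history = "\n".join(f"{p}: {t}" for p, t in self.messages)
        return {"history": history}
```

With a limit of 6 tokens, a second turn pushes the buffer over the limit and the entire first turn is pruned, so only the most recent exchange survives.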
- async aclear() → None
Clear memory contents.
- Return type
None
- async aload_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]
Async return key-value pairs given the text input to the chain.
- Parameters
inputs (Dict[str, Any]) – The inputs to the chain.
- Returns
A dictionary of key-value pairs.
- Return type
Dict[str, Any]
- async asave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
Save context from this conversation to buffer.
- Parameters
inputs (Dict[str, Any]) – The inputs to the chain.
outputs (Dict[str, str]) – The outputs from the chain.
- Return type
None
- clear() → None
Clear memory contents.
- Return type
None
- load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any] [source]
Return history buffer.
- Parameters
inputs (Dict[str, Any]) – The inputs to the chain.
- Return type
Dict[str, Any]
- save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None [source]
Save context from this conversation to the buffer, pruning the oldest messages once the buffer exceeds max_token_limit tokens.
- Parameters
inputs (Dict[str, Any]) – The inputs to the chain.
outputs (Dict[str, str]) – The outputs from the chain.
- Return type
None
- property buffer: Any
The memory buffer: a string or a list of messages, depending on return_messages.
- property buffer_as_messages: List[BaseMessage]
Exposes the buffer as a list of messages when return_messages is True.
- property buffer_as_str: str
Exposes the buffer as a string when return_messages is False.
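The two representations above differ only in form, not content. The sketch below shows the selection logic with a hypothetical minimal Message type (not LangChain's BaseMessage) and a plain function in place of the properties; the role-to-prefix mapping is an assumption for illustration.

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Hypothetical minimal message type, standing in for BaseMessage."""
    role: str     # "human" or "ai"
    content: str


def buffer(messages, return_messages, human_prefix="Human", ai_prefix="AI"):
    """Return the buffer as messages or as a string, mimicking the properties."""
    if return_messages:
        return messages  # like buffer_as_messages
    prefix = {"human": human_prefix, "ai": ai_prefix}
    # like buffer_as_str: one "Prefix: content" line per message
    return "\n".join(f"{prefix[m.role]}: {m.content}" for m in messages)
```

The same underlying messages yield either the list itself or a prefixed transcript string.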