langchain_core.language_models.base.BaseLanguageModel

Note

BaseLanguageModel implements the standard Runnable Interface. 🏃

class langchain_core.language_models.base.BaseLanguageModel[source]

Bases: RunnableSerializable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], LanguageModelOutputVar], ABC

Abstract base class for interfacing with language models.

All language model wrappers inherit from BaseLanguageModel.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param cache: Union[BaseCache, bool, None] = None

Whether to cache the response.

  • If True, will use the global cache.

  • If False, will not use a cache.

  • If None, will use the global cache if it’s set, otherwise no cache.

  • If an instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
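
For example (a minimal sketch; InMemoryCache and set_llm_cache come from langchain_core, while the concrete model class is left as an assumption):

    from langchain_core.caches import InMemoryCache
    from langchain_core.globals import set_llm_cache

    # Option 1: set a global cache; models with cache=None or cache=True use it.
    set_llm_cache(InMemoryCache())

    # Option 2: give one model its own cache instance (any BaseLanguageModel
    # subclass accepts the same parameter; SomeChatModel is a placeholder).
    # llm = SomeChatModel(cache=InMemoryCache())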

param callbacks: Callbacks = None

Callbacks to add to the run trace.

param custom_get_token_ids: Optional[Callable[[str], List[int]]] = None

Optional custom function that encodes a string into its token ids; when provided, it is used for token counting in place of the model’s default tokenizer.
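
A sketch of supplying one, assuming the tiktoken package is installed (the model class name is a placeholder):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    # Used by get_token_ids and get_num_tokens instead of the default tokenizer.
    # llm = SomeModel(custom_get_token_ids=lambda text: enc.encode(text))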

param metadata: Optional[Dict[str, Any]] = None

Metadata to add to the run trace.

param tags: Optional[List[str]] = None

Tags to add to the run trace.

param verbose: bool [Optional]

Whether to print out response text.

abstract async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Callbacks = None, **kwargs: Any) LLMResult[source]

Asynchronously pass a sequence of prompts and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:
  1. want to take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Callbacks) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
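
A minimal async sketch (assumes `llm` is an instance of a concrete BaseLanguageModel subclass; the prompt text is illustrative):

    import asyncio
    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Tell me a joke about {topic}")

    async def main(llm):
        result = await llm.agenerate_prompt(
            [prompt.format_prompt(topic="cats"), prompt.format_prompt(topic="dogs")],
            stop=["\n\n"],
        )
        # result.generations holds one list of candidate Generations per prompt.
        for candidates in result.generations:
            print(candidates[0].text)  # top candidate for each input

    # asyncio.run(main(llm))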

abstract async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str[source]

[Deprecated] Asynchronously pass a string to the model and return a string.

Use this method when calling pure text generation models and only the top candidate generation is needed.

Parameters
  • text (str) – String input to pass to the model.

  • stop (Optional[Sequence[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

Top model prediction as a string.

Return type

str

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.
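
Migration sketch (inside an async function, with `llm` a concrete model instance):

    # Before (deprecated):
    # text = await llm.apredict("Say hello", stop=["\n"])
    # After:
    text = await llm.ainvoke("Say hello", stop=["\n"])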

abstract async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage[source]

[Deprecated] Asynchronously pass messages to the model and return a message.

Use this method when calling chat models and only the top candidate generation is needed.

Parameters
  • messages (List[BaseMessage]) – A sequence of chat messages corresponding to a single model input.

  • stop (Optional[Sequence[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

Top model prediction as a message.

Return type

BaseMessage

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.
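
Migration sketch (inside an async function, with `chat_model` a concrete chat model instance):

    from langchain_core.messages import HumanMessage

    # Before (deprecated):
    # msg = await chat_model.apredict_messages([HumanMessage(content="Hi")])
    # After:
    msg = await chat_model.ainvoke([HumanMessage(content="Hi")])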

abstract generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Callbacks = None, **kwargs: Any) LLMResult[source]

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you:
  1. want to take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Callbacks) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
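
A short synchronous sketch (again assuming `llm` is a concrete model instance):

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Translate to French: {text}")
    result = llm.generate_prompt([prompt.format_prompt(text="Hello")])
    print(result.generations[0][0].text)  # top candidate for the first prompt
    print(result.llm_output)              # provider-specific output, if any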

get_num_tokens(text: str) int[source]

Get the number of tokens present in the text.

Useful for checking if an input will fit in a model’s context window.

Parameters

text (str) – The string input to tokenize.

Returns

The integer number of tokens in the text.

Return type

int
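
For example, a pre-flight budget check (the context window and output budget below are hypothetical numbers, not values read from the model):

    context_window, max_new_tokens = 4096, 512  # hypothetical limits
    prompt_text = "Summarize the following document: ..."
    if llm.get_num_tokens(prompt_text) > context_window - max_new_tokens:
        raise ValueError("Prompt is too long for the model's context window")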

get_num_tokens_from_messages(messages: List[BaseMessage]) int[source]

Get the number of tokens in the messages.

Useful for checking if an input will fit in a model’s context window.

Parameters

messages (List[BaseMessage]) – The message inputs to tokenize.

Returns

The sum of the number of tokens across the messages.

Return type

int
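
For example (assumes `chat_model` is a concrete chat model; the base implementation sums per-message counts, and integrations may override it to account for provider-specific message formatting):

    from langchain_core.messages import HumanMessage, SystemMessage

    messages = [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="What is the capital of France?"),
    ]
    total = chat_model.get_num_tokens_from_messages(messages)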

get_token_ids(text: str) List[int][source]

Return the ordered ids of the tokens in a text.

Parameters

text (str) – The string input to tokenize.

Returns

A list of ids corresponding to the tokens in the text, in the order they occur in the text.

Return type

List[int]
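
By default get_num_tokens is the length of this list, although subclasses may override either method:

    ids = llm.get_token_ids("hello world")
    # Default relationship; a subclass may override either method.
    assert llm.get_num_tokens("hello world") == len(ids)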

abstract predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str[source]

[Deprecated] Pass a single string input to the model and return a string.

Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.

Parameters
  • text (str) – String input to pass to the model.

  • stop (Optional[Sequence[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

Top model prediction as a string.

Return type

str

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.
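
Migration sketch (with `llm` a concrete model instance):

    # Before (deprecated):
    # text = llm.predict("Say hello", stop=["\n"])
    # After:
    text = llm.invoke("Say hello", stop=["\n"])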

abstract predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage[source]

[Deprecated] Pass a message sequence to the model and return a message.

Use this method when passing in chat messages. If you want to pass in raw text, use predict.

Parameters
  • messages (List[BaseMessage]) – A sequence of chat messages corresponding to a single model input.

  • stop (Optional[Sequence[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

Top model prediction as a message.

Return type

BaseMessage

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.
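
Migration sketch (with `chat_model` a concrete chat model instance):

    from langchain_core.messages import HumanMessage

    # Before (deprecated):
    # msg = chat_model.predict_messages([HumanMessage(content="Hi")])
    # After:
    msg = chat_model.invoke([HumanMessage(content="Hi")])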

with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]][source]

Not implemented on this class.

Parameters
  • schema (Union[Dict, Type[BaseModel]]) –

  • kwargs (Any) –

Return type

Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]
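
On BaseLanguageModel itself this method is not implemented; chat model subclasses typically override it. A sketch against such a subclass (here `chat_model` is assumed to implement with_structured_output; depending on your langchain-core version, the schema may need to be a langchain_core.pydantic_v1 model instead):

    from pydantic import BaseModel

    class Joke(BaseModel):
        setup: str
        punchline: str

    structured = chat_model.with_structured_output(Joke)
    joke = structured.invoke("Tell me a joke about cats")  # -> Joke instance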