langchain_community.llms.yandex.YandexGPT

Note

YandexGPT implements the standard Runnable Interface. 🏃

class langchain_community.llms.yandex.YandexGPT[source]

Bases: _BaseYandexGPT, LLM

Yandex large language models.

To use, you should have the yandexcloud Python package installed.

There are two authentication options for the service account with the ai.languageModels.user role:

  • You can specify the token in the constructor parameter iam_token or in the environment variable YC_IAM_TOKEN.

  • You can specify the key in the constructor parameter api_key or in the environment variable YC_API_KEY.

To use the default model, specify the folder ID in the parameter folder_id or in the environment variable YC_FOLDER_ID. Alternatively, specify the model URI in the constructor parameter model_uri.

Example

from langchain_community.llms import YandexGPT
yandex_gpt = YandexGPT(iam_token="t1.9eu...", folder_id="b1g...")
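
A minimal sketch of the other setup paths: authenticating with an API key (or the YC_API_KEY environment variable) and addressing a specific model via its URI. The key, folder ID, and URI below are placeholders, and the URI format is an assumption based on Yandex Cloud's gpt://<folder_id>/<model>/<version> scheme.

import os
from langchain_community.llms import YandexGPT

# Assumption: placeholder credentials, not real values.
os.environ["YC_API_KEY"] = "AQVN..."
yandex_gpt = YandexGPT(folder_id="b1g...")

# Or point at a model explicitly through its URI.
yandex_gpt_uri = YandexGPT(api_key="AQVN...", model_uri="gpt://b1g.../yandexgpt/latest")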

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param api_key: SecretStr = ''

Yandex Cloud API key for a service account with the ai.languageModels.user role.

Constraints
  • type = string

  • writeOnly = True

  • format = password

param cache: Union[BaseCache, bool, None] = None

Whether to cache the response.

  • If true, will use the global cache.

  • If false, will not use a cache.

  • If None, will use the global cache if it’s set, otherwise no cache.

  • If instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
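
A minimal caching sketch, assuming a recent langchain_core that exposes InMemoryCache and set_llm_cache:

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Register a global in-memory cache; it is used when cache=True or cache=None.
set_llm_cache(InMemoryCache())
llm = YandexGPT(folder_id="b1g...", cache=True)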

param callback_manager: Optional[BaseCallbackManager] = None

[DEPRECATED]

param callbacks: Callbacks = None

Callbacks to add to the run trace.

param custom_get_token_ids: Optional[Callable[[str], List[int]]] = None

Optional encoder to use for counting tokens.

param disable_request_logging: bool = False

The YandexGPT API logs all request data by default. If you provide personal data or confidential information, disable logging.

param folder_id: str = ''

Yandex Cloud folder ID

param iam_token: SecretStr = ''

Yandex Cloud IAM token for a service or user account with the ai.languageModels.user role.

Constraints
  • type = string

  • writeOnly = True

  • format = password

param max_retries: int = 6

Maximum number of retries to make when generating.

param max_tokens: int = 7400

Sets the maximum total number of tokens used for both the input prompt and the generated response. Must be greater than zero and must not exceed 7400 tokens.

param metadata: Optional[Dict[str, Any]] = None

Metadata to add to the run trace.

param model_name: str = 'yandexgpt-lite'

Model name to use.

param model_uri: str = ''

Model URI to use.

param model_version: str = 'latest'

Model version to use.

param sleep_interval: float = 1.0

Delay in seconds between API requests.

param stop: Optional[List[str]] = None

Sequences at which the model will stop generating the completion.

param tags: Optional[List[str]] = None

Tags to add to the run trace.

param temperature: float = 0.6

What sampling temperature to use. Should be a floating-point number between 0 and 1, inclusive.

param url: str = 'llm.api.cloud.yandex.net:443'

The url of the API.

param verbose: bool [Optional]

Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) str

[Deprecated] Check the cache and run the LLM on the given prompt and input.

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • prompt (str) –

  • stop (Optional[List[str]]) –

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • kwargs (Any) –

Return type

str
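
Since __call__ is deprecated, a minimal sketch of the equivalent invoke call, assuming the yandex_gpt instance from the constructor example above; the prompt and stop sequence are illustrative only.

# invoke supersedes __call__ and returns the completion as a string.
text = yandex_gpt.invoke("What is the capital of France?", stop=["\n"])
print(text)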

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[str]) – List of string prompts.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[Union[List[str], List[List[str]]]]) –

  • metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –

  • run_name (Optional[Union[str, List[str]]]) –

  • run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
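
A minimal async sketch, assuming the yandex_gpt instance from the constructor example above; the prompts are illustrative only.

import asyncio

async def main() -> None:
    # Pass several prompts in one call; each entry in result.generations
    # holds the candidate Generations for the corresponding prompt.
    result = await yandex_gpt.agenerate(
        ["Tell me a joke.", "Summarize LangChain in one sentence."]
    )
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())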

async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult

async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[str]) – List of string prompts.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[Union[List[str], List[List[str]]]]) –

  • metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –

  • run_name (Optional[Union[str, List[str]]]) –

  • run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
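
A minimal batched sketch with stop words, assuming the yandex_gpt instance from the constructor example above:

result = yandex_gpt.generate(
    ["Translate 'hello' into French.", "Translate 'hello' into Spanish."],
    stop=["\n\n"],  # cut generation at the first blank line
)
# result.generations[i][j] is the j-th candidate for the i-th prompt.
print(result.generations[0][0].text)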

generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. get more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
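
A minimal sketch of building a PromptValue with a prompt template, assuming the yandex_gpt instance from the constructor example above:

from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template("Write a haiku about {topic}.")
# format_prompt returns a PromptValue, which generate_prompt accepts directly.
result = yandex_gpt.generate_prompt([template.format_prompt(topic="autumn")])
print(result.generations[0][0].text)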

get_num_tokens(text: str) int

Get the number of tokens present in the text.

Useful for checking if an input will fit in a model’s context window.

Parameters

text (str) – The string input to tokenize.

Returns

The integer number of tokens in the text.

Return type

int
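
A minimal sketch of a pre-flight length check against the max_tokens limit; note that the default counter uses a generic tokenizer, so the result may differ from the YandexGPT service's own tokenization.

prompt = "Summarize the following document: ..."
if yandex_gpt.get_num_tokens(prompt) > yandex_gpt.max_tokens:
    raise ValueError("Prompt is too long for the configured token limit.")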

get_num_tokens_from_messages(messages: List[BaseMessage]) int

Get the number of tokens in the messages.

Useful for checking if an input will fit in a model’s context window.

Parameters

messages (List[BaseMessage]) – The message inputs to tokenize.

Returns

The sum of the number of tokens across the messages.

Return type

int

get_token_ids(text: str) List[int]

Return the ordered ids of the tokens in a text.

Parameters

text (str) – The string input to tokenize.

Returns

A list of ids corresponding to the tokens in the text, in the order they occur in the text.

Return type

List[int]
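
The default encoder may not match the YandexGPT tokenizer; a sketch of plugging in a custom counter through custom_get_token_ids, where the whitespace "tokenizer" is purely illustrative:

def whitespace_token_ids(text: str) -> list[int]:
    # Hypothetical encoder: one fake id per whitespace-separated word.
    return list(range(len(text.split())))

llm = YandexGPT(folder_id="b1g...", custom_get_token_ids=whitespace_token_ids)
print(llm.get_token_ids("count these tokens"))  # [0, 1, 2]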

predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

save(file_path: Union[Path, str]) None

Save the LLM.

Parameters

file_path (Union[Path, str]) – Path to file to save the LLM to.

Return type

None

Example

llm.save(file_path="path/llm.yaml")

with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]

Not implemented on this class.

Parameters
  • schema (Union[Dict, Type[BaseModel]]) –

  • kwargs (Any) –

Return type

Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]

Examples using YandexGPT