langchain_google_vertexai.chat_models.ChatVertexAI

Note

ChatVertexAI implements the standard Runnable Interface. 🏃

class langchain_google_vertexai.chat_models.ChatVertexAI[source]

Bases: _VertexAICommon, BaseChatModel

Google Cloud Vertex AI chat model integration.

Setup:

You must have the langchain-google-vertexai Python package installed:

pip install -U langchain-google-vertexai

And either:
  • Have credentials configured for your environment (gcloud, workload identity, etc…)

  • Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable

This codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.

For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAC and https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth.
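
For example, a minimal sketch of pointing the client at a service account key via the environment variable; the file path here is a placeholder, and any of the standard Application Default Credentials mechanisms works equally well:

import os

# Hypothetical path to a service account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

# google.auth picks this variable up automatically when ChatVertexAI
# creates its Vertex AI client; no further wiring is needed.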

Key init args — completion params:
model: str

Name of the ChatVertexAI model to use, e.g. "gemini-1.5-flash-001", "gemini-1.5-pro-001", etc.

temperature: Optional[float]

Sampling temperature.

max_tokens: Optional[int]

Max number of tokens to generate.

stop: Optional[List[str]]

Default stop sequences.

safety_settings: Optional[Dict[vertexai.generative_models.HarmCategory, vertexai.generative_models.HarmBlockThreshold]]

The default safety settings to use for all generations.

Key init args — client params:
max_retries: int

Max number of retries.

credentials: Optional[google.auth.credentials.Credentials]

The default custom credentials to use when making API calls. If not provided, credentials will be ascertained from the environment.

project: Optional[str]

The default GCP project to use when making Vertex API calls.

location: str = "us-central1"

The default location to use when making API calls.

request_parallelism: int = 5

The amount of parallelism allowed for requests issued to VertexAI models. Default is 5.

base_url: Optional[str]

Base URL for API requests.

See full list of supported init args and their descriptions in the params section.
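
As a quick sketch, the client params above can be passed alongside the completion params at construction time; the project ID and commented endpoint below are placeholders:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(
    model="gemini-1.5-flash-001",
    project="my-gcp-project",   # placeholder GCP project ID
    location="us-central1",
    max_retries=3,
    # base_url="us-central1-aiplatform.googleapis.com",  # optional custom endpoint
)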

Instantiate:
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(
    model="gemini-1.5-flash-001",
    temperature=0,
    max_tokens=None,
    max_retries=6,
    stop=None,
    # other params...
)
Invoke:
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
llm.invoke(messages)
AIMessage(content="J'adore programmer. \n", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}, id='run-925ce305-2268-44c4-875f-dde9128520ad-0')

Stream:
for chunk in llm.stream(messages):
    print(chunk)
AIMessageChunk(content='J', response_metadata={'is_blocked': False, 'safety_ratings': [], 'citation_metadata': None}, id='run-9df01d73-84d9-42db-9d6b-b1466a019e89')
AIMessageChunk(content="'adore programmer.
“, response_metadata={‘is_blocked’: False, ‘safety_ratings’: [{‘category’: ‘HARM_CATEGORY_HATE_SPEECH’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}, {‘category’: ‘HARM_CATEGORY_DANGEROUS_CONTENT’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}, {‘category’: ‘HARM_CATEGORY_HARASSMENT’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}, {‘category’: ‘HARM_CATEGORY_SEXUALLY_EXPLICIT’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}], ‘citation_metadata’: None}, id=’run-9df01d73-84d9-42db-9d6b-b1466a019e89’)

AIMessageChunk(content=’’, response_metadata={‘is_blocked’: False, ‘safety_ratings’: [], ‘citation_metadata’: None, ‘usage_metadata’: {‘prompt_token_count’: 17, ‘candidates_token_count’: 7, ‘total_token_count’: 24}}, id=’run-9df01d73-84d9-42db-9d6b-b1466a019e89’)

stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
AIMessageChunk(content="J'adore programmer.

“, response_metadata={‘is_blocked’: False, ‘safety_ratings’: [{‘category’: ‘HARM_CATEGORY_HATE_SPEECH’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}, {‘category’: ‘HARM_CATEGORY_DANGEROUS_CONTENT’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}, {‘category’: ‘HARM_CATEGORY_HARASSMENT’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}, {‘category’: ‘HARM_CATEGORY_SEXUALLY_EXPLICIT’, ‘probability_label’: ‘NEGLIGIBLE’, ‘blocked’: False}], ‘citation_metadata’: None, ‘usage_metadata’: {‘prompt_token_count’: 17, ‘candidates_token_count’: 7, ‘total_token_count’: 24}}, id=’run-b7f7492c-4cb5-42d0-8fc3-dce9b293b0fb’)

Async:
await llm.ainvoke(messages)

# stream:
# async for chunk in llm.astream(messages):
#     print(chunk)

# batch:
# await llm.abatch([messages])
AIMessage(content="J'adore programmer. \n", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}, id='run-925ce305-2268-44c4-875f-dde9128520ad-0')
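
A minimal runnable sketch of the async surface, assuming an asyncio entry point and the llm and messages defined above:

import asyncio

async def main() -> None:
    # Single async call
    msg = await llm.ainvoke(messages)
    print(msg.content)

    # Async streaming
    async for chunk in llm.astream(messages):
        print(chunk.content, end="")

    # Async batching over several message lists
    results = await llm.abatch([messages, messages])
    print(len(results))

asyncio.run(main())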

Tool calling:
from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
[{'name': 'GetWeather',
  'args': {'location': 'Los Angeles, CA'},
  'id': '2a2401fa-40db-470d-83ce-4e52de910d9e'},
 {'name': 'GetWeather',
  'args': {'location': 'New York City, NY'},
  'id': '96761deb-ab7f-4ef9-b4b4-6d44562fc46e'},
 {'name': 'GetPopulation',
  'args': {'location': 'Los Angeles, CA'},
  'id': '9147d532-abee-43a2-adb5-12f164300484'},
 {'name': 'GetPopulation',
  'args': {'location': 'New York City, NY'},
  'id': 'c43374ea-bde5-49ca-8487-5b83ebeea1e6'}]

See ChatVertexAI.bind_tools() method for more.
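
A common next step (not shown above) is to execute the requested tools yourself and send the results back as ToolMessage objects so the model can compose a final answer. A hedged sketch, using a dummy lookup in place of real weather/population APIs:

from langchain_core.messages import HumanMessage, ToolMessage

query = "Which city is hotter today and which is bigger: LA or NY?"
history = [HumanMessage(query), ai_msg]

for tool_call in ai_msg.tool_calls:
    # In a real application you would dispatch to GetWeather / GetPopulation here;
    # this canned string is only illustrative.
    result = f"dummy result for {tool_call['name']}({tool_call['args']})"
    history.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

final_answer = llm_with_tools.invoke(history)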

Structured output:
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field

class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
Joke(setup='What do you call a cat that loves to bowl?', punchline='An alley cat!', rating=None)

See ChatVertexAI.with_structured_output() for more.

Image input:
import base64
import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ],
)
ai_msg = llm.invoke([message])
ai_msg.content
'The weather in this image appears to be sunny and pleasant. The sky is a bright blue with scattered white clouds, suggesting a clear and mild day. The lush green grass indicates recent rainfall or sufficient moisture. The absence of strong shadows suggests that the sun is high in the sky, possibly late afternoon. Overall, the image conveys a sense of tranquility and warmth, characteristic of a beautiful summer day.'

You can also point to GCS files, which is faster and more efficient because the image bytes are not transferred back and forth.

llm.invoke(
    [
        HumanMessage(
            [
                "What's in the image?",
                {
                    "type": "media",
                    "file_uri": "gs://cloud-samples-data/generative-ai/image/scones.jpg",
                    "mime_type": "image/jpeg",
                },
            ]
        )
    ]
).content
'The image is of five blueberry scones arranged on a piece of baking paper.

Here is a list of what is in the picture:
* Five blueberry scones: They are scattered across the parchment paper, dusted with powdered sugar.
* Two cups of coffee: Two white cups with saucers. One appears full, the other partially drunk.
* A bowl of blueberries: A brown bowl is filled with fresh blueberries, placed near the scones.
* A spoon: A silver spoon with the words "Let's Jam" rests on the paper.
* Pink peonies: Several pink peonies lie beside the scones, adding a touch of color.
* Baking paper: The scones, cups, bowl, and spoon are arranged on a piece of white baking paper, splattered with purple. The paper is crinkled and sits on a dark surface.

The image has a rustic and delicious feel, suggesting a cozy and enjoyable breakfast or brunch setting.'

Video input:

NOTE: Currently only supported for gemini-...-vision models.

llm = ChatVertexAI(model="gemini-1.0-pro-vision")

llm.invoke(
    [
        HumanMessage(
            [
                "What's in the video?",
                {
                    "type": "media",
                    "file_uri": "gs://cloud-samples-data/video/animals.mp4",
                    "mime_type": "video/mp4",
                },
            ]
        )
    ]
).content
'The video is about a new feature in Google Photos called "Zoomable Selfies". The feature allows users to take selfies with animals at the zoo. The video shows several examples of people taking selfies with animals, including a tiger, an elephant, and a sea otter. The video also shows how the feature works. Users simply need to open the Google Photos app and select the "Zoomable Selfies" option. Then, they need to choose an animal from the list of available animals. The app will then guide the user through the process of taking the selfie.'
Audio input:
from langchain_core.messages import HumanMessage

llm = ChatVertexAI(model="gemini-1.5-flash-001")

llm.invoke(
    [
        HumanMessage(
            [
                "What's this audio about?",
                {
                    "type": "media",
                    "file_uri": "gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                    "mime_type": "audio/mpeg",
                },
            ]
        )
    ]
).content
"This audio is an interview with two product managers from Google who work on Pixel feature drops. They discuss how feature drops are important for showcasing how Google devices are constantly improving and getting better. They also discuss some of the highlights of the January feature drop and the new features coming in the March drop for Pixel phones and Pixel watches. The interview concludes with discussion of how user feedback is extremely important to them in deciding which features to include in the feature drops. "
Token usage:
ai_msg = llm.invoke(messages)
ai_msg.usage_metadata
{'input_tokens': 17, 'output_tokens': 7, 'total_tokens': 24}
Response metadata:
ai_msg = llm.invoke(messages)
ai_msg.response_metadata
{'is_blocked': False,
 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False},
  {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False},
  {'category': 'HARM_CATEGORY_HARASSMENT',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False},
  {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False}],
 'usage_metadata': {'prompt_token_count': 17,
  'candidates_token_count': 7,
  'total_token_count': 24}}
Safety settings:
from langchain_google_vertexai import HarmBlockThreshold, HarmCategory

llm = ChatVertexAI(
    model="gemini-1.5-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

llm.invoke(messages).response_metadata
{'is_blocked': False,
 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False},
  {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False},
  {'category': 'HARM_CATEGORY_HARASSMENT',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False},
  {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
   'probability_label': 'NEGLIGIBLE',
   'blocked': False}],
 'usage_metadata': {'prompt_token_count': 17,
  'candidates_token_count': 7,
  'total_token_count': 24}}


param additional_headers: Optional[Dict[str, str]] = None

A key-value dictionary representing additional headers for the model call

param api_endpoint: Optional[str] = None (alias 'base_url')

Desired API endpoint, e.g., us-central1-aiplatform.googleapis.com

param api_transport: Optional[str] = None

The desired API transport method; can be either 'grpc' or 'rest'.

param cache: Union[BaseCache, bool, None] = None

Whether to cache the response.

  • If True, will use the global cache.

  • If False, will not use a cache.

  • If None, will use the global cache if it's set, otherwise no cache.

  • If instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
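
For instance, a sketch of enabling the global in-memory cache; the import paths assume a recent langchain-core release:

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_google_vertexai import ChatVertexAI

# Subsequent identical (non-streaming) calls are served from the cache.
set_llm_cache(InMemoryCache())

llm_cached = ChatVertexAI(model="gemini-1.5-flash-001", cache=True)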

param callback_manager: Optional[BaseCallbackManager] = None

[DEPRECATED] Callback manager to add to the run trace.

param callbacks: Callbacks = None

Callbacks to add to the run trace.

param client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None

A callback which returns client certificate bytes and private key bytes, both in PEM format.

param convert_system_message_to_human: bool = False

[Deprecated] Since new Gemini models support setting a System Message, setting this parameter to True is discouraged.

param credentials: Any = None

The default custom credentials (google.auth.credentials.Credentials) to use when making API calls. If not provided, credentials will be ascertained from the environment.

param custom_get_token_ids: Optional[Callable[[str], List[int]]] = None

Optional encoder to use for counting tokens.

param examples: Optional[List[BaseMessage]] = None

param full_model_name: Optional[str] = None

The full name of the model’s endpoint.

param location: str = 'us-central1'

The default location to use when making API calls.

param max_output_tokens: Optional[int] = None (alias 'max_tokens')

Token limit determines the maximum amount of text output from one prompt.

param max_retries: int = 6

The maximum number of retries to make when generating.

param metadata: Optional[Dict[str, Any]] = None

Metadata to add to the run trace.

param model_name: str = 'chat-bison' (alias 'model')

Underlying model name.

param n: int = 1

How many completions to generate for each prompt.

param project: Optional[str] = None

The default GCP project to use when making Vertex API calls.

param request_parallelism: int = 5

The amount of parallelism allowed for requests issued to VertexAI models.

param response_mime_type: Optional[str] = None

Optional. Output response mimetype of the generated candidate text. Only supported in Gemini 1.5 and later models. Supported mimetypes:
  • "text/plain": (default) Text output.

  • "application/json": JSON response in the candidates.

The model also needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
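
A sketch of requesting JSON output; note that the prompt itself still needs to ask for JSON:

json_llm = ChatVertexAI(
    model="gemini-1.5-flash-001",
    response_mime_type="application/json",
)

msg = json_llm.invoke(
    "Return a JSON object with keys 'city' and 'country' for the capital of France."
)
print(msg.content)  # e.g. '{"city": "Paris", "country": "France"}'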

param safety_settings: Optional['SafetySettingsType'] = None

The default safety settings to use for all generations.

For example:

from langchain_google_vertexai import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}

param stop: Optional[List[str]] = None (alias 'stop_sequences')

Optional list of stop words to use when generating.

param streaming: bool = False

Whether to stream the results or not.

param tags: Optional[List[str]] = None

Tags to add to the run trace.

param temperature: Optional[float] = None

Sampling temperature; it controls the degree of randomness in token selection.

param top_k: Optional[int] = None

How the model selects tokens for output: the next token is selected from among the top-k most probable tokens.

param top_p: Optional[float] = None

Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.

param tuned_model_name: Optional[str] = None

The name of a tuned model. If tuned_model_name is passed, model_name will be used to determine the model family.

param verbose: bool [Optional]

Whether to print out response text.

__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[List[str]]) –

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –

  • kwargs (Any) –

Return type

BaseMessage

async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, run_id: Optional[UUID] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. are building chains that are agnostic to the underlying language model

    type (e.g., pure text completion models vs chat models).

Parameters
  • messages (List[List[BaseMessage]]) – List of list of messages.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • run_name (Optional[str]) –

  • run_id (Optional[UUID]) –

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
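
As an illustration, a batched async call over two independent conversations might look like this sketch (the prompts are placeholders):

import asyncio

from langchain_core.messages import HumanMessage

async def run_batch() -> None:
    result = await llm.agenerate(
        [
            [HumanMessage("Translate 'good morning' to French.")],
            [HumanMessage("Translate 'good night' to French.")],
        ]
    )
    # One list of candidate generations per input conversation.
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(run_batch())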

async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. are building chains that are agnostic to the underlying language model

    type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult

async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

bind_tools(tools: Sequence[Union[Tool, Tool, _ToolDictLike, BaseTool, Type[BaseModel], FunctionDescription, Callable, FunctionDeclaration, Dict[str, Any]]], tool_config: Optional[_ToolConfigDict] = None, *, tool_choice: Optional[Union[dict, List[str], str, Literal['auto', 'none', 'any'], Literal[True], bool]] = None, **kwargs: Any) Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], BaseMessage][source]

Bind tool-like objects to this chat model.

Assumes model is compatible with Vertex tool-calling API.

Parameters
  • tools (Sequence[Union[Tool, Tool, _ToolDictLike, BaseTool, Type[BaseModel], FunctionDescription, Callable, FunctionDeclaration, Dict[str, Any]]]) – A list of tool definitions to bind to this chat model. Can be a pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation.

  • **kwargs (Any) – Any additional parameters to pass to the Runnable constructor.

  • tool_config (Optional[_ToolConfigDict]) –

  • tool_choice (Optional[Union[dict, List[str], str, Literal['auto', 'none', 'any'], Literal[True], bool]]) –

Return type

Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], BaseMessage]
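
For example, a sketch of forcing the model to call a particular tool via tool_choice, re-using the GetWeather schema from the tool-calling example above; exact handling of tool_choice values may differ between library versions:

# Force a GetWeather call regardless of how the question is phrased.
forced_llm = llm.bind_tools([GetWeather], tool_choice="GetWeather")
msg = forced_llm.invoke("How nice is the weather in Paris right now?")
print(msg.tool_calls)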

call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • message (str) –

  • stop (Optional[List[str]]) –

  • kwargs (Any) –

Return type

str

generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, run_id: Optional[UUID] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. are building chains that are agnostic to the underlying language model

    type (e.g., pure text completion models vs chat models).

Parameters
  • messages (List[List[BaseMessage]]) – List of list of messages.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • run_name (Optional[str]) –

  • run_id (Optional[UUID]) –

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult

generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. are building chains that are agnostic to the underlying language model

    type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
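
A small sketch of building a PromptValue with ChatPromptTemplate and passing it in; the prompt text is illustrative:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful translator. Translate the user sentence to French."),
        ("human", "{sentence}"),
    ]
)

# format_prompt returns a PromptValue, which generate_prompt accepts directly.
result = llm.generate_prompt([prompt.format_prompt(sentence="I love programming.")])
print(result.generations[0][0].text)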

get_num_tokens(text: str) int[source]

Get the number of tokens present in the text.

Parameters

text (str) –

Return type

int

get_num_tokens_from_messages(messages: List[BaseMessage]) int

Get the number of tokens in the messages.

Useful for checking if an input will fit in a model’s context window.

Parameters

messages (List[BaseMessage]) – The message inputs to tokenize.

Returns

The sum of the number of tokens across the messages.

Return type

int
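
For instance, a quick sketch of checking prompt size before a call with the two token-counting helpers; exact counts depend on the model's tokenizer:

from langchain_core.messages import HumanMessage, SystemMessage

n_text_tokens = llm.get_num_tokens("I love programming.")

n_message_tokens = llm.get_num_tokens_from_messages(
    [
        SystemMessage("You are a helpful translator."),
        HumanMessage("I love programming."),
    ]
)
print(n_text_tokens, n_message_tokens)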

get_token_ids(text: str) List[int]

Return the ordered ids of the tokens in a text.

Parameters

text (str) – The string input to tokenize.

Returns

A list of ids corresponding to the tokens in the text, in the order they occur in the text.

Return type

List[int]

predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

with_structured_output(schema: Union[Dict, Type[BaseModel]], *, include_raw: bool = False, **kwargs: Any) Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]][source]

Model wrapper that returns outputs formatted to match the given schema.

Parameters
  • schema (Union[Dict, Type[BaseModel]]) – The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If method is “function_calling” and schema is a dict, then the dict must match the OpenAI function-calling spec.

  • include_raw (bool) – If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys “raw”, “parsed”, and “parsing_error”.

  • kwargs (Any) –

Returns

A Runnable that takes any ChatModel input. If include_raw is True then a dict with keys — raw: BaseMessage, parsed: Optional[_DictOrPydantic], parsing_error: Optional[BaseException]. If include_raw is False then just _DictOrPydantic is returned, where _DictOrPydantic depends on the schema. If schema is a Pydantic class then _DictOrPydantic is the Pydantic class. If schema is a dict then _DictOrPydantic is a dict.

Return type

Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, List[str], Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]

Example: Pydantic schema, exclude raw:
from langchain_core.pydantic_v1 import BaseModel
from langchain_google_vertexai import ChatVertexAI

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = ChatVertexAI(model_name="gemini-pro", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> AnswerWithJustification(
#     answer='They weigh the same.', justification='A pound is a pound.'
# )
Example: Pydantic schema, include raw:
from langchain_core.pydantic_v1 import BaseModel
from langchain_google_vertexai import ChatVertexAI

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = ChatVertexAI(model_name="gemini-pro", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Example: Dict schema, exclude raw:
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_google_vertexai import ChatVertexAI

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

dict_schema = convert_to_openai_function(AnswerWithJustification)
llm = ChatVertexAI(model_name="gemini-pro", temperature=0)
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
property async_prediction_client: PredictionServiceAsyncClient

Returns PredictionServiceAsyncClient.

property prediction_client: PredictionServiceClient

Returns PredictionServiceClient.

task_executor: ClassVar[Optional[Executor]] = FieldInfo(exclude=True, extra={})

Examples using ChatVertexAI