langchain_community.chat_models.kinetica.ChatKinetica

Note

ChatKinetica implements the standard Runnable Interface. 🏃

class langchain_community.chat_models.kinetica.ChatKinetica[source]

Bases: BaseChatModel

Kinetica LLM Chat Model API.

Prerequisites for using this API:

  • The gpudb and typeguard packages installed.

  • A Kinetica DB instance.

  • Kinetica host specified in KINETICA_URL.

  • Kinetica login specified in KINETICA_USER and KINETICA_PASSWD.

  • An LLM context that specifies the tables and samples to use for inferencing.

This API is intended to interact with the Kinetica SqlAssist LLM that supports generation of SQL from natural language.

In the Kinetica LLM workflow you create an LLM context in the database that provides the information needed for inferencing, including tables, annotations, rules, and samples. Invoking load_messages_from_context() will retrieve the context information from the database so that it can be used to create a chat prompt.

The chat prompt consists of a SystemMessage and pairs of HumanMessage/AIMessage that contain the samples, which are question/SQL pairs. You can append additional sample pairs to this list, but it is not intended to facilitate a typical natural language conversation.

When you create a chain from the chat prompt and execute it, the Kinetica LLM will generate SQL from the input. Optionally, you can use KineticaSqlOutputParser to execute the SQL and return the result as a dataframe (see the sketch after the connection examples below).

The following example creates an LLM using the environment variables for the Kinetica connection. This will fail if the API is unable to connect to the database.

Example

from langchain_community.chat_models.kinetica import ChatKinetica
kinetica_llm = ChatKinetica()

If you prefer to pass connection information directly, you can create a connection with KineticaUtil.create_kdbc().

Example

from langchain_community.chat_models.kinetica import (
    ChatKinetica, KineticaUtil)

# url, user, and passwd hold your Kinetica connection details.
kdbc = KineticaUtil.create_kdbc(url=url, user=user, passwd=passwd)
kinetica_llm = ChatKinetica(kdbc=kdbc)
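
Once connected, you can run the full workflow described above. The following is a minimal sketch, assuming an LLM context named "demo_ctx" exists in the database; the optional KineticaSqlOutputParser executes the generated SQL and returns the result as a dataframe.

Example

from langchain_core.prompts import ChatPromptTemplate
from langchain_community.chat_models.kinetica import (
    ChatKinetica, KineticaSqlOutputParser)

kinetica_llm = ChatKinetica()

# Retrieve the SystemMessage and the sample HumanMessage/AIMessage
# pairs from the context, then append a placeholder for the question.
ctx_messages = kinetica_llm.load_messages_from_context("demo_ctx")
ctx_messages.append(("human", "{input}"))
prompt_template = ChatPromptTemplate.from_messages(ctx_messages)

# The parser executes the generated SQL against the database and
# returns the SQL along with a result dataframe.
chain = prompt_template | kinetica_llm | KineticaSqlOutputParser(
    kdbc=kinetica_llm.kdbc)
response = chain.invoke({"input": "How many rows are in the table?"})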

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param cache: Union[BaseCache, bool, None] = None

Whether to cache the response.

  • If True, will use the global cache.

  • If False, will not use a cache.

  • If None, will use the global cache if it's set, otherwise no cache.

  • If an instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
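
For example, caching can be disabled for a single instance while leaving any global cache configuration untouched (a minimal sketch):

from langchain_community.chat_models.kinetica import ChatKinetica

# cache=False bypasses any globally configured cache for this instance.
kinetica_llm = ChatKinetica(cache=False)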

param callback_manager: Optional[BaseCallbackManager] = None

[DEPRECATED] Callback manager to add to the run trace.

param callbacks: Callbacks = None

Callbacks to add to the run trace.

param custom_get_token_ids: Optional[Callable[[str], List[int]]] = None

Optional encoder to use for counting tokens.

param kdbc: Any = None

Kinetica DB connection.

param metadata: Optional[Dict[str, Any]] = None

Metadata to add to the run trace.

param tags: Optional[List[str]] = None

Tags to add to the run trace.

param verbose: bool [Optional]

Whether to print out response text.

__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[List[str]]) –

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –

  • kwargs (Any) –

Return type

BaseMessage

async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, run_id: Optional[UUID] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:

  1. Take advantage of batched calls.

  2. Get more output from the model than just the top generated value.

  3. Build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • messages (List[List[BaseMessage]]) – List of lists of messages; each inner list is one complete prompt.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • run_name (Optional[str]) –

  • run_id (Optional[UUID]) –


Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
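
A brief usage sketch, assuming kinetica_llm and the hypothetical "demo_ctx" context from the earlier examples:

import asyncio

from langchain_core.messages import HumanMessage

async def main() -> None:
    # Each inner list is one complete prompt; for ChatKinetica that
    # means the context messages followed by the user question.
    messages = kinetica_llm.load_messages_from_context("demo_ctx")
    messages.append(HumanMessage(content="How many rows are in the table?"))
    result = await kinetica_llm.agenerate([messages])
    print(result.generations[0][0].text)  # the generated SQL

asyncio.run(main())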

async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) LLMResult

Asynchronously pass a sequence of prompts and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:

  1. Take advantage of batched calls.

  2. Get more output from the model than just the top generated value.

  3. Build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult

async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

bind_tools(tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]], **kwargs: Any) Runnable[LanguageModelInput, BaseMessage]

Parameters
  • tools (Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]]) –

  • kwargs (Any) –

Return type

Runnable[LanguageModelInput, BaseMessage]

call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • message (str) –

  • stop (Optional[List[str]]) –

  • kwargs (Any) –

Return type

str

generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, run_id: Optional[UUID] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:

  1. Take advantage of batched calls.

  2. Get more output from the model than just the top generated value.

  3. Build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • messages (List[List[BaseMessage]]) – List of lists of messages; each inner list is one complete prompt.

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (Optional[List[str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • run_name (Optional[str]) –

  • run_id (Optional[UUID]) –


Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
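
A brief usage sketch, again assuming kinetica_llm and the hypothetical "demo_ctx" context from the earlier examples:

from langchain_core.messages import HumanMessage

# Each inner list is one complete prompt: the context messages
# followed by the user question.
messages = kinetica_llm.load_messages_from_context("demo_ctx")
messages.append(HumanMessage(content="How many rows are in the table?"))
result = kinetica_llm.generate([messages])
print(result.generations[0][0].text)  # the generated SQL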

generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) LLMResult

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:

  1. Take advantage of batched calls.

  2. Get more output from the model than just the top generated value.

  3. Build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).

Parameters
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type

LLMResult
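
A sketch using the prompt_template from the workflow example above; format_prompt() fills the "{input}" placeholder and returns the PromptValue that generate_prompt() expects:

prompt_value = prompt_template.format_prompt(
    input="How many rows are in the table?")
result = kinetica_llm.generate_prompt([prompt_value])
print(result.generations[0][0].text)  # the generated SQL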

get_num_tokens(text: str) int

Get the number of tokens present in the text.

Useful for checking if an input will fit in a model’s context window.

Parameters

text (str) – The string input to tokenize.

Returns

The integer number of tokens in the text.

Return type

int

get_num_tokens_from_messages(messages: List[BaseMessage]) int

Get the number of tokens in the messages.

Useful for checking if an input will fit in a model’s context window.

Parameters

messages (List[BaseMessage]) – The message inputs to tokenize.

Returns

The sum of the number of tokens across the messages.

Return type

int

get_token_ids(text: str) List[int]

Return the ordered ids of the tokens in a text.

Parameters

text (str) – The string input to tokenize.

Returns

A list of ids corresponding to the tokens in the text, in the order they occur in the text.

Return type

List[int]
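
A quick sketch of the three token helpers together; by default get_num_tokens() is the length of the list returned by get_token_ids():

from langchain_core.messages import HumanMessage

text = "How many rows are in the table?"
ids = kinetica_llm.get_token_ids(text)      # ordered token ids
n_text = kinetica_llm.get_num_tokens(text)  # == len(ids) by default
n_msgs = kinetica_llm.get_num_tokens_from_messages(
    [HumanMessage(content=text)])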

load_messages_from_context(context_name: str) List[source]

Load a LangChain prompt from a Kinetica context.

A Kinetica Context is an object created with the Kinetica Workbench UI or with SQL syntax. This function will convert the data in the context to a list of messages that can be used as a prompt. The messages will contain a SystemMessage followed by pairs of HumanMessage/AIMessage that contain the samples.

Parameters

context_name (str) – The name of an LLM context in the database.

Returns

A list of messages containing the information from the context.

Return type

List

classmethod load_messages_from_datafile(sa_datafile: Path) List[BaseMessage][source]

Load a LangChain prompt from a Kinetica context datafile.

Parameters

sa_datafile (Path) –

Return type

List[BaseMessage]
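
A sketch with a hypothetical path to a datafile exported from a SqlAssist context:

from pathlib import Path

from langchain_community.chat_models.kinetica import ChatKinetica

# Hypothetical datafile path; substitute your exported context file.
messages = ChatKinetica.load_messages_from_datafile(Path("./demo_ctx.json"))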

predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) str

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • text (str) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

str

predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) BaseMessage

[Deprecated]

Notes

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters
  • messages (List[BaseMessage]) –

  • stop (Optional[Sequence[str]]) –

  • kwargs (Any) –

Return type

BaseMessage

with_structured_output(schema: Union[Dict, Type[BaseModel]], *, include_raw: bool = False, **kwargs: Any) Runnable[LanguageModelInput, Union[Dict, BaseModel]]

Model wrapper that returns outputs formatted to match the given schema.

Parameters
  • schema (Union[Dict, Type[BaseModel]]) – The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If method is “function_calling” and schema is a dict, then the dict must match the OpenAI function-calling spec.

  • include_raw (bool) – If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys “raw”, “parsed”, and “parsing_error”.

  • kwargs (Any) –

Returns

If include_raw is True, then a dict with keys:

  • raw: BaseMessage

  • parsed: Optional[_DictOrPydantic]

  • parsing_error: Optional[BaseException]

If include_raw is False, then just _DictOrPydantic is returned, where _DictOrPydantic depends on the schema:

  • If schema is a Pydantic class, then _DictOrPydantic is the Pydantic class.

  • If schema is a dict, then _DictOrPydantic is a dict.

Return type

A Runnable that takes any ChatModel input and returns output in the form described under Returns above.

Example: Function-calling, Pydantic schema (method="function_calling", include_raw=False):
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = ChatModel(model="model-name", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: Function-calling, Pydantic schema (method="function_calling", include_raw=True):
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = ChatModel(model="model-name", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Example: Function-calling, dict schema (method="function_calling", include_raw=False):
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_tool

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

dict_schema = convert_to_openai_tool(AnswerWithJustification)
llm = ChatModel(model="model-name", temperature=0)
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
